Columns: title (string, length 4 to 168), content (string, length 7 to 1.74M), commands (list, length 1 to 5.62k), url (string, length 79 to 342)
function::task_stime_tid
function::task_stime_tid Name function::task_stime_tid - System time of the given task Synopsis Arguments tid Thread id of the given task Description Returns the system time of the given task in cputime, or zero if the task doesn't exist. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task.
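As a hedged illustration of calling this tapset function (the 5-second probe interval and the -x target PID are arbitrary choices, not taken from the reference entry), a SystemTap one-liner can print the value for a single-threaded target, where the thread ID equals the process ID passed with -x:
# Print the accumulated system time (in cputime units) of the target thread every 5 seconds.
# Assumes a single-threaded target so that target() is also a valid thread ID.
stap -x <PID> -e 'probe timer.s(5) { printf("stime (cputime): %d\n", task_stime_tid(target())) }'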
[ "function task_stime_tid:long(tid:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-stime-tid
5.9.3. Checking the Default SELinux Context
5.9.3. Checking the Default SELinux Context Use the matchpathcon command to check if files and directories have the correct SELinux context. From the matchpathcon (8) manual page: " matchpathcon queries the system policy and outputs the default security context associated with the file path." [10] . The following example demonstrates using the matchpathcon command to verify that files in the /var/www/html/ directory are labeled correctly: As the Linux root user, run the touch /var/www/html/file{1,2,3} command to create three files ( file1 , file2 , and file3 ). These files inherit the httpd_sys_content_t type from the /var/www/html/ directory: As the Linux root user, run the chcon -t samba_share_t /var/www/html/file1 command to change the file1 type to samba_share_t . Note that the Apache HTTP Server cannot read files or directories labeled with the samba_share_t type. The matchpathcon -V option compares the current SELinux context to the correct, default context in SELinux policy. Run the matchpathcon -V /var/www/html/* command to check all files in the /var/www/html/ directory: The following output from the matchpathcon command shows that file1 is labeled with the samba_share_t type, but should be labeled with the httpd_sys_content_t type: To resolve the label problem and allow the Apache HTTP Server access to file1 , as the Linux root user, run the restorecon -v /var/www/html/file1 command: [10] The matchpathcon (8) manual page, as shipped with the libselinux-utils package in Red Hat Enterprise Linux, is written by Daniel Walsh. Any edits or changes in this version were done by Murray McAllister.
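A hedged companion sketch to the procedure above, assuming the libselinux-utils and policycoreutils packages are installed and reusing the same illustrative paths: check the whole web root in one pass, then recursively restore any file whose label differs from the policy default.
# Compare every file in the web root against the default context from SELinux policy.
matchpathcon -V /var/www/html/*
# Recursively (-R) reset mislabeled files to their default context, reporting each change (-v).
restorecon -Rv /var/www/html/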
[ "~]# touch /var/www/html/file{1,2,3} ~]# ls -Z /var/www/html/ -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file1 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file2 -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 file3", "~]USD matchpathcon -V /var/www/html/* /var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0 /var/www/html/file2 verified. /var/www/html/file3 verified.", "/var/www/html/file1 has context unconfined_u:object_r:samba_share_t:s0, should be system_u:object_r:httpd_sys_content_t:s0", "~]# restorecon -v /var/www/html/file1 restorecon reset /var/www/html/file1 context unconfined_u:object_r:samba_share_t:s0->system_u:object_r:httpd_sys_content_t:s0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-Security-Enhanced_Linux-Maintaining_SELinux_Labels_-Checking_the_Default_SELinux_Context
Chapter 14. Rebooting the environment
Chapter 14. Rebooting the environment It might become necessary to reboot the environment, for example, when you need to modify physical servers or recover from a power outage. In these types of situations, it is important to make sure your Ceph Storage nodes boot correctly. You must boot the nodes in the following order: Boot all Ceph Monitor nodes first - This ensures the Ceph Monitor service is active in your high availability Ceph cluster. By default, the Ceph Monitor service is installed on the Controller node. If the Ceph Monitor is separate from the Controller in a custom role, make sure this custom Ceph Monitor role is active. Boot all Ceph Storage nodes - This ensures the Ceph OSD cluster can connect to the active Ceph Monitor cluster on the Controller nodes. 14.1. Rebooting a Ceph Storage (OSD) cluster Complete the following steps to reboot a cluster of Ceph Storage (OSD) nodes. Prerequisites On a Ceph Monitor or Controller node that is running the ceph-mon service, check that the Red Hat Ceph Storage cluster status is healthy and the pg status is active+clean : $ sudo cephadm shell -- ceph status If the Ceph cluster is healthy, it returns a status of HEALTH_OK . If the Ceph cluster status is unhealthy, it returns a status of HEALTH_WARN or HEALTH_ERR . For troubleshooting guidance, see the Red Hat Ceph Storage 5 Troubleshooting Guide or the Red Hat Ceph Storage 6 Troubleshooting Guide . Procedure Log in to a Ceph Monitor or Controller node that is running the ceph-mon service, and disable Ceph Storage cluster rebalancing temporarily: $ sudo cephadm shell -- ceph osd set noout $ sudo cephadm shell -- ceph osd set norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you set the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring . Select the first Ceph Storage node that you want to reboot and log in to the node. Reboot the node: Wait until the node boots. Log in to the node and check the Ceph cluster status: $ sudo cephadm shell -- ceph status Check that the pgmap reports all pgs as normal ( active+clean ). Log out of the node, reboot the next node, and check its status. Repeat this process until you have rebooted all Ceph Storage nodes. When complete, log in to a Ceph Monitor or Controller node that is running the ceph-mon service and enable Ceph cluster rebalancing: $ sudo cephadm shell -- ceph osd unset noout $ sudo cephadm shell -- ceph osd unset norebalance Note If you have a multistack or distributed compute node (DCN) architecture, you must specify the Ceph cluster name when you unset the noout and norebalance flags. For example: sudo cephadm shell -c /etc/ceph/<cluster>.conf -k /etc/ceph/<cluster>.client.keyring Perform a final status check to verify that the cluster reports HEALTH_OK : $ sudo cephadm shell ceph status 14.2. Rebooting Ceph Storage OSDs to enable connectivity to the Ceph Monitor service If a situation occurs where all overcloud nodes boot at the same time, the Ceph OSD services might not start correctly on the Ceph Storage nodes. In this situation, reboot the Ceph Storage OSDs so they can connect to the Ceph Monitor service. Procedure Verify a HEALTH_OK status of the Ceph Storage node cluster:
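A hedged sketch of the rolling OSD reboot described above, run from a Ceph Monitor or Controller node. The node names are hypothetical placeholders, password-less SSH and sudo to the OSD nodes are assumed, and the fixed wait is a crude stand-in for logging in to each node and confirming that all pgs are active+clean before continuing.
# Pause rebalancing once, before touching any OSD node.
sudo cephadm shell -- ceph osd set noout
sudo cephadm shell -- ceph osd set norebalance
# Hypothetical node list; substitute your Ceph Storage (OSD) node hostnames.
for node in ceph-osd-0 ceph-osd-1 ceph-osd-2; do
    ssh "$node" sudo reboot
    # Wait for the node to return, then confirm the cluster status before the next node.
    sleep 300
    sudo cephadm shell -- ceph status
done
# Re-enable rebalancing after every OSD node has been rebooted.
sudo cephadm shell -- ceph osd unset noout
sudo cephadm shell -- ceph osd unset norebalance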
[ "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd set noout sudo cephadm shell -- ceph osd set norebalance", "sudo reboot", "sudo cephadm -- shell ceph status", "sudo cephadm shell -- ceph osd unset noout sudo cephadm shell -- ceph osd unset norebalance", "sudo cephadm shell ceph status", "sudo ceph status" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_red_hat_ceph_storage_and_red_hat_openstack_platform_together_with_director/assembly_rebooting-the-environment_deployingcontainerizedrhcs
Chapter 12. Allowing JavaScript-based access to the API server from additional hosts
Chapter 12. Allowing JavaScript-based access to the API server from additional hosts 12.1. Allowing JavaScript-based access to the API server from additional hosts The default OpenShift Container Platform configuration only allows the web console to send requests to the API server. If you need to access the API server or OAuth server from a JavaScript application using a different hostname, you can configure additional hostnames to allow. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Edit the APIServer resource: USD oc edit apiserver.config.openshift.io cluster Add the additionalCORSAllowedOrigins field under the spec section and specify one or more additional hostnames: apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: "true" creationTimestamp: "2019-07-11T17:35:37Z" generation: 1 name: cluster resourceVersion: "907" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\.subdomain\.domain\.com(:|\z) 1 1 The hostname is specified as a Golang regular expression that matches against CORS headers from HTTP requests against the API server and OAuth server. Note This example uses the following syntax: The (?i) makes it case-insensitive. The // pins to the beginning of the domain and matches the double slash following http: or https: . The \. escapes dots in the domain name. The (:|\z) matches the end of the domain name (\z) or a port separator (:) . Save the file to apply the changes.
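As a hedged illustration of the setting above (the API URL and origin are placeholders, not values from this document), a curl request that sends an Origin header can confirm the new origin is honored: if it matches additionalCORSAllowedOrigins, the response headers should include Access-Control-Allow-Origin.
# Hypothetical API endpoint and origin; substitute your cluster's values.
API=https://api.example.openshift.local:6443
ORIGIN=https://my.subdomain.domain.com
# Dump only the response headers (-D -) and discard the body; -k skips TLS verification.
curl -k -s -o /dev/null -D - -H "Origin: ${ORIGIN}" "${API}/version"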
[ "oc edit apiserver.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: APIServer metadata: annotations: release.openshift.io/create-only: \"true\" creationTimestamp: \"2019-07-11T17:35:37Z\" generation: 1 name: cluster resourceVersion: \"907\" selfLink: /apis/config.openshift.io/v1/apiservers/cluster uid: 4b45a8dd-a402-11e9-91ec-0219944e0696 spec: additionalCORSAllowedOrigins: - (?i)//my\\.subdomain\\.domain\\.com(:|\\z) 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/security_and_compliance/allowing-javascript-based-access-api-server
25.4. Fibre Channel
25.4. Fibre Channel This section discusses the Fibre Channel API, native Red Hat Enterprise Linux 7 Fibre Channel drivers, and the Fibre Channel capabilities of these drivers. 25.4.1. Fibre Channel API Following is a list of /sys/class/ directories that contain files used to provide the userspace API. In each item, host numbers are designated by H , bus numbers are B , targets are T , logical unit numbers (LUNs) are L , and remote port numbers are R . Important If your system is using multipath software, Red Hat recommends that you consult your hardware vendor before changing any of the values described in this section. Transport: /sys/class/fc_transport/target H : B : T / port_id - 24-bit port ID/address node_name - 64-bit node name port_name - 64-bit port name Remote Port: /sys/class/fc_remote_ports/rport- H : B - R / port_id node_name port_name dev_loss_tmo : controls when the SCSI device is removed from the system. After dev_loss_tmo triggers, the SCSI device is removed. In multipath.conf , you can set dev_loss_tmo to infinity , which sets its value to 2,147,483,647 seconds, or 68 years, and is the maximum dev_loss_tmo value. In Red Hat Enterprise Linux 7, if you do not set the fast_io_fail_tmo option, dev_loss_tmo is capped to 600 seconds. By default, fast_io_fail_tmo is set to 5 seconds in Red Hat Enterprise Linux 7 if the multipathd service is running; otherwise, it is set to off . fast_io_fail_tmo : specifies the number of seconds to wait before marking a link as "bad". Once a link is marked bad, existing running I/O or any new I/O on its corresponding path fails. If I/O is in a blocked queue, it will not be failed until dev_loss_tmo expires and the queue is unblocked. If fast_io_fail_tmo is set to any value except off , dev_loss_tmo is uncapped. If fast_io_fail_tmo is set to off , no I/O fails until the device is removed from the system. If fast_io_fail_tmo is set to a number, I/O fails immediately when the fast_io_fail_tmo timeout triggers. Host: /sys/class/fc_host/host H / port_id issue_lip : instructs the driver to rediscover remote ports. 25.4.2. Native Fibre Channel Drivers and Capabilities Red Hat Enterprise Linux 7 ships with the following native Fibre Channel drivers: lpfc qla2xxx zfcp bfa Important The qla2xxx driver runs in initiator mode by default. To use qla2xxx with Linux-IO, enable Fibre Channel target mode with the corresponding qlini_mode module parameter. First, make sure that the firmware package for your qla device, such as ql2200-firmware or similar, is installed. To enable target mode, add the following parameter to the /usr/lib/modprobe.d/qla2xxx.conf qla2xxx module configuration file: Then, use the dracut -f command to rebuild the initial ramdisk ( initrd ), and reboot the system for the changes to take effect. Table 25.1, "Fibre Channel API Capabilities" describes the different Fibre Channel API capabilities of each native Red Hat Enterprise Linux 7 driver. X denotes support for the capability. Table 25.1. Fibre Channel API Capabilities lpfc qla2xxx zfcp bfa Transport port_id X X X X Transport node_name X X X X Transport port_name X X X X Remote Port dev_loss_tmo X X X X Remote Port fast_io_fail_tmo X X [a] X [b] X Host port_id X X X X Host issue_lip X X X [a] Supported as of Red Hat Enterprise Linux 5.4 [b] Supported as of Red Hat Enterprise Linux 6.0
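A hedged sketch of reading and tuning the remote-port timeouts through the sysfs files listed above, run as root. The rport and host instance names are placeholders, the timeout values are only examples, and, as noted in the section, consult your hardware vendor first if multipath software is in use.
# List Fibre Channel remote ports and inspect their current timeout settings.
for rport in /sys/class/fc_remote_ports/rport-*; do
    echo "${rport}: dev_loss_tmo=$(cat ${rport}/dev_loss_tmo) fast_io_fail_tmo=$(cat ${rport}/fast_io_fail_tmo)"
done
# Example (placeholder rport-0:0-1): fail I/O after 5 seconds, remove the device after 600 seconds.
echo 5   > /sys/class/fc_remote_ports/rport-0:0-1/fast_io_fail_tmo
echo 600 > /sys/class/fc_remote_ports/rport-0:0-1/dev_loss_tmo
# Trigger remote-port rediscovery on host0 (placeholder host number).
echo 1 > /sys/class/fc_host/host0/issue_lip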
[ "options qla2xxx qlini_mode=disabled" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/ch-fibrechanel
Developer Guide
Developer Guide Red Hat Ceph Storage 8 Using the various application programming interfaces for Red Hat Ceph Storage Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/developer_guide/index
Chapter 26. Detecting false sharing
Chapter 26. Detecting false sharing False sharing occurs when a processor core on a Symmetric Multi Processing (SMP) system modifies data items on the same cache line that is in use by other processors to access other data items that are not being shared between the processors. This initial modification requires that the other processors using the cache line invalidate their copy and request an updated one despite the processors not needing, or even necessarily having access to, an updated version of the modified data item. You can use the perf c2c command to detect false sharing. 26.1. The purpose of perf c2c The c2c subcommand of the perf tool enables Shared Data Cache-to-Cache (C2C) analysis. You can use the perf c2c command to inspect cache-line contention to detect both true and false sharing. Cache-line contention occurs when a processor core on a Symmetric Multi Processing (SMP) system modifies data items on the same cache line that is in use by other processors. All other processors using this cache-line must then invalidate their copy and request an updated one. This can lead to degraded performance. The perf c2c command provides the following information: Cache lines where contention has been detected Processes reading and writing the data Instructions causing the contention The Non-Uniform Memory Access (NUMA) nodes involved in the contention 26.2. Detecting cache-line contention with perf c2c Use the perf c2c command to detect cache-line contention in a system. The perf c2c command supports the same options as perf record as well as some options exclusive to the c2c subcommand. The recorded data is stored in a perf.data file in the current directory for later analysis. Prerequisites The perf user space tool is installed. For more information, see Installing perf . Procedure Use perf c2c to detect cache-line contention: This example samples and records cache-line contention data across all CPUs for the number of seconds specified by the sleep command. You can replace the sleep command with any command you want to collect cache-line contention data over. Additional resources perf-c2c(1) man page on your system 26.3. Visualizing a perf.data file recorded with perf c2c record This procedure describes how to visualize the perf.data file, which is recorded using the perf c2c command. Prerequisites The perf user space tool is installed. For more information, see Installing perf . A perf.data file recorded using the perf c2c command is available in the current directory. For more information, see Detecting cache-line contention with perf c2c . Procedure Open the perf.data file for further analysis: This command renders the perf.data file as several tables within the terminal: 26.4. Interpretation of perf c2c report output The visualization displayed by running the perf c2c report --stdio command sorts the data into several tables: Trace Events Information This table provides a high level summary of all the load and store samples, which are collected by the perf c2c record command. Global Shared Cache Line Event Information This table provides statistics over the shared cache lines. c2c Details This table provides information about what events were sampled and how the perf c2c report data is organized within the visualization. Shared Data Cache Line Table This table provides a one-line summary for the hottest cache lines where false sharing is detected and is sorted in descending order by the amount of remote Hitm detected per cache line by default. 
Shared Cache Line Distribution Pareto This table provides a variety of information about each cache line experiencing contention: The cache lines are numbered in the NUM column, starting at 0 . The virtual address of each cache line is contained in the Data address Offset column and is followed by the offset into the cache line where different accesses occurred. The Pid column contains the process ID. The Code Address column contains the instruction pointer code address. The columns under the cycles label show average load latencies. The cpu cnt column displays how many different CPUs the samples came from (essentially, how many different CPUs were waiting for the data indexed at that given location). The Symbol column displays the function name or symbol. The Shared Object column displays the name of the ELF image where the samples come from (the name [ kernel.kallsyms ] is used when the samples come from the kernel). The Source:Line column displays the source file and line number. The Node{cpu list} column displays which specific CPUs the samples came from for each node. 26.5. Detecting false sharing with perf c2c This procedure describes how to detect false sharing using the perf c2c command. Prerequisites The perf user space tool is installed. For more information, see Installing perf . A perf.data file recorded using the perf c2c command is available in the current directory. For more information, see Detecting cache-line contention with perf c2c . Procedure Open the perf.data file for further analysis: This opens the perf.data file in the terminal. In the "Trace Event Information" table, locate the row containing the values for LLC Misses to Remote Cache (HITM) : The percentage in the value column of the LLC Misses to Remote Cache (HITM) row represents the percentage of LLC misses that were occurring across NUMA nodes in modified cache-lines and is a key indicator that false sharing has occurred. Inspect the Rmt column of the LLC Load Hitm field of the Shared Data Cache Line Table : This table is sorted in descending order by the amount of remote Hitm detected per cache line. A high number in the Rmt column of the LLC Load Hitm section indicates false sharing and requires further inspection of the cache line on which it occurred to debug the false sharing activity.
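A hedged end-to-end sketch of the workflow in this chapter, assuming perf is installed and that a 60-second system-wide sampling window is acceptable (the duration and the final grep filter are illustrative choices, not prescribed by the guide):
# Record cache-line contention system-wide for 60 seconds.
perf c2c record -a sleep 60
# Render the tables described above and page through them.
perf c2c report --stdio | less
# Quick check of the key false-sharing indicator from the Trace Event Information table.
perf c2c report --stdio | grep "LLC Misses to Remote cache (HITM)"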
[ "perf c2c record -a sleep seconds", "perf c2c report --stdio", "================================================= Trace Event Information ================================================= Total records : 329219 Locked Load/Store Operations : 14654 Load Operations : 69679 Loads - uncacheable : 0 Loads - IO : 0 Loads - Miss : 3972 Loads - no mapping : 0 Load Fill Buffer Hit : 11958 Load L1D hit : 17235 Load L2D hit : 21 Load LLC hit : 14219 Load Local HITM : 3402 Load Remote HITM : 12757 Load Remote HIT : 5295 Load Local DRAM : 976 Load Remote DRAM : 3246 Load MESI State Exclusive : 4222 Load MESI State Shared : 0 Load LLC Misses : 22274 LLC Misses to Local DRAM : 4.4% LLC Misses to Remote DRAM : 14.6% LLC Misses to Remote cache (HIT) : 23.8% LLC Misses to Remote cache (HITM) : 57.3% Store Operations : 259539 Store - uncacheable : 0 Store - no mapping : 11 Store L1D Hit : 256696 Store L1D Miss : 2832 No Page Map Rejects : 2376 Unable to parse data source : 1 ================================================= Global Shared Cache Line Event Information ================================================= Total Shared Cache Lines : 55 Load HITs on shared lines : 55454 Fill Buffer Hits on shared lines : 10635 L1D hits on shared lines : 16415 L2D hits on shared lines : 0 LLC hits on shared lines : 8501 Locked Access on shared lines : 14351 Store HITs on shared lines : 109953 Store L1D hits on shared lines : 109449 Total Merged records : 126112 ================================================= c2c details ================================================= Events : cpu/mem-loads,ldlat=30/P : cpu/mem-stores/P Cachelines sort on : Remote HITMs Cacheline data groupping : offset,pid,iaddr ================================================= Shared Data Cache Line Table ================================================= # Total Rmt ----- LLC Load Hitm ----- ---- Store Reference ---- --- Load Dram ---- LLC Total ----- Core Load Hit ----- -- LLC Load Hit -- Index Cacheline records Hitm Total Lcl Rmt Total L1Hit L1Miss Lcl Rmt Ld Miss Loads FB L1 L2 Llc Rmt ..... .................. ....... ....... ....... ....... ....... ....... ....... ....... ........ ........ ....... ....... ....... ....... ....... ........ ..... # 0 0x602180 149904 77.09% 12103 2269 9834 109504 109036 468 727 2657 13747 40400 5355 16154 0 2875 529 1 0x602100 12128 22.20% 3951 1119 2832 0 0 0 65 200 3749 12128 5096 108 0 2056 652 2 0xffff883ffb6a7e80 260 0.09% 15 3 12 161 161 0 1 1 15 99 25 50 0 6 1 3 0xffffffff81aec000 157 0.07% 9 0 9 1 0 1 0 7 20 156 50 59 0 27 4 4 0xffffffff81e3f540 179 0.06% 9 1 8 117 97 20 0 10 25 62 11 1 0 24 7 ================================================= Shared Cache Line Distribution Pareto ================================================= # ----- HITM ----- -- Store Refs -- Data address ---------- cycles ---------- cpu Shared Num Rmt Lcl L1 Hit L1 Miss Offset Pid Code address rmt hitm lcl hitm load cnt Symbol Object Source:Line Node{cpu list} ..... ....... ....... ....... ....... .................. ....... .................. ........ ........ ........ ........ ................... .................... ........................... . # ------------------------------------------------------------- 0 9834 2269 109036 468 0x602180 ------------------------------------------------------------- 65.51% 55.88% 75.20% 0.00% 0x0 14604 0x400b4f 27161 26039 26017 9 [.] 
read_write_func no_false_sharing.exe false_sharing_example.c:144 0{0-1,4} 1{24-25,120} 2{48,54} 3{169} 0.41% 0.35% 0.00% 0.00% 0x0 14604 0x400b56 18088 12601 26671 9 [.] read_write_func no_false_sharing.exe false_sharing_example.c:145 0{0-1,4} 1{24-25,120} 2{48,54} 3{169} 0.00% 0.00% 24.80% 100.00% 0x0 14604 0x400b61 0 0 0 9 [.] read_write_func no_false_sharing.exe false_sharing_example.c:145 0{0-1,4} 1{24-25,120} 2{48,54} 3{169} 7.50% 9.92% 0.00% 0.00% 0x20 14604 0x400ba7 2470 1729 1897 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:154 1{122} 2{144} 17.61% 20.89% 0.00% 0.00% 0x28 14604 0x400bc1 2294 1575 1649 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:158 2{53} 3{170} 8.97% 12.96% 0.00% 0.00% 0x30 14604 0x400bdb 2325 1897 1828 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:162 0{96} 3{171} ------------------------------------------------------------- 1 2832 1119 0 0 0x602100 ------------------------------------------------------------- 29.13% 36.19% 0.00% 0.00% 0x20 14604 0x400bb3 1964 1230 1788 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:155 1{122} 2{144} 43.68% 34.41% 0.00% 0.00% 0x28 14604 0x400bcd 2274 1566 1793 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:159 2{53} 3{170} 27.19% 29.40% 0.00% 0.00% 0x30 14604 0x400be7 2045 1247 2011 2 [.] read_write_func no_false_sharing.exe false_sharing_example.c:163 0{96} 3{171}", "perf c2c report --stdio", "================================================= Trace Event Information ================================================= Total records : 329219 Locked Load/Store Operations : 14654 Load Operations : 69679 Loads - uncacheable : 0 Loads - IO : 0 Loads - Miss : 3972 Loads - no mapping : 0 Load Fill Buffer Hit : 11958 Load L1D hit : 17235 Load L2D hit : 21 Load LLC hit : 14219 Load Local HITM : 3402 Load Remote HITM : 12757 Load Remote HIT : 5295 Load Local DRAM : 976 Load Remote DRAM : 3246 Load MESI State Exclusive : 4222 Load MESI State Shared : 0 Load LLC Misses : 22274 LLC Misses to Local DRAM : 4.4% LLC Misses to Remote DRAM : 14.6% LLC Misses to Remote cache (HIT) : 23.8% LLC Misses to Remote cache (HITM) : 57.3% Store Operations : 259539 Store - uncacheable : 0 Store - no mapping : 11 Store L1D Hit : 256696 Store L1D Miss : 2832 No Page Map Rejects : 2376 Unable to parse data source : 1", "================================================= Shared Data Cache Line Table ================================================= # # Total Rmt ----- LLC Load Hitm ----- ---- Store Reference ---- --- Load Dram ---- LLC Total ----- Core Load Hit ----- -- LLC Load Hit -- # Index Cacheline records Hitm Total Lcl Rmt Total L1Hit L1Miss Lcl Rmt Ld Miss Loads FB L1 L2 Llc Rmt # ..... .................. ....... ....... ....... ....... ....... ....... ....... ....... ........ ........ ....... ....... ....... ....... ....... ........ ..... # 0 0x602180 149904 77.09% 12103 2269 9834 109504 109036 468 727 2657 13747 40400 5355 16154 0 2875 529 1 0x602100 12128 22.20% 3951 1119 2832 0 0 0 65 200 3749 12128 5096 108 0 2056 652 2 0xffff883ffb6a7e80 260 0.09% 15 3 12 161 161 0 1 1 15 99 25 50 0 6 1 3 0xffffffff81aec000 157 0.07% 9 0 9 1 0 1 0 7 20 156 50 59 0 27 4 4 0xffffffff81e3f540 179 0.06% 9 1 8 117 97 20 0 10 25 62 11 1 0 24 7" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/monitoring_and_managing_system_status_and_performance/detecting-false-sharing_monitoring-and-managing-system-status-and-performance
14.2. Configure Locking (Library Mode)
14.2. Configure Locking (Library Mode) For Library mode, the locking element and its parameters are set within the default element; for each named cache, they are set within the namedCache element. The following is an example of this configuration: Procedure 14.2. Configure Locking (Library Mode) The concurrencyLevel parameter specifies the concurrency level for the lock container. Set this value according to the number of concurrent threads interacting with the data grid. The isolationLevel parameter specifies the cache's isolation level. Valid isolation levels are READ_COMMITTED and REPEATABLE_READ . For details about isolation levels, see Section 16.1, "About Isolation Levels" The lockAcquisitionTimeout parameter specifies the time (in milliseconds) after which a lock acquisition attempt times out. The useLockStriping parameter specifies whether a pool of shared locks is maintained for all entries that require locks. If set to FALSE , locks are created for each entry in the cache. For details, see Section 15.1, "About Lock Striping" The writeSkewCheck parameter is only valid if the isolationLevel is set to REPEATABLE_READ . If this parameter is set to FALSE , a disparity between a working entry and the underlying entry at write time results in the working entry overwriting the underlying entry. If the parameter is set to TRUE , such conflicts (namely write skews) throw an exception. The writeSkewCheck parameter can only be used with OPTIMISTIC transactions and it requires entry versioning to be enabled, with the SIMPLE versioning scheme.
[ "<infinispan> <!-- Other configuration elements here --> <default> <locking concurrencyLevel=\"USD{VALUE}\" isolationLevel=\"USD{LEVEL}\" lockAcquisitionTimeout=\"USD{TIME}\" useLockStriping=\"USD{TRUE/FALSE}\" writeSkewCheck=\"USD{TRUE/FALSE}\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/configure_locking_library_mode
40.5.2. Using opreport on a Single Executable
40.5.2. Using opreport on a Single Executable To retrieve more detailed profiled information about a specific executable, use opreport : <executable> must be the full path to the executable to be analyzed. <mode> must be one of the following: -l List sample data by symbols. For example, the following is part of the output from running the command opreport -l /lib/tls/libc- <version> .so : The first column is the number of samples for the symbol, the second column is the percentage of samples for this symbol relative to the overall samples for the executable, and the third column is the symbol name. To sort the output from the largest number of samples to the smallest (reverse order), use -r in conjunction with the -l option. -i <symbol-name> List sample data specific to a symbol name. For example, the following output is from the command opreport -l -i __gconv_transform_utf8_internal /lib/tls/libc- <version> .so : The first line is a summary for the symbol/executable combination. The first column is the number of samples for the memory symbol. The second column is the percentage of samples for the memory address relative to the total number of samples for the symbol. The third column is the symbol name. -d List sample data by symbols with more detail than -l . For example, the following output is from the command opreport -l -d __gconv_transform_utf8_internal /lib/tls/libc- <version> .so : The data is the same as the -l option except that for each symbol, each virtual memory address used is shown. For each virtual memory address, the number of samples and percentage of samples relative to the number of samples for the symbol is displayed. -x <symbol-name> Exclude the comma-separated list of symbols from the output. session : <name> Specify the full path to the session or a directory relative to the /var/lib/oprofile/samples/ directory.
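A hedged usage sketch for the modes above. The library path keeps the <version> placeholder from the text, and the session name is hypothetical; substitute real values on your system.
# Symbol-level breakdown of a single executable.
opreport -l /lib/tls/libc-<version>.so
# Same data with the sample-count ordering reversed, as described for -r.
opreport -r -l /lib/tls/libc-<version>.so
# Per-address detail for one symbol.
opreport -l -d __gconv_transform_utf8_internal /lib/tls/libc-<version>.so
# Restrict the report to a named session under /var/lib/oprofile/samples/ (hypothetical name).
opreport session:mysession -l /lib/tls/libc-<version>.so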
[ "opreport <mode> <executable>", "samples % symbol name 12 21.4286 __gconv_transform_utf8_internal 5 8.9286 _int_malloc 4 7.1429 malloc 3 5.3571 __i686.get_pc_thunk.bx 3 5.3571 _dl_mcount_wrapper_check 3 5.3571 mbrtowc 3 5.3571 memcpy 2 3.5714 _int_realloc 2 3.5714 _nl_intern_locale_data 2 3.5714 free 2 3.5714 strcmp 1 1.7857 __ctype_get_mb_cur_max 1 1.7857 __unregister_atfork 1 1.7857 __write_nocancel 1 1.7857 _dl_addr 1 1.7857 _int_free 1 1.7857 _itoa_word 1 1.7857 calc_eclosure_iter 1 1.7857 fopen@@GLIBC_2.1 1 1.7857 getpid 1 1.7857 memmove 1 1.7857 msort_with_tmp 1 1.7857 strcpy 1 1.7857 strlen 1 1.7857 vfprintf 1 1.7857 write", "samples % symbol name 12 100.000 __gconv_transform_utf8_internal", "vma samples % symbol name 00a98640 12 100.000 __gconv_transform_utf8_internal 00a98640 1 8.3333 00a9868c 2 16.6667 00a9869a 1 8.3333 00a986c1 1 8.3333 00a98720 1 8.3333 00a98749 1 8.3333 00a98753 1 8.3333 00a98789 1 8.3333 00a98864 1 8.3333 00a98869 1 8.3333 00a98b08 1 8.3333" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Analyzing_the_Data-Using_opreport_on_a_Single_Executable
Chapter 19. KubeScheduler [operator.openshift.io/v1]
Chapter 19. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 19.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec is the specification of the desired behavior of the Kubernetes Scheduler status object status is the most recently observed status of the Kubernetes Scheduler 19.1.1. .spec Description spec is the specification of the desired behavior of the Kubernetes Scheduler Type object Property Type Description failedRevisionLimit integer failedRevisionLimit is the number of failed static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) forceRedeploymentReason string forceRedeploymentReason can be used to force the redeployment of the operand by providing a unique string. This provides a mechanism to kick a previously failed deployment and provide a reason why you think it will work this time instead of failing again on the same config. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". succeededRevisionLimit integer succeededRevisionLimit is the number of successful static pod installer revisions to keep on disk and in the api -1 = unlimited, 0 or unset = 5 (default) unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 19.1.2. 
.status Description status is the most recently observed status of the Kubernetes Scheduler Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. latestAvailableRevision integer latestAvailableRevision is the deploymentID of the most recent deployment latestAvailableRevisionReason string latestAvailableRevisionReason describe the detailed reason for the most recent deployment nodeStatuses array nodeStatuses track the deployment values and errors across individual nodes nodeStatuses[] object NodeStatus provides information about the current state of a particular node managed by this operator. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 19.1.3. .status.conditions Description conditions is a list of conditions and their status Type array 19.1.4. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 19.1.5. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 19.1.6. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 19.1.7. .status.nodeStatuses Description nodeStatuses track the deployment values and errors across individual nodes Type array 19.1.8. .status.nodeStatuses[] Description NodeStatus provides information about the current state of a particular node managed by this operator. Type object Property Type Description currentRevision integer currentRevision is the generation of the most recently successful deployment lastFailedCount integer lastFailedCount is how often the installer pod of the last failed revision failed. lastFailedReason string lastFailedReason is a machine readable failure reason string. lastFailedRevision integer lastFailedRevision is the generation of the deployment we tried and failed to deploy. lastFailedRevisionErrors array (string) lastFailedRevisionErrors is a list of human readable errors during the failed deployment referenced in lastFailedRevision. lastFailedTime string lastFailedTime is the time the last failed revision failed the last time. lastFallbackCount integer lastFallbackCount is how often a fallback to a revision happened. 
nodeName string nodeName is the name of the node targetRevision integer targetRevision is the generation of the deployment we're trying to apply 19.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/kubeschedulers DELETE : delete collection of KubeScheduler GET : list objects of kind KubeScheduler POST : create a KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name} DELETE : delete a KubeScheduler GET : read the specified KubeScheduler PATCH : partially update the specified KubeScheduler PUT : replace the specified KubeScheduler /apis/operator.openshift.io/v1/kubeschedulers/{name}/status GET : read status of the specified KubeScheduler PATCH : partially update status of the specified KubeScheduler PUT : replace status of the specified KubeScheduler 19.2.1. /apis/operator.openshift.io/v1/kubeschedulers Table 19.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of KubeScheduler Table 19.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind KubeScheduler Table 19.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 19.5. HTTP responses HTTP code Response body 200 - OK KubeSchedulerList schema 401 - Unauthorized Empty HTTP method POST Description create a KubeScheduler Table 19.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. 
This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.7. Body parameters Parameter Type Description body KubeScheduler schema Table 19.8. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 202 - Accepted KubeScheduler schema 401 - Unauthorized Empty 19.2.2. /apis/operator.openshift.io/v1/kubeschedulers/{name} Table 19.9. Global path parameters Parameter Type Description name string name of the KubeScheduler Table 19.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a KubeScheduler Table 19.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 19.12. Body parameters Parameter Type Description body DeleteOptions schema Table 19.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified KubeScheduler Table 19.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.15. 
HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified KubeScheduler Table 19.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.17. Body parameters Parameter Type Description body Patch schema Table 19.18. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified KubeScheduler Table 19.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.20. Body parameters Parameter Type Description body KubeScheduler schema Table 19.21. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty 19.2.3. /apis/operator.openshift.io/v1/kubeschedulers/{name}/status Table 19.22. Global path parameters Parameter Type Description name string name of the KubeScheduler Table 19.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified KubeScheduler Table 19.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 19.25. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified KubeScheduler Table 19.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.27. Body parameters Parameter Type Description body Patch schema Table 19.28. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified KubeScheduler Table 19.29. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 19.30. Body parameters Parameter Type Description body KubeScheduler schema Table 19.31. HTTP responses HTTP code Response body 200 - OK KubeScheduler schema 201 - Created KubeScheduler schema 401 - Unauthorized Empty
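For example, rather than calling the REST endpoint directly, a cluster administrator can exercise the PATCH operation described above through the CLI. The following is a minimal sketch only: it assumes the usual singleton KubeScheduler instance named cluster and uses the operator logLevel field as the value being changed.
USD oc patch kubescheduler cluster --type=merge -p '{"spec":{"logLevel":"Debug"}}'
The oc client issues the same HTTP PATCH request against /apis/operator.openshift.io/v1/kubeschedulers/cluster that the tables above describe.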
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/operator_apis/kubescheduler-operator-openshift-io-v1
Part VI. Installing and configuring KIE Server on Oracle WebLogic Server
Part VI. Installing and configuring KIE Server on Oracle WebLogic Server As a system administrator, you can configure your Oracle WebLogic Server for Red Hat KIE Server and install KIE Server on that Oracle server instance. Note Support for Red Hat Process Automation Manager on Oracle WebLogic Server is now in the maintenance phase. Red Hat will continue to support Red Hat Process Automation Manager on Oracle WebLogic Server with the following limitations: Red Hat will not release new certifications or software functionality. Red Hat will release only qualified security patches that have a critical impact and mission-critical bug fix patches. In the future, Red Hat might direct customers to migrate to new platforms and product components that are compatible with the Red Hat hybrid cloud strategy. Prerequisites An Oracle WebLogic Server instance version 12.2.1.3.0 or later is installed. For complete installation instructions, see the Oracle WebLogic Server product page . You have access to the Oracle WebLogic Server Administration Console, usually at http://<HOST>:7001/console .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/assembly-installing-kie-server-on-wls
Architecture
Architecture OpenShift Container Platform 4.11 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/architecture/index
Chapter 14. Tuning nodes for low latency with the performance profile
Chapter 14. Tuning nodes for low latency with the performance profile Tune nodes for low latency by using the cluster performance profile. You can restrict CPUs for infra and application containers, configure huge pages, Hyper-Threading, and configure CPU partitions for latency-sensitive processes. 14.1. Creating a performance profile You can create a cluster performance profile by using the Performance Profile Creator (PPC) tool. The PPC is a function of the Node Tuning Operator. The PPC combines information about your cluster with user-supplied configurations to generate a performance profile that is appropriate to your hardware, topology and use-case. Note Performance profiles are applicable only to bare-metal environments where the cluster has direct access to the underlying hardware resources. You can configure performances profiles for both single-node OpenShift and multi-node clusters. The following is a high-level workflow for creating and applying a performance profile in your cluster: Create a machine config pool (MCP) for nodes that you want to target with performance configurations. In single-node OpenShift clusters, you must use the master MCP because there is only one node in the cluster. Gather information about your cluster using the must-gather command. Use the PPC tool to create a performance profile by using either of the following methods: Run the PPC tool by using Podman. Run the PPC tool by using a wrapper script. Configure the performance profile for your use case and apply the performance profile to your cluster. 14.1.1. About the Performance Profile Creator The Performance Profile Creator (PPC) is a command-line tool, delivered with the Node Tuning Operator, that can help you to create a performance profile for your cluster. Initially, you can use the PPC tool to process the must-gather data to display key performance configurations for your cluster, including the following information: NUMA cell partitioning with the allocated CPU IDs Hyper-Threading node configuration You can use this information to help you configure the performance profile. Running the PPC Specify performance configuration arguments to the PPC tool to generate a proposed performance profile that is appropriate for your hardware, topology, and use-case. You can run the PPC by using one of the following methods: Run the PPC by using Podman Run the PPC by using the wrapper script Note Using the wrapper script abstracts some of the more granular Podman tasks into an executable script. For example, the wrapper script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman. Both methods achieve the same result. 14.1.2. Creating a machine config pool to target nodes for performance tuning For multi-node clusters, you can define a machine config pool (MCP) to identify the target nodes that you want to configure with a performance profile. In single-node OpenShift clusters, you must use the master MCP because there is only one node in the cluster. You do not need to create a separate MCP for single-node OpenShift clusters. Prerequisites You have cluster-admin role access. You installed the OpenShift CLI ( oc ). Procedure Label the target nodes for configuration by running the following command: USD oc label node <node_name> node-role.kubernetes.io/worker-cnf="" 1 1 Replace <node_name> with the name of your node. This example applies the worker-cnf label. 
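Optional: Confirm that the label is present before you create the machine config pool. This is a quick check only, and <node_name> is a placeholder:
USD oc get node <node_name> --show-labels | grep node-role.kubernetes.io/worker-cnf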
Create a MachineConfigPool resource containing the target nodes: Create a YAML file that defines the MachineConfigPool resource: Example mcp-worker-cnf.yaml file apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: "" 3 1 Specify a name for the MachineConfigPool resource. 2 Specify a unique label for the machine config pool. 3 Specify the nodes with the target label that you defined. Apply the MachineConfigPool resource by running the following command: USD oc apply -f mcp-worker-cnf.yaml Example output machineconfigpool.machineconfiguration.openshift.io/worker-cnf created Verification Check the machine config pools in your cluster by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s 14.1.3. Gathering data about your cluster for the PPC The Performance Profile Creator (PPC) tool requires must-gather data. As a cluster administrator, run the must-gather command to capture information about your cluster. Prerequisites Access to the cluster as a user with the cluster-admin role. You installed the OpenShift CLI ( oc ). You identified a target MCP that you want to configure with a performance profile. Procedure Navigate to the directory where you want to store the must-gather data. Collect cluster information by running the following command: USD oc adm must-gather The command creates a folder with the must-gather data in your local directory with a naming format similar to the following: must-gather.local.1971646453781853027 . Optional: Create a compressed file from the must-gather directory: USD tar cvaf must-gather.tar.gz <must_gather_folder> 1 1 Replace with the name of the must-gather data folder. Note Compressed output is required if you are running the Performance Profile Creator wrapper script. Additional resources For more information about the must-gather tool, see Gathering data about your cluster . 14.1.4. Running the Performance Profile Creator using Podman As a cluster administrator, you can use Podman with the Performance Profile Creator (PPC) to create a performance profile. For more information about the PPC arguments, see the section "Performance Profile Creator arguments" . Important The PPC uses the must-gather data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the must-gather data before running PPC again. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare-metal hardware. You installed podman and the OpenShift CLI ( oc ). Access to the Node Tuning Operator image. You identified a machine config pool containing target nodes for configuration. You have access to the must-gather data for your cluster. 
Procedure Check the machine config pool by running the following command: USD oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m Use Podman to authenticate to registry.redhat.io by running the following command: USD podman login registry.redhat.io Username: <user_name> Password: <password> Optional: Display help for the PPC tool by running the following command: USD podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 -h Example output A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled To display information about the cluster, run the PPC tool with the log argument by running the following command: USD podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 --info log --must-gather-dir-path /must-gather --entrypoint performance-profile-creator defines the performance profile creator as a new entry point to podman . -v <path_to_must_gather> specifies the path to either of the following components: The directory containing the must-gather data. An existing directory containing the must-gather decompressed .tar file. --info log specifies a value for the output format. Example output level=info msg="Cluster info:" level=info msg="MCP 'master' nodes:" level=info msg=--- level=info msg="MCP 'worker' nodes:" level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- level=info msg="MCP 'worker-cnf' nodes:" level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- Create a performance profile by running the following command. 
The example uses sample PPC arguments and values: USD podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml -v <path_to_must_gather> specifies the path to either of the following components: The directory containing the must-gather data. The directory containing the must-gather decompressed .tar file. --mcp-name=worker-cnf specifies the worker-cnf machine config pool. --reserved-cpu-count=1 specifies one reserved CPU. --rt-kernel=true enables the real-time kernel. --split-reserved-cpus-across-numa=false disables reserved CPUs splitting across NUMA nodes. --power-consumption-mode=ultra-low-latency specifies minimal latency at the cost of increased power consumption. --offlined-cpu-count=1 specifies one offlined CPU. Note The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . Example output level=info msg="Nodes targeted by worker-cnf MCP are: [worker-2]" level=info msg="NUMA cell(s): 1" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="1 reserved CPUs allocated: 0 " level=info msg="2 isolated CPUs allocated: 2-3" level=info msg="Additional Kernel Args based on configuration: []" Review the created YAML file by running the following command: USD cat my-performance-profile.yaml Example output --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: "1" reserved: "0" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true Apply the generated profile: USD oc apply -f my-performance-profile.yaml Example output performanceprofile.performance.openshift.io/performance created 14.1.5. Running the Performance Profile Creator wrapper script The wrapper script simplifies the process of creating a performance profile with the Performance Profile Creator (PPC) tool. The script handles tasks such as pulling and running the required container image, mounting directories into the container, and providing parameters directly to the container through Podman. For more information about the Performance Profile Creator arguments, see the section "Performance Profile Creator arguments" . Important The PPC uses the must-gather data from your cluster to create the performance profile. If you make any changes to your cluster, such as relabeling a node targeted for performance configuration, you must re-create the must-gather data before running PPC again. Prerequisites Access to the cluster as a user with the cluster-admin role. A cluster installed on bare-metal hardware. You installed podman and the OpenShift CLI ( oc ). Access to the Node Tuning Operator image. You identified a machine config pool containing target nodes for configuration. Access to the must-gather tarball. 
Procedure Create a file on your local machine named, for example, run-perf-profile-creator.sh : USD vi run-perf-profile-creator.sh Paste the following code into the file: #!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename "USD0") readonly CMD="USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator" readonly IMG_EXISTS_CMD="USD{CONTAINER_RUNTIME} image exists" readonly IMG_PULL_CMD="USD{CONTAINER_RUNTIME} image pull" readonly MUST_GATHER_VOL="/must-gather" NTO_IMG="registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17" MG_TARBALL="" DATA_DIR="" usage() { print "Wrapper usage:" print " USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]" print "" print "Options:" print " -h help for USD{CURRENT_SCRIPT}" print " -p Node Tuning Operator image" print " -t path to a must-gather tarball" USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" && USD{CMD} "USD{NTO_IMG}" -h } function cleanup { [ -d "USD{DATA_DIR}" ] && rm -rf "USD{DATA_DIR}" } trap cleanup EXIT exit_error() { print "error: USD*" usage exit 1 } print() { echo "USD*" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} "USD{NTO_IMG}" || USD{IMG_PULL_CMD} "USD{NTO_IMG}" || \ exit_error "Node Tuning Operator image not found" [ -n "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file path is mandatory" [ -f "USD{MG_TARBALL}" ] || exit_error "Must-gather tarball file not found" DATA_DIR=USD(mktemp -d -t "USD{CURRENT_SCRIPT}XXXX") || exit_error "Cannot create the data directory" tar -zxf "USD{MG_TARBALL}" --directory "USD{DATA_DIR}" || exit_error "Cannot decompress the must-gather tarball" chmod a+rx "USD{DATA_DIR}" return 0 } main() { while getopts ':hp:t:' OPT; do case "USD{OPT}" in h) usage exit 0 ;; p) NTO_IMG="USD{OPTARG}" ;; t) MG_TARBALL="USD{OPTARG}" ;; ?) exit_error "invalid argument: USD{OPTARG}" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v "USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z" "USD{NTO_IMG}" "USD@" --must-gather-dir-path "USD{MUST_GATHER_VOL}" echo "" 1>&2 } main "USD@" Add execute permissions for everyone on this script: USD chmod a+x run-perf-profile-creator.sh Use Podman to authenticate to registry.redhat.io by running the following command: USD podman login registry.redhat.io Username: <user_name> Password: <password> Optional: Display help for the PPC tool by running the following command: USD ./run-perf-profile-creator.sh -h Example output Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default "log") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default "must-gather") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. 
[Valid values: default, low-latency, ultra-low-latency] (default "default") --profile-name string Name of the performance profile to be created (default "performance") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. [Valid values: single-numa-node, best-effort, restricted] (default "restricted") --user-level-networking Run with User level Networking(DPDK) enabled --enable-hardware-tuning Enable setting maximum CPU frequencies Note You can optionally set a path for the Node Tuning Operator image using the -p option. If you do not set a path, the wrapper script uses the default image: registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 . To display information about the cluster, run the PPC tool with the log argument by running the following command: USD ./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log -t /<path_to_must_gather_dir>/must-gather.tar.gz specifies the path to the directory containing the must-gather tarball. This is a required argument for the wrapper script. Example output level=info msg="Cluster info:" level=info msg="MCP 'master' nodes:" level=info msg=--- level=info msg="MCP 'worker' nodes:" level=info msg="Node: host.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg="Node: host1.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- level=info msg="MCP 'worker-cnf' nodes:" level=info msg="Node: host2.example.com (NUMA cells: 1, HT: true)" level=info msg="NUMA cell 0 : [0 1 2 3]" level=info msg="CPU(s): 4" level=info msg=--- Create a performance profile by running the following command. USD ./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml This example uses sample PPC arguments and values. --mcp-name=worker-cnf specifies the worker-cnf machine config pool. --reserved-cpu-count=1 specifies one reserved CPU. --rt-kernel=true enables the real-time kernel. --split-reserved-cpus-across-numa=false disables reserved CPUs splitting across NUMA nodes. --power-consumption-mode=ultra-low-latency specifies minimal latency at the cost of increased power consumption. --offlined-cpu-count=1 specifies one offlined CPU. Note The mcp-name argument in this example is set to worker-cnf based on the output of the command oc get mcp . For single-node OpenShift use --mcp-name=master . 
Review the created YAML file by running the following command: USD cat my-performance-profile.yaml Example output --- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: "1" reserved: "0" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: "" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true Apply the generated profile: USD oc apply -f my-performance-profile.yaml Example output performanceprofile.performance.openshift.io/performance created 14.1.6. Performance Profile Creator arguments Table 14.1. Required Performance Profile Creator arguments Argument Description mcp-name Name for MCP; for example, worker-cnf corresponding to the target machines. must-gather-dir-path The path of the must gather directory. This argument is only required if you run the PPC tool by using Podman. If you use the PPC with the wrapper script, do not use this argument. Instead, specify the directory path to the must-gather tarball by using the -t option for the wrapper script. reserved-cpu-count Number of reserved CPUs. Use a natural number greater than zero. rt-kernel Enables real-time kernel. Possible values: true or false . Table 14.2. Optional Performance Profile Creator arguments Argument Description disable-ht Disable Hyper-Threading. Possible values: true or false . Default: false . Warning If this argument is set to true you should not disable Hyper-Threading in the BIOS. Disabling Hyper-Threading is accomplished with a kernel command line argument. enable-hardware-tuning Enable the setting of maximum CPU frequencies. To enable this feature, set the maximum frequency for applications running on isolated and reserved CPUs for both of the following fields: spec.hardwareTuning.isolatedCpuFreq spec.hardwareTuning.reservedCpuFreq This is an advanced feature. If you configure hardware tuning, the generated PerformanceProfile includes warnings and guidance on how to set frequency settings. info This captures cluster information. This argument also requires the must-gather-dir-path argument. If any other arguments are set they are ignored. Possible values: log JSON Default: log . offlined-cpu-count Number of offlined CPUs. Note Use a natural number greater than zero. If not enough logical processors are offlined, then error messages are logged. The messages are: Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1] Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1] power-consumption-mode The power consumption mode. Possible values: default : Performance achieved through CPU partitioning only. low-latency : Enhanced measures to improve latency. ultra-low-latency : Priority given to optimal latency, at the expense of power management. Default: default . per-pod-power-management Enable per pod power management. You cannot use this argument if you configured ultra-low-latency as the power consumption mode. Possible values: true or false . Default: false . profile-name Name of the performance profile to create. Default: performance . split-reserved-cpus-across-numa Split the reserved CPUs across NUMA nodes. Possible values: true or false . Default: false . 
topology-manager-policy Kubelet Topology Manager policy of the performance profile to be created. Possible values: single-numa-node best-effort restricted Default: restricted . user-level-networking Run with user level networking (DPDK) enabled. Possible values: true or false . Default: false . 14.1.7. Reference performance profiles Use the following reference performance profiles as the basis to develop your own custom profiles. 14.1.7.1. Performance profile template for clusters that use OVS-DPDK on OpenStack To maximize machine performance in a cluster that uses Open vSwitch with the Data Plane Development Kit (OVS-DPDK) on Red Hat OpenStack Platform (RHOSP), you can use a performance profile. You can use the following performance profile template to create a profile for your deployment. Performance profile template for clusters that use OVS-DPDK apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true Insert values that are appropriate for your configuration for the CPU_ISOLATED , CPU_RESERVED , and HUGEPAGES_COUNT keys. 14.1.7.2. Telco RAN DU reference design performance profile The following performance profile configures node-level performance settings for OpenShift Container Platform clusters on commodity hardware to host telco RAN DU workloads. Telco RAN DU reference design performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false 14.1.7.3. Telco core reference design performance profile The following performance profile configures node-level performance settings for OpenShift Container Platform clusters on commodity hardware to host telco core workloads. 
Telco core reference design performance profile apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: "ran-du.redhat.com" spec: additionalKernelArgs: - "rcupdate.rcu_normal_after_boot=0" - "efi=runtime" - "vfio_pci.enable_sriov=1" - "vfio_pci.disable_idle_d3=1" - "module_blacklist=irdma" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: "" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: "restricted" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false 14.2. Supported performance profile API versions The Node Tuning Operator supports v2 , v1 , and v1alpha1 for the performance profile apiVersion field. The v1 and v1alpha1 APIs are identical. The v2 API includes an optional boolean field globallyDisableIrqLoadBalancing with a default value of false . Upgrading the performance profile to use device interrupt processing When you upgrade the Node Tuning Operator performance profile custom resource definition (CRD) from v1 or v1alpha1 to v2, globallyDisableIrqLoadBalancing is set to true on existing profiles. Note globallyDisableIrqLoadBalancing toggles whether IRQ load balancing will be disabled for the Isolated CPU set. When the option is set to true it disables IRQ load balancing for the Isolated CPU set. Setting the option to false allows the IRQs to be balanced across all CPUs. Upgrading Node Tuning Operator API from v1alpha1 to v1 When upgrading Node Tuning Operator API version from v1alpha1 to v1, the v1alpha1 performance profiles are converted on-the-fly using a "None" Conversion strategy and served to the Node Tuning Operator with API version v1. Upgrading Node Tuning Operator API from v1alpha1 or v1 to v2 When upgrading from an older Node Tuning Operator API version, the existing v1 and v1alpha1 performance profiles are converted using a conversion webhook that injects the globallyDisableIrqLoadBalancing field with a value of true . 14.3. Configuring node power consumption and realtime processing with workload hints Procedure Create a PerformanceProfile appropriate for the environment's hardware and topology by using the Performance Profile Creator (PPC) tool. The following table describes the possible values set for the power-consumption-mode flag associated with the PPC tool and the workload hint that is applied. Table 14.3. 
Impact of combinations of power consumption and real-time settings on latency Performance Profile creator setting Hint Environment Description Default workloadHints: highPowerConsumption: false realTime: false High throughput cluster without latency requirements Performance achieved through CPU partitioning only. Low-latency workloadHints: highPowerConsumption: false realTime: true Regional data-centers Both energy savings and low-latency are desirable: compromise between power management, latency and throughput. Ultra-low-latency workloadHints: highPowerConsumption: true realTime: true Far edge clusters, latency critical workloads Optimized for absolute minimal latency and maximum determinism at the cost of increased power consumption. Per-pod power management workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Critical and non-critical workloads Allows for power management per pod. Example The following configuration is commonly used in a telco RAN DU deployment. apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: ... workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1 1 Disables some debugging and monitoring features that can affect system latency. Note When the realTime workload hint flag is set to true in a performance profile, add the cpu-quota.crio.io: disable annotation to every guaranteed pod with pinned CPUs. This annotation is necessary to prevent the degradation of the process performance within the pod. If the realTime workload hint is not explicitly set, it defaults to true . For more information how combinations of power consumption and real-time settings impact latency, see Understanding workload hints . 14.4. Configuring power saving for nodes that run colocated high and low priority workloads You can enable power savings for a node that has low priority workloads that are colocated with high priority workloads without impacting the latency or throughput of the high priority workloads. Power saving is possible without modifications to the workloads themselves. Important The feature is supported on Intel Ice Lake and later generations of Intel CPUs. The capabilities of the processor might impact the latency and throughput of the high priority workloads. Prerequisites You enabled C-states and operating system controlled P-states in the BIOS Procedure Generate a PerformanceProfile with the per-pod-power-management argument set to true : USD podman run --entrypoint performance-profile-creator -v \ /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 \ --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true \ --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node \ --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \ 1 --per-pod-power-management=true > my-performance-profile.yaml 1 The power-consumption-mode argument must be default or low-latency when the per-pod-power-management argument is set to true . Example PerformanceProfile with perPodPowerManagement apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true Set the default cpufreq governor as an additional kernel argument in the PerformanceProfile custom resource (CR): apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: ... additionalKernelArgs: - cpufreq.default_governor=schedutil 1 1 Using the schedutil governor is recommended, however, you can use other governors such as the ondemand or powersave governors. Set the maximum CPU frequency in the TunedPerformancePatch CR: spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1 1 The max_perf_pct controls the maximum frequency that the cpufreq driver is allowed to set as a percentage of the maximum supported cpu frequency. This value applies to all CPUs. You can check the maximum supported frequency in /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq . As a starting point, you can use a percentage that caps all CPUs at the All Cores Turbo frequency. The All Cores Turbo frequency is the frequency that all cores will run at when the cores are all fully occupied. Additional resources About the Performance Profile Creator Disabling power saving mode for high priority pods Managing device interrupt processing for guaranteed pod isolated CPUs 14.5. Restricting CPUs for infra and application containers Generic housekeeping and workload tasks use CPUs in a way that may impact latency-sensitive processes. By default, the container runtime uses all online CPUs to run all containers together, which can result in context switches and spikes in latency. Partitioning the CPUs prevents noisy processes from interfering with latency-sensitive processes by separating them from each other. The following table describes how processes run on a CPU after you have tuned the node using the Node Tuning Operator: Table 14.4. Process' CPU assignments Process type Details Burstable and BestEffort pods Runs on any CPU except where low latency workload is running Infrastructure pods Runs on any CPU except where low latency workload is running Interrupts Redirects to reserved CPUs (optional in OpenShift Container Platform 4.7 and later) Kernel processes Pins to reserved CPUs Latency-sensitive workload pods Pins to a specific set of exclusive CPUs from the isolated pool OS processes/systemd services Pins to reserved CPUs The allocatable capacity of cores on a node for pods of all QoS process types, Burstable , BestEffort , or Guaranteed , is equal to the capacity of the isolated pool. The capacity of the reserved pool is removed from the node's total core capacity for use by the cluster and operating system housekeeping duties. Example 1 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 25 cores to QoS Guaranteed pods and 25 cores for BestEffort or Burstable pods. This matches the capacity of the isolated pool. Example 2 A node features a capacity of 100 cores. Using a performance profile, the cluster administrator allocates 50 cores to the isolated pool and 50 cores to the reserved pool. The cluster administrator assigns 50 cores to QoS Guaranteed pods and one core for BestEffort or Burstable pods. This exceeds the capacity of the isolated pool by one core. Pod scheduling fails because of insufficient CPU capacity. 
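For reference, the following pod specification is a minimal sketch of a workload that would be counted against the isolated pool. The namespace, image, and runtime class name are placeholders, not part of the original examples; what matters is that requests equal limits and the CPU value is a whole number, which gives the pod Guaranteed QoS and makes it eligible for exclusive CPUs from the isolated pool.
Example Guaranteed QoS pod (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: latency-sensitive-app
spec:
  # Optional: the runtime class that the performance profile typically creates,
  # usually named performance-<profile_name>; omit it if you do not use one.
  runtimeClassName: performance-<profile_name>
  containers:
  - name: app
    image: <registry>/<latency-app-image>   # placeholder image
    resources:
      # Whole-number CPU requests that equal the limits give the pod Guaranteed QoS,
      # so the kubelet pins the container to exclusive CPUs from the isolated pool.
      requests:
        cpu: "2"
        memory: "1Gi"
      limits:
        cpu: "2"
        memory: "1Gi"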
The exact partitioning pattern to use depends on many factors like hardware, workload characteristics and the expected system load. Some sample use cases are as follows: If the latency-sensitive workload uses specific hardware, such as a network interface controller (NIC), ensure that the CPUs in the isolated pool are as close as possible to this hardware. At a minimum, you should place the workload in the same Non-Uniform Memory Access (NUMA) node. The reserved pool is used for handling all interrupts. When depending on system networking, allocate a sufficiently-sized reserve pool to handle all the incoming packet interrupts. In 4.17 and later versions, workloads can optionally be labeled as sensitive. The decision regarding which specific CPUs should be used for reserved and isolated partitions requires detailed analysis and measurements. Factors like NUMA affinity of devices and memory play a role. The selection also depends on the workload architecture and the specific use case. Important The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. To ensure that housekeeping tasks and workloads do not interfere with each other, specify two groups of CPUs in the spec section of the performance profile. isolated - Specifies the CPUs for the application container workloads. These CPUs have the lowest latency. Processes in this group have no interruptions and can, for example, reach much higher DPDK zero packet loss bandwidth. reserved - Specifies the CPUs for the cluster and operating system housekeeping duties. Threads in the reserved group are often busy. Do not run latency-sensitive applications in the reserved group. Latency-sensitive applications run in the isolated group. Procedure Create a performance profile appropriate for the environment's hardware and topology. Add the reserved and isolated parameters with the CPUs you want reserved and isolated for the infra and application containers: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: "0-4,9" 1 isolated: "5-8" 2 nodeSelector: 3 node-role.kubernetes.io/worker: "" 1 Specify which CPUs are for infra containers to perform cluster and operating system housekeeping duties. 2 Specify which CPUs are for application containers to run workloads. 3 Optional: Specify a node selector to apply the performance profile to specific nodes. 14.6. Configuring Hyper-Threading for a cluster To configure Hyper-Threading for an OpenShift Container Platform cluster, set the CPU threads in the performance profile to the same cores that are configured for the reserved or isolated CPU pools. Note If you configure a performance profile, and subsequently change the Hyper-Threading configuration for the host, ensure that you update the CPU isolated and reserved fields in the PerformanceProfile YAML to match the new configuration. Warning Disabling a previously enabled host Hyper-Threading configuration can cause the CPU core IDs listed in the PerformanceProfile YAML to be incorrect. This incorrect configuration can cause the node to become unavailable because the listed CPUs can no longer be found. Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI (oc). Procedure Ascertain which threads are running on what CPUs for the host you want to configure. 
You can view which threads are running on the host CPUs by logging in to the cluster and running the following command: USD lscpu --all --extended Example output CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000 In this example, there are eight logical CPU cores running on four physical CPU cores. CPU0 and CPU4 are running on physical Core0, CPU1 and CPU5 are running on physical Core 1, and so on. Alternatively, to view the threads that are set for a particular physical CPU core ( cpu0 in the example below), open a shell prompt and run the following: USD cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list Example output 0,4 Apply the isolated and reserved CPUs in the PerformanceProfile YAML. For example, you can set logical cores CPU0 and CPU4 as isolated , and logical cores CPU1 to CPU3 and CPU5 to CPU7 as reserved . When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. ... cpu: isolated: 0,4 reserved: 1-3,5-7 ... Note The reserved and isolated CPU pools must not overlap and together must span all available cores in the worker node. Important Hyper-Threading is enabled by default on most Intel processors. If you enable Hyper-Threading, all threads processed by a particular core must be isolated or processed on the same core. When Hyper-Threading is enabled, all guaranteed pods must use multiples of the simultaneous multi-threading (SMT) level to avoid a "noisy neighbor" situation that can cause the pod to fail. See Static policy options for more information. 14.6.1. Disabling Hyper-Threading for low latency applications When configuring clusters for low latency processing, consider whether you want to disable Hyper-Threading before you deploy the cluster. To disable Hyper-Threading, perform the following steps: Create a performance profile that is appropriate for your hardware and topology. Set nosmt as an additional kernel argument. The following example performance profile illustrates this setting: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true Note When you configure reserved and isolated CPUs, the infra containers in pods use the reserved CPUs and the application containers use the isolated CPUs. 14.7. Managing device interrupt processing for guaranteed pod isolated CPUs The Node Tuning Operator can manage host CPUs by dividing them into reserved CPUs for cluster and operating system housekeeping duties, including pod infra containers, and isolated CPUs for application containers to run the workloads. This allows you to set CPUs for low latency workloads as isolated. Device interrupts are load balanced between all isolated and reserved CPUs to avoid CPUs being overloaded, with the exception of CPUs where there is a guaranteed pod running. 
Guaranteed pod CPUs are prevented from processing device interrupts when the relevant annotations are set for the pod. In the performance profile, globallyDisableIrqLoadBalancing is used to manage whether device interrupts are processed or not. For certain workloads, the reserved CPUs are not always sufficient for dealing with device interrupts, and for this reason, device interrupts are not globally disabled on the isolated CPUs. By default, Node Tuning Operator does not disable device interrupts on isolated CPUs. 14.7.1. Finding the effective IRQ affinity setting for a node Some IRQ controllers lack support for IRQ affinity setting and will always expose all online CPUs as the IRQ mask. These IRQ controllers effectively run on CPU 0. The following are examples of drivers and hardware that Red Hat are aware lack support for IRQ affinity setting. The list is, by no means, exhaustive: Some RAID controller drivers, such as megaraid_sas Many non-volatile memory express (NVMe) drivers Some LAN on motherboard (LOM) network controllers The driver uses managed_irqs Note The reason they do not support IRQ affinity setting might be associated with factors such as the type of processor, the IRQ controller, or the circuitry connections in the motherboard. If the effective affinity of any IRQ is set to an isolated CPU, it might be a sign of some hardware or driver not supporting IRQ affinity setting. To find the effective affinity, log in to the host and run the following command: USD find /proc/irq -name effective_affinity -printf "%p: " -exec cat {} \; Example output /proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2 Some drivers use managed_irqs , whose affinity is managed internally by the kernel and userspace cannot change the affinity. In some cases, these IRQs might be assigned to isolated CPUs. For more information about managed_irqs , see Affinity of managed interrupts cannot be changed even if they target isolated CPU . 14.7.2. Configuring node interrupt affinity Configure a cluster node for IRQ dynamic load balancing to control which cores can receive device interrupt requests (IRQ). Prerequisites For core isolation, all server hardware components must support IRQ affinity. To check if the hardware components of your server support IRQ affinity, view the server's hardware specifications or contact your hardware provider. Procedure Log in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Set the performance profile apiVersion to use performance.openshift.io/v2 . Remove the globallyDisableIrqLoadBalancing field or set it to false . Set the appropriate isolated and reserved CPUs. 
The following snippet illustrates a profile that reserves 2 CPUs. IRQ load-balancing is enabled for pods running on the isolated CPU set: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1 ... Note When you configure reserved and isolated CPUs, operating system processes, kernel processes, and systemd services run on reserved CPUs. Infrastructure pods run on any CPU except where the low latency workload is running. Low latency workload pods run on exclusive CPUs from the isolated pool. For more information, see "Restricting CPUs for infra and application containers". 14.8. Configuring huge pages Nodes must pre-allocate huge pages used in an OpenShift Container Platform cluster. Use the Node Tuning Operator to allocate huge pages on a specific node. OpenShift Container Platform provides a method for creating and allocating huge pages. Node Tuning Operator provides an easier method for doing this using the performance profile. For example, in the hugepages pages section of the performance profile, you can specify multiple blocks of size , count , and, optionally, node : hugepages: defaultHugepagesSize: "1G" pages: - size: "1G" count: 4 node: 0 1 1 node is the NUMA node in which the huge pages are allocated. If you omit node , the pages are evenly spread across all NUMA nodes. Note Wait for the relevant machine config pool status that indicates the update is finished. These are the only configuration steps you need to do to allocate huge pages. Verification To verify the configuration, see the /proc/meminfo file on the node: USD oc debug node/ip-10-0-141-105.ec2.internal # grep -i huge /proc/meminfo Example output AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ## Use oc describe to report the new size: USD oc describe node worker-0.ocp4poc.example.com | grep -i huge Example output hugepages-1g=true hugepages-###: ### hugepages-###: ### 14.8.1. Allocating multiple huge page sizes You can request huge pages with different sizes under the same container. This allows you to define more complicated pods consisting of containers with different huge page size needs. For example, you can define sizes 1G and 2M and the Node Tuning Operator will configure both sizes on the node, as shown here: spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G 14.9. Reducing NIC queues using the Node Tuning Operator The Node Tuning Operator facilitates reducing NIC queues for enhanced performance. Adjustments are made using the performance profile, allowing customization of queues for different network devices. 14.9.1. Adjusting the NIC queues with the performance profile The performance profile lets you adjust the queue count for each network device. Supported network devices: Non-virtual network devices Network devices that support multiple queues (channels) Unsupported network devices: Pure software network interfaces Block devices Intel DPDK virtual functions Prerequisites Access to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ). Procedure Log in to the OpenShift Container Platform cluster running the Node Tuning Operator as a user with cluster-admin privileges. Create and apply a performance profile appropriate for your hardware and topology. For guidance on creating a profile, see the "Creating a performance profile" section. 
Edit this created performance profile: USD oc edit -f <your_profile_name>.yaml Populate the spec field with the net object. The object list can contain two fields: userLevelNetworking is a required field specified as a boolean flag. If userLevelNetworking is true , the queue count is set to the reserved CPU count for all supported devices. The default is false . devices is an optional field specifying a list of devices that will have the queues set to the reserved CPU count. If the device list is empty, the configuration applies to all network devices. The configuration is as follows: interfaceName : This field specifies the interface name, and it supports shell-style wildcards, which can be positive or negative. Example wildcard syntax is as follows: <string> .* Negative rules are prefixed with an exclamation mark. To apply the net queue changes to all devices other than the excluded list, use !<device> , for example, !eno1 . vendorID : The network device vendor ID represented as a 16-bit hexadecimal number with a 0x prefix. deviceID : The network device ID (model) represented as a 16-bit hexadecimal number with a 0x prefix. Note When a deviceID is specified, the vendorID must also be defined. A device that matches all of the device identifiers specified in a device entry interfaceName , vendorID , or a pair of vendorID plus deviceID qualifies as a network device. This network device then has its net queues count set to the reserved CPU count. When two or more devices are specified, the net queues count is set to any net device that matches one of them. Set the queue count to the reserved CPU count for all devices by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices matching any of the defined device identifiers by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth0" - interfaceName: "eth1" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices starting with the interface name eth by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "eth*" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices with an interface named anything other than eno1 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: "!eno1" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Set the queue count to the reserved CPU count for all devices that have an interface name eth0 , vendorID of 0x1af4 , and deviceID of 0x1000 by using this example performance profile: apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true 
devices: - interfaceName: "eth0" - vendorID: "0x1af4" deviceID: "0x1000" nodeSelector: node-role.kubernetes.io/worker-cnf: "" Apply the updated performance profile: USD oc apply -f <your_profile_name>.yaml Additional resources Creating a performance profile . 14.9.2. Verifying the queue status In this section, a number of examples illustrate different performance profiles and how to verify the changes are applied. Example 1 In this example, the net queue count is set to the reserved CPU count (2) for all supported devices. The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status before the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4 Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The combined channel shows that the total count of reserved CPUs for all supported devices is 2. This matches what is configured in the performance profile. Example 2 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices with a specific vendorID . The relevant section from the performance profile is: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4 # ... Display the status of the queues associated with a device using the following command: Note Run this command on the node where the performance profile was applied. USD ethtool -l <device> Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is 2. For example, if there is another network device ens2 with vendorID=0x1af4 it will also have total net queues of 2. This matches what is configured in the performance profile. Example 3 In this example, the net queue count is set to the reserved CPU count (2) for all supported network devices that match any of the defined device identifiers. The command udevadm info provides a detailed report on a device. In this example the devices are: # udevadm info -p /sys/class/net/ens4 ... E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4 ... # udevadm info -p /sys/class/net/eth0 ... E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0 ... Set the net queues to 2 for a device with interfaceName equal to eth0 and any devices that have a vendorID=0x1af4 with the following performance profile: apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4 ... 
Verify the queue status after the profile is applied: USD ethtool -l ens4 Example output Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1 1 The total count of reserved CPUs for all supported devices with vendorID=0x1af4 is set to 2. For example, if there is another network device ens2 with vendorID=0x1af4 , it will also have the total net queues set to 2. Similarly, a device with interfaceName equal to eth0 will have total net queues set to 2. 14.9.3. Logging associated with adjusting NIC queues Log messages detailing the assigned devices are recorded in the respective Tuned daemon logs. The following messages might be recorded to the /var/log/tuned/tuned.log file: An INFO message is recorded detailing the successfully assigned devices: INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3 A WARNING message is recorded if none of the devices can be assigned: WARNING tuned.plugins.base: instance net_test: no matching devices available
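To confirm on a running node which network devices the Tuned daemon actually matched, you can search the log file mentioned above directly from a debug pod. This is a minimal sketch rather than part of the original procedure: the node name is a placeholder, and it assumes the host file system is available under /host in the debug pod, which is the usual oc debug behavior.
oc debug node/<node_name> -- chroot /host grep -E 'assigning devices|no matching devices' /var/log/tuned/tuned.log
An INFO line listing the expected interfaces indicates that the devices entries in the performance profile matched; a WARNING line indicates that no device matched the configured interfaceName, vendorID, or deviceID values.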
[ "oc label node <node_name> node-role.kubernetes.io/worker-cnf=\"\" 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: name: worker-cnf 1 labels: machineconfiguration.openshift.io/role: worker-cnf 2 spec: machineConfigSelector: matchExpressions: - { key: machineconfiguration.openshift.io/role, operator: In, values: [worker, worker-cnf], } paused: false nodeSelector: matchLabels: node-role.kubernetes.io/worker-cnf: \"\" 3", "oc apply -f mcp-worker-cnf.yaml", "machineconfigpool.machineconfiguration.openshift.io/worker-cnf created", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c7c3c1b4ed5ffef95234d451490 True False False 3 3 3 0 6h46m worker rendered-worker-168f52b168f151e4f853259729b6azc4 True False False 2 2 2 0 6h46m worker-cnf rendered-worker-cnf-168f52b168f151e4f853259729b6azc4 True False False 1 1 1 0 73s", "oc adm must-gather", "tar cvaf must-gather.tar.gz <must_gather_folder> 1", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-58433c8c3c0b4ed5feef95434d455490 True False False 3 3 3 0 8h worker rendered-worker-668f56a164f151e4a853229729b6adc4 True False False 2 2 2 0 8h worker-cnf rendered-worker-cnf-668f56a164f151e4a853229729b6adc4 True False False 1 1 1 0 79m", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "podman run --rm --entrypoint performance-profile-creator registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 -h", "A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 --info log --must-gather-dir-path /must-gather", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "podman run --entrypoint performance-profile-creator -v <path_to_must_gather>:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --must-gather-dir-path /must-gather --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "level=info msg=\"Nodes targeted by worker-cnf MCP are: [worker-2]\" level=info msg=\"NUMA cell(s): 1\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"1 reserved CPUs allocated: 0 \" level=info msg=\"2 isolated CPUs allocated: 2-3\" level=info msg=\"Additional Kernel Args based on configuration: []\"", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "vi run-perf-profile-creator.sh", "#!/bin/bash readonly CONTAINER_RUNTIME=USD{CONTAINER_RUNTIME:-podman} readonly CURRENT_SCRIPT=USD(basename \"USD0\") readonly CMD=\"USD{CONTAINER_RUNTIME} run --entrypoint performance-profile-creator\" readonly IMG_EXISTS_CMD=\"USD{CONTAINER_RUNTIME} image exists\" readonly IMG_PULL_CMD=\"USD{CONTAINER_RUNTIME} image pull\" readonly MUST_GATHER_VOL=\"/must-gather\" NTO_IMG=\"registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17\" MG_TARBALL=\"\" DATA_DIR=\"\" usage() { print \"Wrapper usage:\" print \" USD{CURRENT_SCRIPT} [-h] [-p image][-t path] -- [performance-profile-creator flags]\" print \"\" print \"Options:\" print \" -h help for USD{CURRENT_SCRIPT}\" print \" -p Node Tuning Operator image\" print \" -t path to a must-gather tarball\" USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" && USD{CMD} \"USD{NTO_IMG}\" -h } function cleanup { [ -d \"USD{DATA_DIR}\" ] && rm -rf \"USD{DATA_DIR}\" } trap cleanup EXIT exit_error() { print \"error: USD*\" usage exit 1 } print() { echo \"USD*\" >&2 } check_requirements() { USD{IMG_EXISTS_CMD} \"USD{NTO_IMG}\" || USD{IMG_PULL_CMD} \"USD{NTO_IMG}\" || exit_error \"Node Tuning Operator image not found\" [ -n 
\"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file path is mandatory\" [ -f \"USD{MG_TARBALL}\" ] || exit_error \"Must-gather tarball file not found\" DATA_DIR=USD(mktemp -d -t \"USD{CURRENT_SCRIPT}XXXX\") || exit_error \"Cannot create the data directory\" tar -zxf \"USD{MG_TARBALL}\" --directory \"USD{DATA_DIR}\" || exit_error \"Cannot decompress the must-gather tarball\" chmod a+rx \"USD{DATA_DIR}\" return 0 } main() { while getopts ':hp:t:' OPT; do case \"USD{OPT}\" in h) usage exit 0 ;; p) NTO_IMG=\"USD{OPTARG}\" ;; t) MG_TARBALL=\"USD{OPTARG}\" ;; ?) exit_error \"invalid argument: USD{OPTARG}\" ;; esac done shift USD((OPTIND - 1)) check_requirements || exit 1 USD{CMD} -v \"USD{DATA_DIR}:USD{MUST_GATHER_VOL}:z\" \"USD{NTO_IMG}\" \"USD@\" --must-gather-dir-path \"USD{MUST_GATHER_VOL}\" echo \"\" 1>&2 } main \"USD@\"", "chmod a+x run-perf-profile-creator.sh", "podman login registry.redhat.io", "Username: <user_name> Password: <password>", "./run-perf-profile-creator.sh -h", "Wrapper usage: run-perf-profile-creator.sh [-h] [-p image][-t path] -- [performance-profile-creator flags] Options: -h help for run-perf-profile-creator.sh -p Node Tuning Operator image -t path to a must-gather tarball A tool that automates creation of Performance Profiles Usage: performance-profile-creator [flags] Flags: --disable-ht Disable Hyperthreading -h, --help help for performance-profile-creator --info string Show cluster information; requires --must-gather-dir-path, ignore the other arguments. [Valid values: log, json] (default \"log\") --mcp-name string MCP name corresponding to the target machines (required) --must-gather-dir-path string Must gather directory path (default \"must-gather\") --offlined-cpu-count int Number of offlined CPUs --per-pod-power-management Enable Per Pod Power Management --power-consumption-mode string The power consumption mode. [Valid values: default, low-latency, ultra-low-latency] (default \"default\") --profile-name string Name of the performance profile to be created (default \"performance\") --reserved-cpu-count int Number of reserved CPUs (required) --rt-kernel Enable Real Time Kernel (required) --split-reserved-cpus-across-numa Split the Reserved CPUs across NUMA nodes --topology-manager-policy string Kubelet Topology Manager Policy of the performance profile to be created. 
[Valid values: single-numa-node, best-effort, restricted] (default \"restricted\") --user-level-networking Run with User level Networking(DPDK) enabled --enable-hardware-tuning Enable setting maximum CPU frequencies", "./run-perf-profile-creator.sh -t /<path_to_must_gather_dir>/must-gather.tar.gz -- --info=log", "level=info msg=\"Cluster info:\" level=info msg=\"MCP 'master' nodes:\" level=info msg=--- level=info msg=\"MCP 'worker' nodes:\" level=info msg=\"Node: host.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=\"Node: host1.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=--- level=info msg=\"MCP 'worker-cnf' nodes:\" level=info msg=\"Node: host2.example.com (NUMA cells: 1, HT: true)\" level=info msg=\"NUMA cell 0 : [0 1 2 3]\" level=info msg=\"CPU(s): 4\" level=info msg=---", "./run-perf-profile-creator.sh -t /path-to-must-gather/must-gather.tar.gz -- --mcp-name=worker-cnf --reserved-cpu-count=1 --rt-kernel=true --split-reserved-cpus-across-numa=false --power-consumption-mode=ultra-low-latency --offlined-cpu-count=1 > my-performance-profile.yaml", "cat my-performance-profile.yaml", "--- apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: cpu: isolated: 2-3 offlined: \"1\" reserved: \"0\" machineConfigPoolSelector: machineconfiguration.openshift.io/role: worker-cnf nodeSelector: node-role.kubernetes.io/worker-cnf: \"\" numa: topologyPolicy: restricted realTimeKernel: enabled: true workloadHints: highPowerConsumption: true perPodPowerManagement: false realTime: true", "oc apply -f my-performance-profile.yaml", "performanceprofile.performance.openshift.io/performance created", "Error: failed to compute the reserved and isolated CPUs: please ensure that reserved-cpu-count plus offlined-cpu-count should be in the range [0,1]", "Error: failed to compute the reserved and isolated CPUs: please specify the offlined CPU count in the range [0,1]", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: cnf-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - default_hugepagesz=1GB - hugepagesz=1G - intel_iommu=on cpu: isolated: <CPU_ISOLATED> reserved: <CPU_RESERVED> hugepages: defaultHugepagesSize: 1G pages: - count: <HUGEPAGES_COUNT> node: 0 size: 1G nodeSelector: node-role.kubernetes.io/worker: '' realTimeKernel: enabled: false globallyDisableIrqLoadBalancing: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" 
nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: # if you change this name make sure the 'include' line in TunedPerformancePatch.yaml # matches this name: include=openshift-node-performance-USD{PerformanceProfile.metadata.name} # Also in file 'validatorCRs/informDuValidator.yaml': # name: 50-performance-USD{PerformanceProfile.metadata.name} name: openshift-node-performance-profile annotations: ran.openshift.io/reference-configuration: \"ran-du.redhat.com\" spec: additionalKernelArgs: - \"rcupdate.rcu_normal_after_boot=0\" - \"efi=runtime\" - \"vfio_pci.enable_sriov=1\" - \"vfio_pci.disable_idle_d3=1\" - \"module_blacklist=irdma\" cpu: isolated: USDisolated reserved: USDreserved hugepages: defaultHugepagesSize: USDdefaultHugepagesSize pages: - size: USDsize count: USDcount node: USDnode machineConfigPoolSelector: pools.operator.machineconfiguration.openshift.io/USDmcp: \"\" nodeSelector: node-role.kubernetes.io/USDmcp: '' numa: topologyPolicy: \"restricted\" # To use the standard (non-realtime) kernel, set enabled to false realTimeKernel: enabled: true workloadHints: # WorkloadHints defines the set of upper level flags for different type of workloads. # See https://github.com/openshift/cluster-node-tuning-operator/blob/master/docs/performanceprofile/performance_profile.md#workloadhints # for detailed descriptions of each item. # The configuration below is set for a low latency, performance mode. realTime: true highPowerConsumption: false perPodPowerManagement: false", "workloadHints: highPowerConsumption: false realTime: false", "workloadHints: highPowerConsumption: false realTime: true", "workloadHints: highPowerConsumption: true realTime: true", "workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: workload-hints spec: workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: false 1", "podman run --entrypoint performance-profile-creator -v /must-gather:/must-gather:z registry.redhat.io/openshift4/ose-cluster-node-tuning-rhel9-operator:v4.17 --mcp-name=worker-cnf --reserved-cpu-count=20 --rt-kernel=true --split-reserved-cpus-across-numa=false --topology-manager-policy=single-numa-node --must-gather-dir-path /must-gather --power-consumption-mode=low-latency \\ 1 --per-pod-power-management=true > my-performance-profile.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: [.....] 
workloadHints: realTime: true highPowerConsumption: false perPodPowerManagement: true", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: performance spec: additionalKernelArgs: - cpufreq.default_governor=schedutil 1", "spec: profile: - data: | [sysfs] /sys/devices/system/cpu/intel_pstate/max_perf_pct = <x> 1", "\\ufeffapiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: infra-cpus spec: cpu: reserved: \"0-4,9\" 1 isolated: \"5-8\" 2 nodeSelector: 3 node-role.kubernetes.io/worker: \"\"", "lscpu --all --extended", "CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ 0 0 0 0 0:0:0:0 yes 4800.0000 400.0000 1 0 0 1 1:1:1:0 yes 4800.0000 400.0000 2 0 0 2 2:2:2:0 yes 4800.0000 400.0000 3 0 0 3 3:3:3:0 yes 4800.0000 400.0000 4 0 0 0 0:0:0:0 yes 4800.0000 400.0000 5 0 0 1 1:1:1:0 yes 4800.0000 400.0000 6 0 0 2 2:2:2:0 yes 4800.0000 400.0000 7 0 0 3 3:3:3:0 yes 4800.0000 400.0000", "cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list", "0-4", "cpu: isolated: 0,4 reserved: 1-3,5-7", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: example-performanceprofile spec: additionalKernelArgs: - nmi_watchdog=0 - audit=0 - mce=off - processor.max_cstate=1 - idle=poll - intel_idle.max_cstate=0 - nosmt cpu: isolated: 2-3 reserved: 0-1 hugepages: defaultHugepagesSize: 1G pages: - count: 2 node: 0 size: 1G nodeSelector: node-role.kubernetes.io/performance: '' realTimeKernel: enabled: true", "find /proc/irq -name effective_affinity -printf \"%p: \" -exec cat {} \\;", "/proc/irq/0/effective_affinity: 1 /proc/irq/1/effective_affinity: 8 /proc/irq/2/effective_affinity: 0 /proc/irq/3/effective_affinity: 1 /proc/irq/4/effective_affinity: 2 /proc/irq/5/effective_affinity: 1 /proc/irq/6/effective_affinity: 1 /proc/irq/7/effective_affinity: 1 /proc/irq/8/effective_affinity: 1 /proc/irq/9/effective_affinity: 2 /proc/irq/10/effective_affinity: 1 /proc/irq/11/effective_affinity: 1 /proc/irq/12/effective_affinity: 4 /proc/irq/13/effective_affinity: 1 /proc/irq/14/effective_affinity: 1 /proc/irq/15/effective_affinity: 1 /proc/irq/24/effective_affinity: 2 /proc/irq/25/effective_affinity: 4 /proc/irq/26/effective_affinity: 2 /proc/irq/27/effective_affinity: 1 /proc/irq/28/effective_affinity: 8 /proc/irq/29/effective_affinity: 4 /proc/irq/30/effective_affinity: 4 /proc/irq/31/effective_affinity: 8 /proc/irq/32/effective_affinity: 8 /proc/irq/33/effective_affinity: 1 /proc/irq/34/effective_affinity: 2", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: dynamic-irq-profile spec: cpu: isolated: 2-5 reserved: 0-1", "hugepages: defaultHugepagesSize: \"1G\" pages: - size: \"1G\" count: 4 node: 0 1", "oc debug node/ip-10-0-141-105.ec2.internal", "grep -i huge /proc/meminfo", "AnonHugePages: ###### ## ShmemHugePages: 0 kB HugePages_Total: 2 HugePages_Free: 2 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: #### ## Hugetlb: #### ##", "oc describe node worker-0.ocp4poc.example.com | grep -i huge", "hugepages-1g=true hugepages-###: ### hugepages-###: ###", "spec: hugepages: defaultHugepagesSize: 1G pages: - count: 1024 node: 0 size: 2M - count: 4 node: 1 size: 1G", "oc edit -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile 
metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - interfaceName: \"eth1\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth*\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"!eno1\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "apiVersion: performance.openshift.io/v2 kind: PerformanceProfile metadata: name: manual spec: cpu: isolated: 3-51,55-103 reserved: 0-2,52-54 net: userLevelNetworking: true devices: - interfaceName: \"eth0\" - vendorID: \"0x1af4\" deviceID: \"0x1000\" nodeSelector: node-role.kubernetes.io/worker-cnf: \"\"", "oc apply -f <your_profile_name>.yaml", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - vendorID = 0x1af4", "ethtool -l <device>", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "udevadm info -p /sys/class/net/ens4 E: ID_MODEL_ID=0x1000 E: ID_VENDOR_ID=0x1af4 E: INTERFACE=ens4", "udevadm info -p /sys/class/net/eth0 E: ID_MODEL_ID=0x1002 E: ID_VENDOR_ID=0x1001 E: INTERFACE=eth0", "apiVersion: performance.openshift.io/v2 metadata: name: performance spec: kind: PerformanceProfile spec: cpu: reserved: 0-1 #total = 2 isolated: 2-8 net: userLevelNetworking: true devices: - interfaceName = eth0 - vendorID = 0x1af4", "ethtool -l ens4", "Channel parameters for ens4: Pre-set maximums: RX: 0 TX: 0 Other: 0 Combined: 4 Current hardware settings: RX: 0 TX: 0 Other: 0 Combined: 2 1", "INFO tuned.plugins.base: instance net_test (net): assigning devices ens1, ens2, ens3", "WARNING tuned.plugins.base: instance net_test: no matching devices available" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/cnf-tuning-low-latency-nodes-with-perf-profile
4.281. sblim-cmpi-fsvol
4.281. sblim-cmpi-fsvol 4.281.1. RHBA-2011:1549 - sblim-cmpi-fsvol bug fix and enhancement update An updated sblim-cmpi-fsvol package that fixes several bugs and provides various enhancements is now available for Red Hat Enterprise Linux 6. The sblim-cmpi-fsvol package provides the filesystem and volume management instrumentation allowing users to obtain information about mounted and unmounted file systems by use of CIMOM technology and infrastructure. The sblim-cmpi-fsvol package has been upgraded to upstream version 1.5.1, which includes the Linux_CSProcessor class registration fix, and provides a number of other bug fixes and enhancements over the previous version. (BZ# 694506 ) Bug Fix BZ# 663833 CIMOM did not collect any information about ext4 file systems because the Linux_Ext4FileSystem class was not defined. This class has been defined and information about ext4 file systems is now collected properly. All users of sblim-cmpi-fsvol are advised to upgrade to this updated sblim-cmpi-fsvol package, which resolves these issues and adds these enhancements.
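To illustrate the kind of query this instrumentation serves, the following enumeration is a hypothetical sketch rather than part of this advisory: it assumes the sblim-wbemcli client is installed and a CIMOM is listening on the default port 5988 with the credentials shown.
wbemcli ei 'http://root:password@localhost:5988/root/cimv2:Linux_Ext4FileSystem'
With the Linux_Ext4FileSystem class now defined, such an enumeration should return instances for mounted ext4 file systems instead of an empty result.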
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/sblim-cmpi-fsvol
Appendix A. Reference Material
Appendix A. Reference Material A.1. Provided Undertow Handlers Note For the complete list of handlers, you must check the source JAR file of the Undertow core in the version that matches the Undertow core in your JBoss EAP installation. You can download the Undertow core source JAR file from the JBoss EAP Maven Repository , and then refer to the available handlers in the /io/undertow/server/handlers/ directory. You can verify the Undertow core version used in your current installation of JBoss EAP by searching the server.log file for the INFO message that is printed during JBoss EAP server startup, similar to the one shown in the example below: AccessControlListHandler Class Name: io.undertow.server.handlers.AccessControlListHandler Name: access-control Handler that can accept or reject a request based on an attribute of the remote peer. Table A.1. Parameters Name Description acl ACL rules. This parameter is required. attribute Exchange attribute string. This parameter is required. default-allow Boolean specifying whether handler accepts or rejects a request by default. Defaults to false . AccessLogHandler Class Name: io.undertow.server.handlers.accesslog.AccessLogHandler Name: access-log Access log handler. This handler generates access log messages based on the provided format string and pass these messages into the provided AccessLogReceiver . This handler can log any attribute that is provided via the ExchangeAttribute mechanism. This factory produces token handlers for the following patterns. Table A.2. Patterns Pattern Description %a Remote IP address %A Local IP address %b Bytes sent, excluding HTTP headers or - if no bytes were sent %B Bytes sent, excluding HTTP headers %h Remote host name %H Request protocol %l Remote logical username from identd (always returns - ) %m Request method %p Local port %q Query string (excluding the ? character) %r First line of the request %s HTTP status code of the response %t Date and time, in Common Log Format format %u Remote user that was authenticated %U Requested URL path %v Local server name %D Time taken to process the request, in milliseconds %T Time taken to process the request, in seconds %I Current Request thread name (can compare later with stack traces) common %h %l %u %t "%r" %s %b combined %h %l %u %t "%r" %s %b "%{i,Referer}" "%{i,User-Agent}" There is also support to write information from the cookie, incoming header, or the session. It is modeled after the Apache syntax: %{i,xxx} for incoming headers %{o,xxx} for outgoing response headers %{c,xxx} for a specific cookie %{r,xxx} where xxx is an attribute in the ServletRequest %{s,xxx} where xxx is an attribute in the HttpSession Table A.3. Parameters Name Description format Format used to generate the log messages. This is the default parameter . AllowedMethodsHandler Handler that whitelists certain HTTP methods. Only requests with a method in the allowed methods set are allowed to continue. Class Name: io.undertow.server.handlers.AllowedMethodsHandler Name: allowed-methods Table A.4. Parameters Name Description methods Methods to allow, for example GET , POST , PUT , and so on. This is the default parameter . BlockingHandler An HttpHandler that initiates a blocking request. If the thread is currently running in the I/O thread it is dispatched. Class Name: io.undertow.server.handlers.BlockingHandler Name: blocking This handler has no parameters. ByteRangeHandler Handler for range requests. 
This is a generic handler that can handle range requests to any resource of a fixed content length, for example, any resource where the content-length header has been set. This is not necessarily the most efficient way to handle range requests, as the full content is generated and then discarded. At present this handler can only handle simple, single range requests. If multiple ranges are requested the Range header is ignored. Class Name: io.undertow.server.handlers.ByteRangeHandler Name: byte-range Table A.5. Parameters Name Description send-accept-ranges Boolean value on whether or not to send accept ranges. This is the default parameter . CanonicalPathHandler This handler transforms a relative path to a canonical path. Class Name: io.undertow.server.handlers.CanonicalPathHandler Name: canonical-path This handler has no parameters. DisableCacheHandler Handler that disables response caching by browsers and proxies. Class Name: io.undertow.server.handlers.DisableCacheHandler Name: disable-cache This handler has no parameters. DisallowedMethodsHandler Handler that blacklists certain HTTP methods. Class Name: io.undertow.server.handlers.DisallowedMethodsHandler Name: disallowed-methods Table A.6. Parameters Name Description methods Methods to disallow, for example GET , POST , PUT , and so on. This is the default parameter . EncodingHandler This handler serves as the basis for content encoding implementations. Encoding handlers are added as delegates to this handler, with a specified server side priority. The q value will be used to determine the correct handler. If a request comes in with no q value then the server picks the handler with the highest priority as the encoding to use. If no handler matches then the identity encoding is assumed. If the identity encoding has been specifically disallowed due to a q value of 0 then the handler sets the response code 406 (Not Acceptable) and returns. Class Name: io.undertow.server.handlers.encoding.EncodingHandler Name: compress This handler has no parameters. FileErrorPageHandler Handler that serves up a file from disk to serve as an error page. This handler does not serve up any response codes by default, you must configure the response codes it responds to. Class Name: io.undertow.server.handlers.error.FileErrorPageHandler Name: error-file Table A.7. Parameters Name Description file Location of file to serve up as an error page. response-codes List of response codes that result in a redirect to the defined error page file. HttpTraceHandler A handler that handles HTTP trace requests. Class Name: io.undertow.server.handlers.HttpTraceHandler Name: trace This handler has no parameters. IPAddressAccessControlHandler Handler that can accept or reject a request based on the IP address of the remote peer. Class Name: io.undertow.server.handlers.IPAddressAccessControlHandler Name: ip-access-control Table A.8. Parameters Name Description acl String representing the access control list. This is the default parameter . failure-status Integer representing the status code to return on rejected requests. default-allow Boolean representing whether or not to allow by default. JDBCLogHandler Class Name: io.undertow.server.handlers.JDBCLogHandler Name: jdbc-access-log Table A.9. Parameters Name Description format Specifies the JDBC Log pattern. Default value is common . You can also use combined , which adds the VirtualHost, request method, referrer, and user agent information to the log message. datasource Name of the datasource to log. 
This parameter is required and is the default parameter . tableName Table name. remoteHostField Remote Host address. userField Username. timestampField Timestamp. virtualHostField VirtualHost. methodField Method. queryField Query. statusField Status. bytesField Bytes. refererField Referrer. userAgentField UserAgent. LearningPushHandler Handler that builds up a cache of resources that a browser requests, and uses server push to push them when supported. Class Name: io.undertow.server.handlers.LearningPushHandler Name: learning-push Table A.10. Parameters Name Description max-age Integer representing the maximum time of a cache entry. max-entries Integer representing the maximum number of cache entries LocalNameResolvingHandler A handler that performs DNS lookup to resolve a local address. Unresolved local address can be created when a front end server has sent a X-forwarded-host header or AJP is in use. Class Name: io.undertow.server.handlers.LocalNameResolvingHandler Name: resolve-local-name This handler has no parameters. PathSeparatorHandler A handler that translates non-slash separator characters in the URL into a slash. In general this will translate backslash into slash on Windows systems. Class Name: io.undertow.server.handlers.PathSeparatorHandler Name: path-separator This handler has no parameters. PeerNameResolvingHandler A handler that performs reverse DNS lookup to resolve a peer address. Class Name: io.undertow.server.handlers.PeerNameResolvingHandler Name: resolve-peer-name This handler has no parameters. ProxyPeerAddressHandler Handler that sets the peer address to the value of the X-Forwarded-For header. This should only be used behind a proxy that always sets this header, otherwise it is possible for an attacker to forge their peer address. Class Name: io.undertow.server.handlers.ProxyPeerAddressHandler Name: proxy-peer-address This handler has no parameters. RedirectHandler A redirect handler that redirects to the specified location via a 302 redirect. The location is specified as an exchange attribute string. Class Name: io.undertow.server.handlers.RedirectHandler Name: redirect Table A.11. Parameters Name Description value Destination for the redirect. This is the default parameter . RequestBufferingHandler Handler that buffers all request data. Class Name: io.undertow.server.handlers.RequestBufferingHandler Name: buffer-request Table A.12. Parameters Name Description buffers Integer that defines the maximum number of buffers. This is the default parameter . RequestDumpingHandler Handler that dumps an exchange to a log. Class Name: io.undertow.server.handlers.RequestDumpingHandler Name: dump-request This handler has no parameters. RequestLimitingHandler A handler that limits the maximum number of concurrent requests. Requests beyond the limit will block until the request is complete. Class Name: io.undertow.server.handlers.RequestLimitingHandler Name: request-limit Table A.13. Parameters Name Description requests Integer that represents the maximum number of concurrent requests. This is the default parameter and is required. ResourceHandler A handler for serving resources. Class Name: io.undertow.server.handlers.resource.ResourceHandler Name: resource Table A.14. Parameters Name Description location Location of resources. This is the default parameter and is required. allow-listing Boolean value to determine whether or not to allow directory listings. ResponseRateLimitingHandler Handler that limits the download rate to a set number of bytes/time. 
Class Name: io.undertow.server.handlers.ResponseRateLimitingHandler Name: response-rate-limit Table A.15. Parameters Name Description bytes Number of bytes to limit the download rate. This parameter is required. time Time in seconds to limit the download rate. This parameter is required. SetHeaderHandler A handler that sets a fixed response header. Class Name: io.undertow.server.handlers.SetHeaderHandler Name: header Table A.16. Parameters Name Description header Name of header attribute. This parameter is required. value Value of header attribute. This parameter is required. SSLHeaderHandler Handler that sets SSL information on the connection based on the following headers: SSL_CLIENT_CERT SSL_CIPHER SSL_SESSION_ID If this handler is present in the chain it always overrides the SSL session information, even if these headers are not present. This handler must only be used on servers that are behind a reverse proxy, where the reverse proxy has been configured to always set these headers for every request or to strip existing headers with these names if no SSL information is present. Otherwise it might be possible for a malicious client to spoof an SSL connection. Class Name: io.undertow.server.handlers.SSLHeaderHandler Name: ssl-headers This handler has no parameters. StuckThreadDetectionHandler This handler detects requests that take a long time to process, which might indicate that the thread that is processing it is stuck. Class Name: io.undertow.server.handlers.StuckThreadDetectionHandler Name: stuck-thread-detector Table A.17. Parameters Name Description threshhold Integer value in seconds that determines the threshold for how long a request should take to process. Default value is 600 (10 minutes). This is the default parameter . URLDecodingHandler A handler that decodes the URL and query parameters to the specified charset. If you are using this handler you must set the UndertowOptions.DECODE_URL parameter to false . This is not as efficient as using the parser's built in UTF-8 decoder. Unless you need to decode to something other than UTF-8 you should rely on the parsers decoding instead. Class Name: io.undertow.server.handlers.URLDecodingHandler Name: url-decoding Table A.18. Parameters Name Description charset Charset to decode. This is the default parameter and it is required. A.2. Persistence Unit Properties Persistence unit definition supports the following properties, which can be configured from the persistence.xml file. Property Description jboss.as.jpa.providerModule Name of the persistence provider module. Default is org.hibernate . Should be the application name if a persistence provider is packaged with the application. jboss.as.jpa.adapterModule Name of the integration classes that help JBoss EAP to work with the persistence provider. jboss.as.jpa.adapterClass Class name of the integration adapter. jboss.as.jpa.managed Set to false to disable container-managed Jakarta Persistence access to the persistence unit. The default is true . jboss.as.jpa.classtransformer Set to false to disable class transformers for the persistence unit. The default is true , which allows class transforming. Hibernate also needs persistence unit property hibernate.ejb.use_class_enhancer to be true for class transforming to be enabled. jboss.as.jpa.scopedname Specify the qualified application-scoped persistence unit name to be used. By default, this is set to the application name and persistence unit name, collectively. 
The hibernate.cache.region_prefix defaults to whatever you set jboss.as.jpa.scopedname to. Make sure you set the jboss.as.jpa.scopedname value to a value not already in use by other applications deployed on the same application server instance. jboss.as.jpa.deferdetach Controls whether transaction-scoped persistence context used in non-Jakarta Transactions transaction thread, will detach loaded entities after each EntityManager invocation or when the persistence context is closed. The default value is false . If set to true , the detach is deferred until the context is closed. wildfly.jpa.default-unit Set to true to choose the default persistence unit in an application. This is useful if you inject a persistence context without specifying the unitName , but have multiple persistence units specified in your persistence.xml file. wildfly.jpa.twophasebootstrap Persistence providers allow a two-phase persistence unit bootstrap, which improves Jakarta Persistence integration with Jakarta Contexts and Dependency Injection. Setting the wildfly.jpa.twophasebootstrap value to false disables the two-phase bootstrap for the persistence unit that contains the value. wildfly.jpa.allowdefaultdatasourceuse Set to false to prevent persistence unit from using the default datasource. The default value is true . This is only important for persistence units that do not specify a datasource. wildfly.jpa.hibernate.search.module Controls which version of Hibernate Search to include on the classpath. The default is auto ; other valid values are none or a full module identifier to use an alternative version. A.3. Policy Provider Properties Table A.19. policy-provider Attributes Property Description custom-policy A custom policy provider definition. jacc-policy A policy provider definition that sets up Jakarta Authorization and related services. Table A.20. custom-policy Attributes Property Description class-name The name of a java.security.Policy implementation referencing a policy provider. module The name of the module to load the provider from. Table A.21. jacc-policy Attributes Property Description policy The name of a java.security.Policy implementation referencing a policy provider. configuration-factory The name of a javax.security.jacc.PolicyConfigurationFactory implementation referencing a policy configuration factory provider. module The name of the module to load the provider from. A.4. Jakarta EE Profiles and Technologies Reference The following tables list the Jakarta EE technologies by category and note whether they are included in the Web Profile or Full Platform profiles. Jakarta EE Web Application Technologies Jakarta EE Enterprise Application Technologies Jakarta EE Web Services Technologies Jakarta EE Management and Security Technologies See Jakarta EE Specification for the specifications. Table A.22. Jakarta EE Web Application Technologies Technology Web Profile Full Platform Jakarta WebSocket 1.1 ✔ ✔ Jakarta JSON Binding 1.0 ✔ ✔ Jakarta JSON Processing 1.1 ✔ ✔ Jakarta Servlet 4.0 ✔ ✔ Jakarta Server Faces 2.3 ✔ ✔ Jakarta Expression Language 3.0 ✔ ✔ Jakarta Server Pages 2.3 ✔ ✔ Jakarta Standard Tag Library 1.2 1 ✔ ✔ 1 Additional Jakarta Standard Tag Library information: Note A known security risk in JBoss EAP exists where the Jakarta Standard Tag Library allows the processing of external entity references in untrusted XML documents which could access resources on the host system and, potentially, allow arbitrary code execution. 
To avoid this, the JBoss EAP server has to be run with system property org.apache.taglibs.standard.xml.accessExternalEntity correctly set, usually with an empty string as value. This can be done in two ways: Configuring the system properties and restarting the server. Passing -Dorg.apache.taglibs.standard.xml.accessExternalEntity="" as an argument to the standalone.sh or domain.sh scripts. Table A.23. Jakarta EE Enterprise Application Technologies Technology Web Profile Full Platform Jakarta Batch 1.0 ✔ Jakarta Concurrency 1.0 ✔ Jakarta Contexts and Dependency Injection 2.0 ✔ ✔ Jakarta Contexts and Dependency Injection 1.0 ✔ ✔ Jakarta Bean Validation 2.0 ✔ ✔ Jakarta Managed Beans 1.0 ✔ ✔ Jakarta Enterprise Beans 3.2 ✔ Jakarta Interceptors 1.2 ✔ ✔ Jakarta Connectors 1.7 ✔ Jakarta Persistence 2.2 ✔ ✔ Jakarta Annotations 1.3 ✔ Jakarta Messaging 2.0 ✔ Jakarta Transactions 1.2 ✔ ✔ Jakarta Mail 1.6 ✔ Table A.24. Jakarta EE Web Services Technologies Technology Web Profile Full Platform Jakarta RESTful Web Services 2.1 ✔ Jakarta Enterprise Web Services 1.3 ✔ Web Services Metadata for the Java Platform 2.1 ✔ Jakarta XML RPC 1.1 (Optional) Jakarta XML Registries 1.0 (Optional) Table A.25. Jakarta EE Management and Security Technologies Technology Web Profile Full Platform Jakarta Security 1.0 ✔ ✔ Jakarta Authentication 1.1 ✔ ✔ Jakarta Authorization 1.5 ✔ Jakarta Deployment 1.2 (Optional) ✔ Jakarta Management 1.1 ✔ Jakarta Debugging Support for Other Languages 1.0 ✔ Revised on 2024-01-17 05:24:58 UTC
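As an illustration of how the handler names listed in section A.1 are typically wired into the server, the following management CLI sketch defines an expression filter that uses the request-limit handler and references it from a host. This is a minimal sketch under assumptions: the filter name limit-requests is arbitrary, and the default-server and default-host addresses must match your actual Undertow configuration.
/subsystem=undertow/configuration=filter/expression-filter=limit-requests:add(expression="request-limit(requests=100)")
/subsystem=undertow/server=default-server/host=default-host/filter-ref=limit-requests:add
Because requests is the default parameter of the request-limit handler, the shorter expression request-limit(100) should also be accepted.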
[ "INFO [org.wildfly.extension.undertow] (MSC service thread 1-1) WFLYUT0003: Undertow 1.4.18.Final-redhat-1 starting", "org.apache.taglibs.standard.xml.accessExternalEntity" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/development_guide/reference_material
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation. The Jira issue will be created in the Red Hat OpenStack Platform Jira project, where you can track the progress of your feedback. Ensure that you are logged in to Jira. If you do not have a Jira account, create an account to submit feedback. Click the following link to open the Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/proc_providing-feedback-on-red-hat-documentation
Chapter 1. Release notes
Chapter 1. Release notes 1.1. Logging 5.9 Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.1.1. Logging 5.9.12 This release includes RHSA-2025:1985 . 1.1.1.1. CVEs CVE-2020-11023 CVE-2022-49043 CVE-2024-12797 CVE-2025-25184 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.2. Logging 5.9.11 This release includes RHSA-2025:1227 . 1.1.2.1. Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6581 ) 1.1.2.2. Bug Fixes Before this update, the collector container mounted all log sources. With this update, it mounts only the defined input sources. ( LOG-5691 ) Before this update, fluentd ignored the no_proxy setting when using the HTTP output. With this update, the no_proxy setting is picked up correctly. ( LOG-6586 ) Before this update, clicking on "more logs" from the pod detail view triggered a false permission error due to a missing namespace parameter required for authorization. With this update, clicking "more logs" includes the namespace parameter, preventing the permission error and allowing access to more logs. ( LOG-6645 ) Before this update, specifying syslog.addLogSource added namespace_name , container_name , and pod_name to the messages of non-container logs. With this update, only container logs will include namespace_name , container_name , and pod_name in their messages when syslog.addLogSource is set. ( LOG-6656 ) 1.1.2.3. CVEs CVE-2024-12085 CVE-2024-47220 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.3. Logging 5.9.10 This release includes RHSA-2024:10990 . 1.1.3.1. Bug Fixes Before this update, any namespace containing openshift or kube was treated as an infrastructure namespace. With this update, only the following namespaces are treated as infrastructure namespaces: default , kube , openshift , and namespaces that begin with openshift- or kube- . ( LOG-6044 ) Before this update, Loki attempted to detect the level of log messages, which caused confusion when the collector also detected log levels and produced different results. With this update, automatic log level detection in Loki is disabled. ( LOG-6321 ) Before this update, when the ClusterLogForwarder custom resource defined tls.insecureSkipVerify: true in combination with type: http and an HTTP URL, the certificate validation was not skipped. This misconfiguration caused the collector to fail because it attempted to validate certificates despite the setting. With this update, when tls.insecureSkipVerify: true is set, the URL is checked for the HTTPS. An HTTP URL will cause a misconfiguration error. ( LOG-6376 ) Before this update, when any infrastructure namespaces were specified in the application inputs in the ClusterLogForwarder custom resource, logs were generated with the incorrect log_type: application tags. With this update, when any infrastructure namespaces are specified in the application inputs, logs are generated with the correct log_type: infrastructure tags. 
( LOG-6377 ) Important When updating to Logging for Red Hat OpenShift 5.9.10, if you previously added any infrastructure namespaces in the application inputs in the ClusterLogForwarder custom resource, you must add the permissions for collecting logs from infrastructure namespaces. For more details, see "Setting up log collection". 1.1.3.2. CVEs CVE-2024-2236 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 CVE-2024-55565 1.1.4. Logging 5.9.9 This release includes RHBA-2024:10049 . 1.1.4.1. Bug fixes Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. ( LOG-6201 ) Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. ( LOG-6293 ) 1.1.4.2. CVEs CVE-2024-6119 1.1.5. Logging 5.9.8 This release includes OpenShift Logging Bug Fix Release 5.9.8 . 1.1.5.1. Bug fixes Before this update, the Loki Operator failed to add the default namespace label to all AlertingRule resources, which caused the User-Workload-Monitoring Alertmanager to skip routing these alerts. This update adds the rule namespace as a label to all alerting and recording rules, resolving the issue and restoring proper alert routing in Alertmanager. ( LOG-6181 ) Before this update, the LokiStack ruler component view did not initialize properly, causing an invalid field error when the ruler component was disabled. This update ensures that the component view initializes with an empty value, resolving the issue. ( LOG-6183 ) Before this update, an LF character in the vector.toml file under the ES authentication configuration caused the collector pods to crash. This update removes the newline characters from the username and password fields, resolving the issue. ( LOG-6206 ) Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0 , which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected. ( LOG-6214 ) 1.1.5.2. CVEs ( CVE-2024-24791 ) ( CVE-2024-34155 ) ( CVE-2024-34156 ) ( CVE-2024-34158 ) ( CVE-2024-6119 ( CVE-2024-45490 ( CVE-2024-45491 ( CVE-2024-45492 1.1.6. Logging 5.9.7 This release includes OpenShift Logging Bug Fix Release 5.9.7 . 1.1.6.1. Bug fixes Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the clusterlogforwarder.spec.outputs.http.timeout parameter is now correctly applied, ensuring Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. ( LOG-6125 ) Before this update, the TLS section was added without verifying the broker URL schema, resulting in SSL connection errors if the URLs did not start with tls . With this update, the TLS section is now added only if the broker URLs start with tls , preventing SSL connection errors. ( LOG-6041 ) 1.1.6.2. 
CVEs CVE-2024-6104 CVE-2024-6119 CVE-2024-34397 CVE-2024-45296 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-45801 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.1.7. Logging 5.9.6 This release includes OpenShift Logging Bug Fix Release 5.9.6 . 1.1.7.1. Bug fixes Before this update, the collector deployment ignored secret changes, causing receivers to reject logs. With this update, the system rolls out a new pod when there is a change in the secret value, ensuring that the collector reloads the updated secrets. ( LOG-5525 ) Before this update, the Vector could not correctly parse field values that included a single dollar sign ( USD ). With this update, field values with a single dollar sign are automatically changed to two dollar signs ( USDUSD ), ensuring proper parsing by the Vector. ( LOG-5602 ) Before this update, the drop filter could not handle non-string values (e.g., .responseStatus.code: 403 ). With this update, the drop filter now works properly with these values. ( LOG-5815 ) Before this update, the collector used the default settings to collect audit logs, without handling the backload from output receivers. With this update, the process for collecting audit logs has been improved to better manage file handling and log reading efficiency. ( LOG-5866 ) Before this update, the must-gather tool failed on clusters with non-AMD64 architectures such as Azure Resource Manager (ARM) or PowerPC. With this update, the tool now detects the cluster architecture at runtime and uses architecture-independent paths and dependencies. The detection allows must-gather to run smoothly on platforms like ARM and PowerPC. ( LOG-5997 ) Before this update, the log level was set using a mix of structured and unstructured keywords that were unclear. With this update, the log level follows a clear, documented order, starting with structured keywords. ( LOG-6016 ) Before this update, multiple unnamed pipelines writing to the default output in the ClusterLogForwarder caused a validation error due to duplicate auto-generated names. With this update, the pipeline names are now generated without duplicates. ( LOG-6033 ) Before this update, the collector pods did not have the PreferredScheduling annotation. With this update, the PreferredScheduling annotation is added to the collector daemonset. ( LOG-6023 ) 1.1.7.2. CVEs CVE-2024-0286 CVE-2024-2398 CVE-2024-37370 CVE-2024-37371 1.1.8. Logging 5.9.5 This release includes OpenShift Logging Bug Fix Release 5.9.5 1.1.8.1. Bug Fixes Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. ( LOG-5855 ) Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. ( LOG-5895 ) Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5945 ) 1.1.8.2. CVEs None. 1.1.9. Logging 5.9.4 This release includes OpenShift Logging Bug Fix Release 5.9.4 1.1.9.1. Bug Fixes Before this update, an incorrectly formatted timeout configuration caused the OCP plugin to crash. 
With this update, a validation prevents the crash and informs the user about the incorrect configuration. ( LOG-5373 ) Before this update, workloads with labels containing - caused an error in the collector when normalizing log entries. With this update, the configuration change ensures the collector uses the correct syntax. ( LOG-5524 ) Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. ( LOG-5697 ) Before this update, the Loki Operator would crash if the CredentialRequest specification was registered in an environment without the cloud-credentials-operator . With this update, the CredentialRequest specification only registers in environments that are cloud-credentials-operator enabled. ( LOG-5701 ) Before this update, the Logging Operator watched and processed all config maps across the cluster. With this update, the dashboard controller only watches the config map for the logging dashboard. ( LOG-5702 ) Before this update, the ClusterLogForwarder introduced an extra space in the message payload which did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. ( LOG-5707 ) Before this update, removing the seeding for grafana-dashboard-cluster-logging as a part of ( LOG-5308 ) broke new greenfield deployments without dashboards. With this update, the Logging Operator seeds the dashboard at the beginning and continues to update it for changes. ( LOG-5747 ) Before this update, LokiStack was missing a route for the Volume API causing the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5749 ) 1.1.9.2. CVEs CVE-2024-24790 1.1.10. Logging 5.9.3 This release includes OpenShift Logging Bug Fix Release 5.9.3 1.1.10.1. Bug Fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5614 ) Before this update, monitoring the Vector collector output buffer state was not possible. With this update, monitoring and alerting the Vector collector output buffer size is possible that improves observability capabilities and helps keep the system running optimally. ( LOG-5586 ) 1.1.10.2. CVEs CVE-2024-2961 CVE-2024-28182 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.1.11. Logging 5.9.2 This release includes OpenShift Logging Bug Fix Release 5.9.2 1.1.11.1. Bug Fixes Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-4910 ) Before this update, the rotated infrastructure log files were sent to the application index in some scenarios due to an incorrect configuration in the Vector log collector. With this update, the Vector log collector configuration avoids collecting any rotated infrastructure log files. ( LOG-5156 ) Before this update, the Logging Operator did not monitor changes to the grafana-dashboard-cluster-logging config map. 
With this update, the Logging Operator monitors changes in the ConfigMap objects, ensuring the system stays synchronized and responds effectively to config map modifications. ( LOG-5308 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5426 ) Before this change, the Fluentd out_http plugin ignored the no_proxy environment variable. With this update, the Fluentd patches the HTTP#start method of ruby to honor the no_proxy environment variable. ( LOG-5466 ) 1.1.11.2. CVEs CVE-2022-48554 CVE-2023-2975 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-6129 CVE-2023-6237 CVE-2023-7008 CVE-2023-45288 CVE-2024-0727 CVE-2024-22365 CVE-2024-25062 CVE-2024-28834 CVE-2024-28835 1.1.12. Logging 5.9.1 This release includes OpenShift Logging Bug Fix Release 5.9.1 1.1.12.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5401 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5395 ) 1.1.12.2. Bug Fixes Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged. ( LOG-5268 ) Before this update, a prune filter without a defined pruneFilterSpec would cause a segfault. With this update, there is a validation error if a prune filter does not define a pruneFilterSpec . ( LOG-5322 ) Before this update, a drop filter without a defined dropTestsSpec would cause a segfault. With this update, there is a validation error if a drop filter does not define a dropTestsSpec (see the filter sketch after this list of fixes). ( LOG-5323 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5397 ) Before this update, poorly formatted timestamp fields in audit log records led to WARN messages in Red Hat OpenShift Logging Operator logs. With this update, a remap transformation ensures that the timestamp field is properly formatted. ( LOG-4672 ) Before this update, the error message thrown while validating a ClusterLogForwarder resource name and namespace did not correspond to the correct error. With this update, the system checks if a ClusterLogForwarder resource with the same name exists in the same namespace. If not, it corresponds to the correct error. ( LOG-5062 ) Before this update, the validation feature for output config required a TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs is improved, and the error message is more informative. ( LOG-5307 ) Before this update, defining an infrastructure input type did not exclude logging workloads from the collection. With this update, the collection excludes logging services to avoid feedback loops. ( LOG-5309 )
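To make the pruneFilterSpec and dropTestsSpec validation discussed above concrete, the following sketch shows a ClusterLogForwarder with one drop filter and one prune filter, each with its spec defined. This is an illustration under assumptions: the logging.openshift.io/v1 API, hypothetical filter and pipeline names, and field spellings as documented for the 5.9 filtering feature; verify the exact schema against the CRD installed on your cluster.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  filters:
  - name: drop-debug-logs            # drop filter: records matching every test condition are discarded
    type: drop
    drop:
    - test:
      - field: .level
        matches: "debug"
  - name: prune-annotations          # prune filter: listed fields are removed before forwarding
    type: prune
    prune:
      in:
      - .kubernetes.annotations
  pipelines:
  - name: application-logs
    inputRefs:
    - application
    filterRefs:
    - drop-debug-logs
    - prune-annotations
    outputRefs:
    - default
```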
1.1.12.3. CVEs No CVEs. 1.1.13. Logging 5.9.0 This release includes OpenShift Logging Bug Fix Release 5.9.0 1.1.13.1. Removal notice The Logging 5.9 release does not contain an updated version of the OpenShift Elasticsearch Operator. Instances of OpenShift Elasticsearch Operator from prior logging releases remain supported until the EOL of the logging release. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. For more information on the Logging lifecycle dates, see Platform Agnostic Operators . 1.1.13.2. Deprecation notice In Logging 5.9, Fluentd and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. In Logging 5.9, the Fields option for the Splunk output type was never implemented and is now deprecated. It will be removed in a future release. 1.1.13.3. Enhancements 1.1.13.3.1. Log Collection This enhancement adds the ability to refine the process of log collection by using a workload's metadata to drop or prune logs based on their content. Additionally, it allows the collection of infrastructure logs, such as journal or container logs, and audit logs, such as kube api or ovn logs, to only collect individual sources. ( LOG-2155 ) This enhancement introduces a new type of remote log receiver, the syslog receiver. You can configure it to expose a port over a network, allowing external systems to send syslog logs using compatible tools such as rsyslog. ( LOG-3527 ) With this update, the ClusterLogForwarder API now supports log forwarding to Azure Monitor Logs, giving users better monitoring abilities. This feature helps users to maintain optimal system performance and streamline the log analysis processes in Azure Monitor, which speeds up issue resolution and improves operational efficiency. ( LOG-4605 ) This enhancement improves collector resource utilization by deploying collectors as a deployment with two replicas. This occurs when the only input source defined in the ClusterLogForwarder custom resource (CR) is a receiver input instead of using a daemon set on all nodes. Additionally, collectors deployed in this manner do not mount the host file system. To use this enhancement, you need to annotate the ClusterLogForwarder CR with the logging.openshift.io/dev-preview-enable-collector-as-deployment annotation. ( LOG-4779 ) This enhancement introduces the capability for custom tenant configuration across all supported outputs, facilitating the organization of log records in a logical manner. However, it does not permit custom tenant configuration for logging managed storage. ( LOG-4843 ) With this update, the ClusterLogForwarder CR that specifies an application input with one or more infrastructure namespaces like default , openshift* , or kube* now requires a service account with the collect-infrastructure-logs role (a minimal RBAC sketch follows).
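As a minimal sketch of the service account permission mentioned above, the following RBAC binding grants the collect-infrastructure-logs cluster role to a collector service account. The binding name, service account name, and namespace are hypothetical placeholders; only the cluster role name comes from this release note.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: collect-infra-logs-example       # hypothetical binding name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: collect-infrastructure-logs      # cluster role named in the release note
subjects:
- kind: ServiceAccount
  name: logcollector-example             # hypothetical service account used by the forwarder
  namespace: openshift-logging
```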
( LOG-4943 ) This enhancement introduces the capability for tuning some output settings, such as compression, retry duration, and maximum payloads, to match the characteristics of the receiver. Additionally, this feature includes a delivery mode to allow administrators to choose between throughput and log durability. For example, the AtLeastOnce option configures minimal disk buffering of collected logs so that the collector can deliver those logs after a restart. ( LOG-5026 ) This enhancement adds three new Prometheus alerts, warning users about the deprecation of Elasticsearch, Fluentd, and Kibana. ( LOG-5055 ) 1.1.13.3.2. Log Storage This enhancement in LokiStack improves support for OTEL by using the new V13 object storage format and enabling automatic stream sharding by default. This also prepares the collector for future enhancements and configurations. ( LOG-4538 ) This enhancement introduces support for short-lived token workload identity federation with Azure and AWS log stores for STS enabled OpenShift Container Platform 4.14 and later clusters. Local storage requires the addition of a CredentialMode: static annotation under spec.storage.secret in the LokiStack CR. ( LOG-4540 ) With this update, the validation of the Azure storage secret is now extended to give early warning for certain error conditions. ( LOG-4571 ) With this update, Loki now adds upstream and downstream support for GCP workload identity federation mechanism. This allows authenticated and authorized access to the corresponding object storage services. ( LOG-4754 ) 1.1.13.4. Bug Fixes Before this update, the logging must-gather could not collect any logs on a FIPS-enabled cluster. With this update, a new oc client is available in cluster-logging-rhel9-operator , and must-gather works properly on FIPS clusters. ( LOG-4403 ) Before this update, the LokiStack ruler pods could not format the IPv6 pod IP in HTTP URLs used for cross-pod communication. This issue caused querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the problem. Now, querying rules and alerts through the Prometheus-compatible API works just like in IPv4 environments. ( LOG-4709 ) Before this fix, the YAML content from the logging must-gather was exported in a single line, making it unreadable. With this update, the YAML white spaces are preserved, ensuring that the file is properly formatted. ( LOG-4792 ) Before this update, when the ClusterLogForwarder CR was enabled, the Red Hat OpenShift Logging Operator could run into a nil pointer exception when ClusterLogging.Spec.Collection was nil. With this update, the issue is now resolved in the Red Hat OpenShift Logging Operator. ( LOG-5006 ) Before this update, in specific corner cases, replacing the ClusterLogForwarder CR status field caused the resourceVersion to constantly update due to changing timestamps in Status conditions. This condition led to an infinite reconciliation loop. With this update, all status conditions synchronize, so that timestamps remain unchanged if conditions stay the same. ( LOG-5007 ) Before this update, there was an internal buffering behavior to drop_newest to address high memory consumption by the collector resulting in significant log loss. With this update, the behavior reverts to using the collector defaults. 
( LOG-5123 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5165 ) Before this update, the configuration of the Loki Operator ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5212 ) 1.1.13.5. Known Issues None. 1.1.13.6. CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-46218 CVE-2024-0553 CVE-2023-0567 1.2. Logging 5.8 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.2.1. Logging 5.8.18 This release includes RHSA-2025:1983 and RHBA-2025:1984 . 1.2.1.1. CVEs CVE-2019-12900 CVE-2020-11023 CVE-2022-49043 CVE-2024-12797 CVE-2024-53104 CVE-2025-1244 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.2.2. Logging 5.8.17 This release includes OpenShift Logging Bug Fix Release 5.8.17 and OpenShift Logging Bug Fix Release 5.8.17 . 1.2.2.1. Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6582 ) 1.2.2.2. CVEs CVE-2019-12900 CVE-2024-9287 CVE-2024-11168 CVE-2024-12085 CVE-2024-46713 CVE-2024-50208 CVE-2024-50252 CVE-2024-53122 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.2.3. Logging 5.8.16 This release includes RHBA-2024:10989 and RHBA-2024:143685 . 1.2.3.1. Bug fixes Before this update, Loki automatically tried to guess the log level of log messages, which caused confusion because the collector already does this, and Loki and the collector would sometimes come to different results. With this update, the automatic log level discovery in Loki is disabled. LOG-6322 . 1.2.3.2. CVEs CVE-2019-12900 CVE-2021-3903 CVE-2023-38709 CVE-2024-2236 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-6232 CVE-2024-9287 CVE-2024-10041 CVE-2024-10963 CVE-2024-11168 CVE-2024-24795 CVE-2024-36387 CVE-2024-41009 CVE-2024-42244 CVE-2024-47175 CVE-2024-47875 CVE-2024-50226 CVE-2024-50602 1.2.4. Logging 5.8.15 This release includes RHBA-2024:10052 and RHBA-2024:10053 . 1.2.4.1. Bug fixes Before this update, Loki did not correctly load some configurations, which caused issues when using Alibaba Cloud or IBM Cloud object storage. This update fixes the configuration-loading code in Loki, resolving the issue. 
( LOG-6294 ) Before this update, upgrades to version 6.0 failed with errors if a Log File Metric Exporter instance was present. This update fixes the issue, enabling upgrades to proceed smoothly without errors. ( LOG-6328 ) 1.2.4.2. CVEs CVE-2021-47385 CVE-2023-28746 CVE-2023-48161 CVE-2023-52658 CVE-2024-6119 CVE-2024-6232 CVE-2024-21208 CVE-2024-21210 CVE-2024-21217 CVE-2024-21235 CVE-2024-27403 CVE-2024-35989 CVE-2024-36889 CVE-2024-36978 CVE-2024-38556 CVE-2024-39483 CVE-2024-39502 CVE-2024-40959 CVE-2024-42079 CVE-2024-42272 CVE-2024-42284 CVE-2024-3596 CVE-2024-5535 1.2.5. Logging 5.8.14 This release includes OpenShift Logging Bug Fix Release 5.8.14 and OpenShift Logging Bug Fix Release 5.8.14 . 1.2.5.1. Bug fixes Before this update, it was possible to set the .containerLimit.maxRecordsPerSecond parameter in the ClusterLogForwarder custom resource to 0 , which could lead to an exception during Vector's startup. With this update, the configuration is validated before being applied, and any invalid values (less than or equal to zero) are rejected. ( LOG-4671 ) Before this update, the Loki Operator did not automatically add the default namespace label to all its alerting rules, which caused Alertmanager instance for user-defined projects to skip routing such alerts. With this update, all alerting and recording rules have the namespace label and Alertmanager now routes these alerts correctly. ( LOG-6182 ) Before this update, the LokiStack ruler component view was not properly initialized, which caused the invalid field error when the ruler component was disabled. With this update, the issue is resolved by the component view being initialized with an empty value. ( LOG-6184 ) 1.2.5.2. CVEs CVE-2023-37920 CVE-2024-2398 CVE-2024-4032 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-6119 CVE-2024-24791 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-34397 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.2.6. Logging 5.8.13 This release includes OpenShift Logging Bug Fix Release 5.8.13 and OpenShift Logging Bug Fix Release 5.8.13 . 1.2.6.1. Bug fixes Before this update, the clusterlogforwarder.spec.outputs.http.timeout parameter was not applied to the Fluentd configuration when Fluentd was used as the collector type, causing HTTP timeouts to be misconfigured. With this update, the clusterlogforwarder.spec.outputs.http.timeout parameter is now correctly applied, ensuring that Fluentd honors the specified timeout and handles HTTP connections according to the user's configuration. ( LOG-5210 ) Before this update, the Elasticsearch Operator did not issue an alert to inform users about the upcoming removal, leaving existing installations unsupported without notice. With this update, the Elasticsearch Operator will trigger a continuous alert on OpenShift Container Platform version 4.16 and later, notifying users of its removal from the catalog in November 2025. ( LOG-5966 ) Before this update, the Red Hat OpenShift Logging Operator was unavailable on OpenShift Container Platform version 4.16 and later, preventing Telco customers from completing their certifications for the upcoming Logging 6.0 release. With this update, the Red Hat OpenShift Logging Operator is now available on OpenShift Container Platform versions 4.16 and 4.17, resolving the issue. 
( LOG-6103 ) Before this update, the Elasticsearch Operator was not available in the OpenShift Container Platform versions 4.17 and 4.18, preventing the installation of ServiceMesh, Kiali, and Distributed Tracing. With this update, the Elasticsearch Operator properties have been expanded for OpenShift Container Platform versions 4.17 and 4.18, resolving the issue and allowing ServiceMesh, Kiali, and Distributed Tracing operators to install their stacks. ( LOG-6134 ) 1.2.6.2. CVEs CVE-2023-52463 CVE-2023-52801 CVE-2024-6104 CVE-2024-6119 CVE-2024-26629 CVE-2024-26630 CVE-2024-26720 CVE-2024-26886 CVE-2024-26946 CVE-2024-34397 CVE-2024-35791 CVE-2024-35797 CVE-2024-35875 CVE-2024-36000 CVE-2024-36019 CVE-2024-36883 CVE-2024-36979 CVE-2024-38559 CVE-2024-38619 CVE-2024-39331 CVE-2024-40927 CVE-2024-40936 CVE-2024-41040 CVE-2024-41044 CVE-2024-41055 CVE-2024-41073 CVE-2024-41096 CVE-2024-42082 CVE-2024-42096 CVE-2024-42102 CVE-2024-42131 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 CVE-2024-2398 CVE-2024-4032 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.2.7. Logging 5.8.12 This release includes OpenShift Logging Bug Fix Release 5.8.12 and OpenShift Logging Bug Fix Release 5.8.12 . 1.2.7.1. Bug fixes Before this update, the collector used internal buffering with the drop_newest setting to reduce high memory usage, which caused significant log loss. With this update, the collector goes back to its default behavior, where sink<>.buffer is not customized. ( LOG-6026 ) 1.2.7.2. CVEs CVE-2023-52771 CVE-2023-52880 CVE-2024-2398 CVE-2024-6345 CVE-2024-6923 CVE-2024-26581 CVE-2024-26668 CVE-2024-26810 CVE-2024-26855 CVE-2024-26908 CVE-2024-26925 CVE-2024-27016 CVE-2024-27019 CVE-2024-27020 CVE-2024-27415 CVE-2024-35839 CVE-2024-35896 CVE-2024-35897 CVE-2024-35898 CVE-2024-35962 CVE-2024-36003 CVE-2024-36025 CVE-2024-37370 CVE-2024-37371 CVE-2024-37891 CVE-2024-38428 CVE-2024-38476 CVE-2024-38538 CVE-2024-38540 CVE-2024-38544 CVE-2024-38579 CVE-2024-38608 CVE-2024-39476 CVE-2024-40905 CVE-2024-40911 CVE-2024-40912 CVE-2024-40914 CVE-2024-40929 CVE-2024-40939 CVE-2024-40941 CVE-2024-40957 CVE-2024-40978 CVE-2024-40983 CVE-2024-41041 CVE-2024-41076 CVE-2024-41090 CVE-2024-41091 CVE-2024-42110 CVE-2024-42152 1.2.8. Logging 5.8.11 This release includes OpenShift Logging Bug Fix Release 5.8.11 and OpenShift Logging Bug Fix Release 5.8.11 . 1.2.8.1. Bug fixes Before this update, the TLS section was added without verifying the broker URL schema, leading to SSL connection errors if the URLs did not start with tls . With this update, the TLS section is added only if broker URLs start with tls , preventing SSL connection errors. ( LOG-5139 ) Before this update, the Loki Operator did not trigger alerts when it dropped log events due to validation failures. With this update, the Loki Operator includes a new alert definition that triggers an alert if Loki drops log events due to validation failures. ( LOG-5896 ) Before this update, the 4.16 GA catalog did not include Elasticsearch Operator 5.8, preventing the installation of products like Service Mesh, Kiali, and Tracing. With this update, Elasticsearch Operator 5.8 is now available on 4.16, resolving the issue and providing support for Elasticsearch storage for these products only. 
( LOG-5911 ) Before this update, duplicate conditions in the LokiStack resource status led to invalid metrics from the Loki Operator. With this update, the Operator removes duplicate conditions from the status. ( LOG-5857 ) Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5946 ) 1.2.8.2. CVEs CVE-2021-47548 CVE-2021-47596 CVE-2022-48627 CVE-2023-52638 CVE-2024-4032 CVE-2024-6409 CVE-2024-21131 CVE-2024-21138 CVE-2024-21140 CVE-2024-21144 CVE-2024-21145 CVE-2024-21147 CVE-2024-24806 CVE-2024-26783 CVE-2024-26858 CVE-2024-27397 CVE-2024-27435 CVE-2024-35235 CVE-2024-35958 CVE-2024-36270 CVE-2024-36886 CVE-2024-36904 CVE-2024-36957 CVE-2024-38473 CVE-2024-38474 CVE-2024-38475 CVE-2024-38477 CVE-2024-38543 CVE-2024-38586 CVE-2024-38593 CVE-2024-38663 CVE-2024-39573 1.2.9. Logging 5.8.10 This release includes OpenShift Logging Bug Fix Release 5.8.10 and OpenShift Logging Bug Fix Release 5.8.10 . 1.2.9.1. Known issues Before this update, when enabling retention, the Loki Operator produced an invalid configuration. As a result, Loki did not start properly. With this update, Loki pods can set retention. ( LOG-5821 ) 1.2.9.2. Bug fixes Before this update, the ClusterLogForwarder introduced an extra space in the message payload that did not follow the RFC3164 specification. With this update, the extra space has been removed, fixing the issue. ( LOG-5647 ) 1.2.9.3. CVEs CVE-2023-6597 CVE-2024-0450 CVE-2024-3651 CVE-2024-6387 CVE-2024-26735 CVE-2024-26993 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 1.2.10. Logging 5.8.9 This release includes OpenShift Logging Bug Fix Release 5.8.9 and OpenShift Logging Bug Fix Release 5.8.9 . 1.2.10.1. Bug fixes Before this update, an issue prevented selecting pods that no longer existed, even if they had generated logs. With this update, this issue has been fixed, allowing selection of such pods. ( LOG-5698 ) Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5750 ) Before this update, the Elasticsearch operator overwrote all service account annotations without considering ownership. As a result, the kube-controller-manager recreated service account secrets because it logged the link to the owning service account. With this update, the Elasticsearch operator merges annotations, resolving the issue. ( LOG-5776 ) 1.2.10.2. CVEs CVE-2023-6597 CVE-2024-0450 CVE-2024-3651 CVE-2024-6387 CVE-2024-24790 CVE-2024-26735 CVE-2024-26993 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 1.2.11. Logging 5.8.8 This release includes OpenShift Logging Bug Fix Release 5.8.8 and OpenShift Logging Bug Fix Release 5.8.8 . 1.2.11.1. Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5615 ) 1.2.11.2. 
CVEs CVE-2020-15778 CVE-2021-43618 CVE-2023-6004 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2024-0450 CVE-2024-2961 CVE-2024-22365 CVE-2024-25062 CVE-2024-26458 CVE-2024-26461 CVE-2024-26642 CVE-2024-26643 CVE-2024-26673 CVE-2024-26804 CVE-2024-28182 CVE-2024-32487 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.2.12. Logging 5.8.7 This release includes OpenShift Logging Bug Fix Release 5.8.7 Security Update and OpenShift Logging Bug Fix Release 5.8.7 . 1.2.12.1. Bug fixes Before this update, the elasticsearch-im-<type>-* pods failed if no <type> logs (audit, infrastructure, or application) were collected. With this update, the pods no longer fail when <type> logs are not collected. ( LOG-4949 ) Before this update, the validation feature for output config required an SSL/TLS URL, even for services such as Amazon CloudWatch or Google Cloud Logging where a URL is not needed by design. With this update, the validation logic for services without URLs are improved, and the error message is more informative. ( LOG-5467 ) Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5471 ) Before this update, changes to the Logging Operator caused an error due to an incorrect configuration in the ClusterLogForwarder CR. As a result, upgrades to logging deleted the daemonset collector. With this update, the Logging Operator re-creates collector daemonsets except when a Not authorized to collect error occurs. ( LOG-5514 ) 1.2.12.2. CVEs CVE-2020-26555 CVE-2021-29390 CVE-2022-0480 CVE-2022-38096 CVE-2022-40090 CVE-2022-45934 CVE-2022-48554 CVE-2022-48624 CVE-2023-2975 CVE-2023-3446 CVE-2023-3567 CVE-2023-3618 CVE-2023-3817 CVE-2023-4133 CVE-2023-5678 CVE-2023-6040 CVE-2023-6121 CVE-2023-6129 CVE-2023-6176 CVE-2023-6228 CVE-2023-6237 CVE-2023-6531 CVE-2023-6546 CVE-2023-6622 CVE-2023-6915 CVE-2023-6931 CVE-2023-6932 CVE-2023-7008 CVE-2023-24023 CVE-2023-25193 CVE-2023-25775 CVE-2023-28464 CVE-2023-28866 CVE-2023-31083 CVE-2023-31122 CVE-2023-37453 CVE-2023-38469 CVE-2023-38470 CVE-2023-38471 CVE-2023-38472 CVE-2023-38473 CVE-2023-39189 CVE-2023-39193 CVE-2023-39194 CVE-2023-39198 CVE-2023-40745 CVE-2023-41175 CVE-2023-42754 CVE-2023-42756 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-45288 CVE-2023-45863 CVE-2023-46862 CVE-2023-47038 CVE-2023-51043 CVE-2023-51779 CVE-2023-51780 CVE-2023-52434 CVE-2023-52448 CVE-2023-52476 CVE-2023-52489 CVE-2023-52522 CVE-2023-52529 CVE-2023-52574 CVE-2023-52578 CVE-2023-52580 CVE-2023-52581 CVE-2023-52597 CVE-2023-52610 CVE-2023-52620 CVE-2024-0565 CVE-2024-0727 CVE-2024-0841 CVE-2024-1085 CVE-2024-1086 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-22365 CVE-2024-25062 CVE-2024-26582 CVE-2024-26583 CVE-2024-26584 CVE-2024-26585 CVE-2024-26586 CVE-2024-26593 CVE-2024-26602 CVE-2024-26609 CVE-2024-26633 CVE-2024-27316 CVE-2024-28834 CVE-2024-28835 1.2.13. Logging 5.8.6 This release includes OpenShift Logging Bug Fix Release 5.8.6 Security Update and OpenShift Logging Bug Fix Release 5.8.6 . 1.2.13.1. Enhancements Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. 
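( LOG-5392 )

To illustrate the S3 endpoint validation described in the preceding item, the following is a minimal sketch of the object storage secret that a LokiStack references. The secret name, bucket, region, credentials, and endpoint are hypothetical placeholders, and the key names follow the commonly documented S3 secret layout for the Loki Operator rather than anything stated in this release note.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: logging-loki-s3-example           # hypothetical secret name referenced by the LokiStack spec.storage.secret
  namespace: openshift-logging
stringData:
  access_key_id: <ACCESS_KEY_ID>           # placeholder credentials
  access_key_secret: <ACCESS_KEY_SECRET>
  bucketnames: loki-bucket-example
  region: us-east-1
  endpoint: https://s3.us-east-1.amazonaws.com   # must be a valid S3 URL; invalid values surface in the LokiStack status
```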
Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5402 ) 1.2.13.2. Bug fixes Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. ( LOG-5164 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5398 ) 1.2.13.3. CVEs CVE-2023-4244 CVE-2023-5363 CVE-2023-5717 CVE-2023-5981 CVE-2023-6356 CVE-2023-6535 CVE-2023-6536 CVE-2023-6606 CVE-2023-6610 CVE-2023-6817 CVE-2023-46218 CVE-2023-51042 CVE-2024-0193 CVE-2024-0553 CVE-2024-0567 CVE-2024-0646 1.2.14. Logging 5.8.5 This release includes OpenShift Logging Bug Fix Release 5.8.5 . 1.2.14.1. Bug fixes Before this update, the configuration of the Loki Operator's ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator's metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5250 ) Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5201 ) Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. ( LOG-5171 ) Before this update, running a query for log metrics caused an error in the histogram. With this update, the histogram toggle function and the chart are disabled and hidden because the histogram doesn't work with log metrics. ( LOG-5044 ) Before this update, the Loki and Elasticsearch bundle had the wrong maxOpenShiftVersion , resulting in IncompatibleOperatorsInstalled alerts. With this update, including 4.16 as the maxOpenShiftVersion property in the bundle fixes the issue. ( LOG-5272 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5274 ) Before this update, a bug in LogQL parsing left out some line filters from the query. With this update, the parsing now includes all the line filters while keeping the original query unchanged.
( LOG-5270 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5240 ) 1.2.14.2. CVEs CVE-2023-5363 CVE-2023-5981 CVE-2023-6135 CVE-2023-46218 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 CVE-2024-0567 CVE-2024-24786 CVE-2024-28849 1.2.15. Logging 5.8.4 This release includes OpenShift Logging Bug Fix Release 5.8.4 . 1.2.15.1. Bug fixes Before this update, the developer console's logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, all supported OCP versions ensure correct namespace inclusion. ( LOG-4905 ) Before this update, the Cluster Logging Operator deployed ClusterRoles supporting LokiStack deployments only when the default log output was LokiStack. With this update, the roles are split into two groups: read and write. The write roles deploys based on the setting of the default log storage, just like all the roles used to do before. The read roles deploys based on whether the logging console plugin is active. ( LOG-4987 ) Before this update, multiple ClusterLogForwarders defining the same input receiver name had their service endlessly reconciled because of changing ownerReferences on one service. With this update, each receiver input will have its own service named with the convention of <CLF.Name>-<input.Name> . ( LOG-5009 ) Before this update, the ClusterLogForwarder did not report errors when forwarding logs to cloudwatch without a secret. With this update, the following error message appears when forwarding logs to cloudwatch without a secret: secret must be provided for cloudwatch output . ( LOG-5021 ) Before this update, the log_forwarder_input_info included application , infrastructure , and audit input metric points. With this update, http is also added as a metric point. ( LOG-5043 ) 1.2.15.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-24963 CVE-2022-36402 CVE-2022-41858 CVE-2023-2166 CVE-2023-2176 CVE-2023-3777 CVE-2023-3812 CVE-2023-4015 CVE-2023-4622 CVE-2023-4623 CVE-2023-5178 CVE-2023-5363 CVE-2023-5388 CVE-2023-5633 CVE-2023-6679 CVE-2023-7104 CVE-2023-27043 CVE-2023-38409 CVE-2023-40283 CVE-2023-42753 CVE-2023-43804 CVE-2023-45803 CVE-2023-46813 CVE-2024-20918 CVE-2024-20919 CVE-2024-20921 CVE-2024-20926 CVE-2024-20945 CVE-2024-20952 1.2.16. Logging 5.8.3 This release includes Logging Bug Fix 5.8.3 and Logging Security Fix 5.8.3 1.2.16.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4969 ) Before this update, Loki outputs configured without a valid URL caused the collector pods to crash. With this update, outputs are subject to URL validation, resolving the issue. 
( LOG-4822 ) Before this update the Cluster Logging Operator would generate collector configuration fields for outputs that did not specify a secret to use the service account bearer token. With this update, an output does not require authentication, resolving the issue. ( LOG-4962 ) Before this update, the tls.insecureSkipVerify field of an output was not set to a value of true without a secret defined. With this update, a secret is no longer required to set this value. ( LOG-4963 ) Before this update, output configurations allowed the combination of an insecure (HTTP) URL with TLS authentication. With this update, outputs configured for TLS authentication require a secure (HTTPS) URL. ( LOG-4893 ) 1.2.16.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2023-7104 CVE-2023-27043 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 1.2.17. Logging 5.8.2 This release includes OpenShift Logging Bug Fix Release 5.8.2 . 1.2.17.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4890 ) Before this update, the developer console logs did not account for the current namespace, resulting in query rejection for users without cluster-wide log access. With this update, namespace inclusion has been corrected, resolving the issue. ( LOG-4947 ) Before this update, the logging view plugin of the OpenShift Container Platform web console did not allow for custom node placement and tolerations. With this update, defining custom node placements and tolerations has been added to the logging view plugin of the OpenShift Container Platform web console. ( LOG-4912 ) 1.2.17.2. CVEs CVE-2022-44638 CVE-2023-1192 CVE-2023-5345 CVE-2023-20569 CVE-2023-26159 CVE-2023-39615 CVE-2023-45871 1.2.18. Logging 5.8.1 This release includes OpenShift Logging Bug Fix Release 5.8.1 and OpenShift Logging Bug Fix Release 5.8.1 Kibana . 1.2.18.1. Enhancements 1.2.18.1.1. Log Collection With this update, while configuring Vector as a collector, you can add logic to the Red Hat OpenShift Logging Operator to use a token specified in the secret in place of the token associated with the service account. ( LOG-4780 ) With this update, the BoltDB Shipper Loki dashboards are now renamed to Index dashboards. ( LOG-4828 ) 1.2.18.2. Bug fixes Before this update, the ClusterLogForwarder created empty indices after enabling the parsing of JSON logs, even when the rollover conditions were not met. With this update, the ClusterLogForwarder skips the rollover when the write-index is empty. ( LOG-4452 ) Before this update, the Vector set the default log level incorrectly. With this update, the correct log level is set by improving the enhancement of regular expression, or regexp , for log level detection. ( LOG-4480 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. 
( LOG-4683 ) Before this update, Fluentd collector pods were in a CrashLoopBackOff state due to binding of the Prometheus server on IPv6 clusters. With this update, the collectors work properly on IPv6 clusters. ( LOG-4706 ) Before this update, the Red Hat OpenShift Logging Operator would undergo numerous reconciliations whenever there was a change in the ClusterLogForwarder . With this update, the Red Hat OpenShift Logging Operator disregards the status changes in the collector daemonsets that triggered the reconciliations. ( LOG-4741 ) Before this update, the Vector log collector pods were stuck in the CrashLoopBackOff state on IBM Power machines. With this update, the Vector log collector pods start successfully on IBM Power architecture machines. ( LOG-4768 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Fluentd collector pods. With this update, the log collector service account is used by default for authentication, using the associated token and ca.crt . ( LOG-4791 ) Before this update, forwarding with a legacy forwarder to an internal LokiStack would produce SSL certificate errors using Vector collector pods. With this update, the log collector service account is used by default for authentication and also using the associated token and ca.crt . ( LOG-4852 ) Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4811 ) Before this update, it was necessary to create a ClusterRoleBinding to collect audit permissions for HTTP receiver inputs. With this update, it is not necessary to create the ClusterRoleBinding because the endpoint already depends upon the cluster certificate authority. ( LOG-4815 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4836 ) Before this update, while removing the inputs.receiver section in the ClusterLogForwarder , the HTTP input services and its associated secrets were not deleted. With this update, the HTTP input resources are deleted when not needed. ( LOG-4612 ) Before this update, the ClusterLogForwarder indicated validation errors in the status, but the outputs and the pipeline status did not accurately reflect the specific issues. With this update, the pipeline status displays the validation failure reasons correctly in case of misconfigured outputs, inputs, or filters. ( LOG-4821 ) Before this update, changing a LogQL query that used controls such as time range or severity changed the label matcher operator defining it like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4841 ) 1.2.18.3. 
CVEs CVE-2007-4559 CVE-2021-3468 CVE-2021-3502 CVE-2021-3826 CVE-2021-43618 CVE-2022-3523 CVE-2022-3565 CVE-2022-3594 CVE-2022-4285 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1076 CVE-2023-1079 CVE-2023-1206 CVE-2023-1249 CVE-2023-1252 CVE-2023-1652 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-2731 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3316 CVE-2023-3358 CVE-2023-3576 CVE-2023-3609 CVE-2023-3772 CVE-2023-3773 CVE-2023-4016 CVE-2023-4128 CVE-2023-4155 CVE-2023-4194 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4273 CVE-2023-4641 CVE-2023-22745 CVE-2023-26545 CVE-2023-26965 CVE-2023-26966 CVE-2023-27522 CVE-2023-29491 CVE-2023-29499 CVE-2023-30456 CVE-2023-31486 CVE-2023-32324 CVE-2023-32573 CVE-2023-32611 CVE-2023-32665 CVE-2023-33203 CVE-2023-33285 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-34410 CVE-2023-35825 CVE-2023-36054 CVE-2023-37369 CVE-2023-38197 CVE-2023-38545 CVE-2023-38546 CVE-2023-39191 CVE-2023-39975 CVE-2023-44487 1.2.19. Logging 5.8.0 This release includes OpenShift Logging Bug Fix Release 5.8.0 and OpenShift Logging Bug Fix Release 5.8.0 Kibana . 1.2.19.1. Deprecation notice In Logging 5.8, Elasticsearch, Fluentd, and Kibana are deprecated and are planned to be removed in Logging 6.0, which is expected to be shipped alongside a future release of OpenShift Container Platform. Red Hat will provide critical and above CVE bug fixes and support for these components during the current release lifecycle, but these components will no longer receive feature enhancements. The Vector-based collector provided by the Red Hat OpenShift Logging Operator and LokiStack provided by the Loki Operator are the preferred Operators for log collection and storage. We encourage all users to adopt the Vector and Loki log stack, as this will be the stack that will be enhanced going forward. 1.2.19.2. Enhancements 1.2.19.2.1. Log Collection With this update, the LogFileMetricExporter is no longer deployed with the collector by default. You must manually create a LogFileMetricExporter custom resource (CR) to generate metrics from the logs produced by running containers. If you do not create the LogFileMetricExporter CR, you may see a No datapoints found message in the OpenShift Container Platform web console dashboard for Produced Logs . ( LOG-3819 ) With this update, you can deploy multiple, isolated, and RBAC-protected ClusterLogForwarder custom resource (CR) instances in any namespace. This allows independent groups to forward desired logs to any destination while isolating their configuration from other collector deployments. ( LOG-1343 ) Important In order to support multi-cluster log forwarding in additional namespaces other than the openshift-logging namespace, you must update the Red Hat OpenShift Logging Operator to watch all namespaces. This functionality is supported by default in new Red Hat OpenShift Logging Operator version 5.8 installations. With this update, you can use the flow control or rate limiting mechanism to limit the volume of log data that can be collected or forwarded by dropping excess log records. The input limits prevent poorly-performing containers from overloading the Logging and the output limits put a ceiling on the rate of logs shipped to a given data store. 
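( LOG-884 )

The following sketch illustrates the input-side flow control described in the item above by limiting the per-container record rate for an application input. It is illustrative only: the logging.openshift.io/v1 API is assumed, the input and pipeline names and the limit value are hypothetical, and the exact placement of the containerLimit field should be checked against the flow control documentation for your release.

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  inputs:
  - name: rate-limited-apps              # hypothetical input name
    application:
      containerLimit:
        maxRecordsPerSecond: 100         # per-container ceiling; excess records are dropped
  pipelines:
  - name: limited-application-logs
    inputRefs:
    - rate-limited-apps
    outputRefs:
    - default
```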
With this update, you can configure the log collector to look for HTTP connections and receive logs as an HTTP server, also known as a webhook. ( LOG-4562 ) With this update, you can configure audit policies to control which Kubernetes and OpenShift API server events are forwarded by the log collector. ( LOG-3982 ) 1.2.19.2.2. Log Storage With this update, LokiStack administrators can have more fine-grained control over who can access which logs by granting access to logs on a namespace basis. ( LOG-3841 ) With this update, the Loki Operator introduces PodDisruptionBudget configuration on LokiStack deployments to ensure normal operations during OpenShift Container Platform cluster restarts by keeping ingestion and the query path available. ( LOG-3839 ) With this update, the reliability of existing LokiStack installations is seamlessly improved by applying a set of default Affinity and Anti-Affinity policies. ( LOG-3840 ) With this update, you can manage zone-aware data replication as an administrator in LokiStack, in order to enhance reliability in the event of a zone failure. ( LOG-3266 ) With this update, a new supported small-scale LokiStack size of 1x.extra-small is introduced for OpenShift Container Platform clusters hosting a few workloads and smaller ingestion volumes (up to 100GB/day). ( LOG-4329 ) With this update, the LokiStack administrator has access to an official Loki dashboard to inspect the storage performance and the health of each component. ( LOG-4327 ) 1.2.19.2.3. Log Console With this update, you can enable the Logging Console Plugin when Elasticsearch is the default Log Store. ( LOG-3856 ) With this update, OpenShift Container Platform application owners can receive notifications for application log-based alerts on the OpenShift Container Platform web console Developer perspective for OpenShift Container Platform version 4.14 and later. ( LOG-3548 ) 1.2.19.3. Known Issues Currently, Splunk log forwarding might not work after upgrading to version 5.8 of the Red Hat OpenShift Logging Operator. This issue is caused by transitioning from OpenSSL version 1.1.1 to version 3.0.7. In the newer OpenSSL version, there is a default behavior change, where connections to TLS 1.2 endpoints are rejected if they do not expose the RFC 5746 extension. As a workaround, enable TLS 1.3 support on the TLS terminating load balancer in front of the Splunk HEC (HTTP Event Collector) endpoint. Splunk is a third-party system and this should be configured from the Splunk end. Currently, there is a flaw in handling multiplexed streams in the HTTP/2 protocol, where you can repeatedly make a request for a new multiplex stream and immediately send an RST_STREAM frame to cancel it. This creates extra work for the server to set up and tear down the streams, resulting in a denial of service due to server resource consumption. There is currently no workaround for this issue. ( LOG-4609 ) Currently, when using FluentD as the collector, the collector pod cannot start on an OpenShift Container Platform IPv6-enabled cluster. The pod logs produce the fluentd pod [error]: unexpected error error_class=SocketError error="getaddrinfo: Name or service not known" error. There is currently no workaround for this issue. ( LOG-4706 ) Currently, the log alert is not available on an IPv6-enabled cluster. There is currently no workaround for this issue.
( LOG-4709 ) Currently, must-gather cannot gather any logs on a FIPS-enabled cluster, because the required OpenSSL library is not available in the cluster-logging-rhel9-operator . There is currently no workaround for this issue. ( LOG-4403 ) Currently, when deploying the logging version 5.8 on a FIPS-enabled cluster, the collector pods cannot start and are stuck in CrashLoopBackOff status, while using FluentD as a collector. There is currently no workaround for this issue. ( LOG-3933 ) 1.2.19.4. CVEs CVE-2023-40217 1.3. Logging 5.7 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.3.1. Logging 5.7.15 This release includes OpenShift Logging Bug Fix 5.7.15 . 1.3.1.1. Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5616 ) 1.3.1.2. CVEs CVE-2019-25162 CVE-2020-15778 CVE-2020-36777 CVE-2021-43618 CVE-2021-46934 CVE-2021-47013 CVE-2021-47055 CVE-2021-47118 CVE-2021-47153 CVE-2021-47171 CVE-2021-47185 CVE-2022-4645 CVE-2022-48627 CVE-2022-48669 CVE-2023-6004 CVE-2023-6240 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-52439 CVE-2023-52445 CVE-2023-52477 CVE-2023-52513 CVE-2023-52520 CVE-2023-52528 CVE-2023-52565 CVE-2023-52578 CVE-2023-52594 CVE-2023-52595 CVE-2023-52598 CVE-2023-52606 CVE-2023-52607 CVE-2023-52610 CVE-2024-0340 CVE-2024-0450 CVE-2024-22365 CVE-2024-23307 CVE-2024-25062 CVE-2024-25744 CVE-2024-26458 CVE-2024-26461 CVE-2024-26593 CVE-2024-26603 CVE-2024-26610 CVE-2024-26615 CVE-2024-26642 CVE-2024-26643 CVE-2024-26659 CVE-2024-26664 CVE-2024-26693 CVE-2024-26694 CVE-2024-26743 CVE-2024-26744 CVE-2024-26779 CVE-2024-26872 CVE-2024-26892 CVE-2024-26987 CVE-2024-26901 CVE-2024-26919 CVE-2024-26933 CVE-2024-26934 CVE-2024-26964 CVE-2024-26973 CVE-2024-26993 CVE-2024-27014 CVE-2024-27048 CVE-2024-27052 CVE-2024-27056 CVE-2024-27059 CVE-2024-28834 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.3.2. Logging 5.7.14 This release includes OpenShift Logging Bug Fix 5.7.14 . 1.3.2.1. Bug fixes Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5472 ) 1.3.2.2. CVEs CVE-2023-45288 CVE-2023-52425 CVE-2024-2961 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-28834 1.3.3. Logging 5.7.13 This release includes OpenShift Logging Bug Fix 5.7.13 . 1.3.3.1. Enhancements Before this update, the Loki Operator configured Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. 
( LOG-5403 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5393 ) 1.3.3.2. Bug fixes Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. ( LOG-5243 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5399 ) 1.3.3.3. CVEs CVE-2021-33631 CVE-2021-43618 CVE-2022-38096 CVE-2022-48624 CVE-2023-6546 CVE-2023-6931 CVE-2023-28322 CVE-2023-38546 CVE-2023-46218 CVE-2023-51042 CVE-2024-0565 CVE-2024-1086 1.3.4. Logging 5.7.12 This release includes OpenShift Logging Bug Fix 5.7.12 . 1.3.4.1. Bug fixes Before this update, the Loki Operator checked if the pods were running to decide if the LokiStack was ready. With this update, it also checks if the pods are ready, so that the readiness of the LokiStack reflects the state of its components. ( LOG-5172 ) Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5202 ) Before this update, the configuration of the Loki Operator's ServiceMonitor could match many Kubernetes services, resulting in the Loki Operator's metrics being collected multiple times. With this update, the configuration of ServiceMonitor now only matches the dedicated metrics service. ( LOG-5251 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5275 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5241 ) 1.3.4.2.
CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2022-3545 CVE-2022-41858 CVE-2023-1073 CVE-2023-1838 CVE-2023-2166 CVE-2023-2176 CVE-2023-4623 CVE-2023-4921 CVE-2023-5717 CVE-2023-6135 CVE-2023-6356 CVE-2023-6535 CVE-2023-6536 CVE-2023-6606 CVE-2023-6610 CVE-2023-6817 CVE-2023-7104 CVE-2023-27043 CVE-2023-40283 CVE-2023-45871 CVE-2023-46813 CVE-2023-48795 CVE-2023-51385 CVE-2024-0553 CVE-2024-0646 CVE-2024-24786 1.3.5. Logging 5.7.11 This release includes Logging Bug Fix 5.7.11 . 1.3.5.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority, the Loki Operator would not automatically update the configuration when the name of the ConfigMap object or the contents changed. With this update, the Loki Operator now watches for changes to the ConfigMap object and automatically updates the generated configuration. ( LOG-4968 ) 1.3.5.2. CVEs CVE-2023-39326 1.3.6. Logging 5.7.10 This release includes OpenShift Logging Bug Fix Release 5.7.10 . 1.3.6.1. Bug fix Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4891 ) 1.3.6.2. CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4285 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3446 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-3817 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-5678 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 CVE-2023-38037 CVE-2024-0443 1.3.7. Logging 5.7.9 This release includes OpenShift Logging Bug Fix Release 5.7.9 . 1.3.7.1. Bug fixes Before this fix, IPv6 addresses would not be parsed correctly after evaluating a host or multiple hosts for placeholders. With this update, IPv6 addresses are correctly parsed. ( LOG-4281 ) Before this update, the Vector failed to start on IPv4-only nodes. As a result, it failed to create a listener for its metrics endpoint with the following error: Failed to start Prometheus exporter: TCP bind failed: Address family not supported by protocol (os error 97) . With this update, the Vector operates normally on IPv4-only nodes. ( LOG-4589 ) Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4806 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. 
As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4837 ) Before this update, changing a LogQL query using controls such as time range or severity changed the label matcher operator as though it was defined like a regular expression. With this update, regular expression operators remain unchanged when updating the query. ( LOG-4842 ) Before this update, the Vector collector deployments relied upon the default retry and buffering behavior. As a result, the delivery pipeline backed up trying to deliver every message when the availability of an output was unstable. With this update, the Vector collector deployments limit the number of message retries and drop messages after the threshold has been exceeded. ( LOG-4536 ) 1.3.7.2. CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-32324 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 1.3.8. Logging 5.7.8 This release includes OpenShift Logging Bug Fix Release 5.7.8 . 1.3.8.1. Bug fixes Before this update, there was a potential conflict when the same name was used for the outputRefs and inputRefs parameters in the ClusterLogForwarder custom resource (CR). As a result, the collector pods entered in a CrashLoopBackOff status. With this update, the output labels contain the OUTPUT_ prefix to ensure a distinction between output labels and pipeline names. ( LOG-4383 ) Before this update, while configuring the JSON log parser, if you did not set the structuredTypeKey or structuredTypeName parameters for the Cluster Logging Operator, no alert would display about an invalid configuration. With this update, the Cluster Logging Operator informs you about the configuration issue. ( LOG-4441 ) Before this update, if the hecToken key was missing or incorrect in the secret specified for a Splunk output, the validation failed because the Vector forwarded logs to Splunk without a token. With this update, if the hecToken key is missing or incorrect, the validation fails with the A non-empty hecToken entry is required error message. ( LOG-4580 ) Before this update, selecting a date from the Custom time range for logs caused an error in the web console. With this update, you can select a date from the time range model in the web console successfully. ( LOG-4684 ) 1.3.8.2. CVEs CVE-2023-40217 CVE-2023-44487 1.3.9. Logging 5.7.7 This release includes OpenShift Logging Bug Fix Release 5.7.7 . 1.3.9.1. Bug fixes Before this update, FluentD normalized the logs emitted by the EventRouter differently from Vector. 
With this update, the Vector collector produces log records in a consistent format. ( LOG-4178 ) Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator because it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage . ( LOG-4555 ) Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6 value to true , which resolves the issue. ( LOG-4569 ) Before this update, the log collector relied on the default configuration settings for reading the container log lines. As a result, the log collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the log collector to efficiently process rotated files. ( LOG-4575 ) Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is a reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4686 ) 1.3.9.2. CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.3.10. Logging 5.7.6 This release includes OpenShift Logging Bug Fix Release 5.7.6 . 1.3.10.1. Bug fixes Before this update, the collector relied on the default configuration settings for reading the container log lines. As a result, the collector did not read the rotated files efficiently. With this update, there is an increase in the number of bytes read, which allows the collector to efficiently process rotated files. ( LOG-4501 ) Before this update, when users pasted a URL with predefined filters, some filters were not reflected in the UI. With this update, the UI reflects all the filters in the URL. ( LOG-4459 ) Before this update, forwarding to Loki using custom labels generated an error when switching from Fluentd to Vector. With this update, the Vector configuration sanitizes labels in the same way as Fluentd to ensure the collector starts and correctly processes labels. ( LOG-4460 ) Before this update, the Observability Logs console search field did not accept special characters that it should escape. With this update, special characters are properly escaped in the query. ( LOG-4456 ) Before this update, the following warning message appeared while sending logs to Splunk: Timestamp was not found. With this update, the change overrides the name of the log field used to retrieve the Timestamp and sends it to Splunk without the warning. ( LOG-4413 ) Before this update, the CPU and memory usage of Vector was increasing over time. With this update, the Vector configuration now contains the expire_metrics_secs=60 setting to limit the lifetime of the metrics and cap the associated CPU usage and memory footprint. ( LOG-4171 ) Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. 
With this update, the LokiStack gateway caches on a more fine-grained basis, which resolves this issue. ( LOG-4393 ) Before this update, the Fluentd runtime image included builder tools, which were unnecessary at runtime. With this update, the builder tools are removed, resolving the issue. ( LOG-4467 ) 1.3.10.2. CVEs CVE-2023-3899 CVE-2023-4456 CVE-2023-32360 CVE-2023-34969 1.3.11. Logging 5.7.4 This release includes OpenShift Logging Bug Fix Release 5.7.4 . 1.3.11.1. Bug fixes Before this update, when forwarding logs to CloudWatch, a namespaceUUID value was not appended to the logGroupName field. With this update, the namespaceUUID value is included, so a logGroupName in CloudWatch appears as logGroupName: vectorcw.b443fb9e-bd4c-4b6a-b9d3-c0097f9ed286 . ( LOG-2701 ) Before this update, when forwarding logs over HTTP to an off-cluster destination, the Vector collector was unable to authenticate to the cluster-wide HTTP proxy even though correct credentials were provided in the proxy URL. With this update, the Vector log collector can now authenticate to the cluster-wide HTTP proxy. ( LOG-3381 ) Before this update, the Operator would fail if the Fluentd collector was configured with Splunk as an output, due to this configuration being unsupported. With this update, configuration validation rejects unsupported outputs, resolving the issue. ( LOG-4237 ) Before this update, when the Vector collector was updated, an enabled = true value in the TLS configuration for AWS CloudWatch logs and GCP Stackdriver caused a configuration error. With this update, the enabled = true value is removed for these outputs, resolving the issue. ( LOG-4242 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4275 ) Before this update, an issue in the Loki Operator caused the alert-manager configuration for the application tenant to disappear if the Operator was configured with additional options for that tenant. With this update, the generated Loki configuration now contains both the custom and the auto-generated configuration. ( LOG-4361 ) Before this update, when multiple roles were used to authenticate using STS with AWS CloudWatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS CloudWatch. ( LOG-4368 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4389 ) Before this update, pipelines with no name field specified in the ClusterLogForwarder custom resource (CR) stopped working after upgrading to OpenShift Logging 5.7. With this update, the error has been resolved. ( LOG-4120 ) 1.3.11.2. CVEs CVE-2022-25883 CVE-2023-22796 1.3.12. Logging 5.7.3 This release includes OpenShift Logging Bug Fix Release 5.7.3 . 1.3.12.1. Bug fixes Before this update, when viewing logs within the OpenShift Container Platform web console, cached files caused the data to not refresh. With this update, the bootstrap files are not cached, resolving the issue. 
( LOG-4100 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4156 ) Before this update, the LokiStack ruler did not restart after changes were made to the RulerConfig custom resource (CR). With this update, the Loki Operator restarts the ruler pods after the RulerConfig CR is updated. ( LOG-4161 ) Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4176 ) Before this update, the Loki Operator terminated unexpectedly when a LokiStack CR defined tenant limits, but not global limits. With this update, the Loki Operator can process LokiStack CRs without global limits, resolving the issue. ( LOG-4198 ) Before this update, Fluentd did not send logs to an Elasticsearch cluster when the private key provided was passphrase-protected. With this update, Fluentd properly handles passphrase-protected private keys when establishing a connection with Elasticsearch. ( LOG-4258 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4277 ) Before this update, label values containing a / character within the ClusterLogForwarder CR would cause the collector to terminate unexpectedly. With this update, slashes are replaced with underscores, resolving the issue. ( LOG-4095 ) Before this update, the Cluster Logging Operator terminated unexpectedly when set to an unmanaged state. With this update, a check ensures that the ClusterLogging resource is in the correct Management state before the reconciliation of the ClusterLogForwarder CR is initiated, resolving the issue. ( LOG-4177 ) Before this update, when viewing logs within the OpenShift Container Platform web console, selecting a time range by dragging over the histogram did not work on the aggregated logs view inside the pod detail. With this update, the time range can be selected by dragging on the histogram in this view. ( LOG-4108 ) Before this update, when viewing logs within the OpenShift Container Platform web console, queries longer than 30 seconds timed out. With this update, the timeout value can be configured in the configmap/logging-view-plugin. ( LOG-3498 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. ( OU-188 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-166 ) 1.3.12.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26115 CVE-2023-26136 CVE-2023-26604 CVE-2023-28466 1.3.13. Logging 5.7.2 This release includes OpenShift Logging Bug Fix Release 5.7.2 . 1.3.13.1. 
Bug fixes Before this update, it was not possible to delete the openshift-logging namespace directly due to the presence of a pending finalizer. With this update, the finalizer is no longer utilized, enabling direct deletion of the namespace. ( LOG-3316 ) Before this update, the run.sh script would display an incorrect chunk_limit_size value if it was changed according to the OpenShift Container Platform documentation. However, when setting the chunk_limit_size via the environment variable $BUFFER_SIZE_LIMIT , the script would show the correct value. With this update, the run.sh script now consistently displays the correct chunk_limit_size value in both scenarios. ( LOG-3330 ) Before this update, the OpenShift Container Platform web console's logging view plugin did not allow for custom node placement or tolerations. This update adds the ability to define node placement and tolerations for the logging view plugin. ( LOG-3749 ) Before this update, the Cluster Logging Operator encountered an Unsupported Media Type exception when trying to send logs to DataDog via the Fluentd HTTP Plugin. With this update, users can seamlessly assign the content type for log forwarding by configuring the HTTP header Content-Type. The value provided is automatically assigned to the content_type parameter within the plugin, ensuring successful log transmission. ( LOG-3784 ) Before this update, when the detectMultilineErrors field was set to true in the ClusterLogForwarder custom resource (CR), PHP multi-line errors were recorded as separate log entries, causing the stack trace to be split across multiple messages. With this update, multi-line error detection for PHP is enabled, ensuring that the entire stack trace is included in a single log message. ( LOG-3878 ) Before this update, ClusterLogForwarder pipelines containing a space in their name caused the Vector collector pods to continuously crash. With this update, all spaces, dashes (-), and dots (.) in pipeline names are replaced with underscores (_). ( LOG-3945 ) Before this update, the log_forwarder_output metric did not include the http parameter. This update adds the missing parameter to the metric. ( LOG-3997 ) Before this update, Fluentd did not identify some multi-line JavaScript client exceptions when they ended with a colon. With this update, the Fluentd buffer name is prefixed with an underscore, resolving the issue. ( LOG-4019 ) Before this update, when configuring log forwarding to write to a Kafka output topic which matched a key in the payload, logs were dropped due to an error. With this update, Fluentd's buffer name has been prefixed with an underscore, resolving the issue. ( LOG-4027 ) Before this update, the LokiStack gateway returned label values for namespaces without applying the access rights of a user. With this update, the LokiStack gateway applies permissions to label value requests, resolving the issue. ( LOG-4049 ) Before this update, the Cluster Logging Operator API required a certificate to be provided by a secret when the tls.insecureSkipVerify option was set to true . With this update, the Cluster Logging Operator API no longer requires a certificate to be provided by a secret in such cases. The following configuration has been added to the Operator's CR: tls.verify_certificate = false tls.verify_hostname = false ( LOG-3445 ) Before this update, the LokiStack route configuration caused queries running longer than 30 seconds to time out. 
With this update, the LokiStack global and per-tenant queryTimeout settings affect the route timeout settings, resolving the issue. ( LOG-4052 ) Before this update, a prior fix to remove defaulting of the collection.type resulted in the Operator no longer honoring the deprecated specs for resource, node selections, and tolerations. This update modifies the Operator behavior to always prefer the collection.logs spec over those of collection . This differs from the previous behavior, which allowed using both the preferred and deprecated fields but ignored the deprecated fields when collection.type was populated. ( LOG-4185 ) Before this update, the Vector log collector did not generate TLS configuration for forwarding logs to multiple Kafka brokers if the broker URLs were not specified in the output. With this update, TLS configuration is generated appropriately for multiple brokers. ( LOG-4163 ) Before this update, the option to enable a passphrase for log forwarding to Kafka was unavailable. This limitation presented a security risk as it could potentially expose sensitive information. With this update, users now have a seamless option to enable a passphrase for log forwarding to Kafka. ( LOG-3314 ) Before this update, the Vector log collector did not honor the tlsSecurityProfile settings for outgoing TLS connections. After this update, Vector handles TLS connection settings appropriately. ( LOG-4011 ) Before this update, not all available output types were included in the log_forwarder_output_info metrics. With this update, metrics contain Splunk and Google Cloud Logging data, which was missing previously. ( LOG-4098 ) Before this update, when follow_inodes was set to true , the Fluentd collector could crash on file rotation. With this update, the follow_inodes setting does not crash the collector. ( LOG-4151 ) Before this update, the Fluentd collector could incorrectly close files that should be watched because of how those files were tracked. With this update, the tracking parameters have been corrected. ( LOG-4149 ) Before this update, forwarding logs with the Vector collector and naming a pipeline in the ClusterLogForwarder instance audit , application or infrastructure resulted in collector pods staying in the CrashLoopBackOff state with the following error in the collector log: ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit . After this update, pipeline names no longer clash with reserved input names, and pipelines can be named audit , application or infrastructure . ( LOG-4218 ) Before this update, when forwarding logs to a syslog destination with the Vector collector and setting the addLogSource flag to true , the following extra empty fields were added to the forwarded messages: namespace_name= , container_name= , and pod_name= . With this update, these fields are no longer added to journal logs. ( LOG-4219 ) Before this update, when a structuredTypeKey was not found and a structuredTypeName was not specified, log messages were still parsed into a structured object. With this update, parsing of logs is as expected. ( LOG-4220 ) 1.3.13.2. 
CVEs CVE-2021-26341 CVE-2021-33655 CVE-2021-33656 CVE-2022-1462 CVE-2022-1679 CVE-2022-1789 CVE-2022-2196 CVE-2022-2663 CVE-2022-3028 CVE-2022-3239 CVE-2022-3522 CVE-2022-3524 CVE-2022-3564 CVE-2022-3566 CVE-2022-3567 CVE-2022-3619 CVE-2022-3623 CVE-2022-3625 CVE-2022-3627 CVE-2022-3628 CVE-2022-3707 CVE-2022-3970 CVE-2022-4129 CVE-2022-20141 CVE-2022-25147 CVE-2022-25265 CVE-2022-30594 CVE-2022-36227 CVE-2022-39188 CVE-2022-39189 CVE-2022-41218 CVE-2022-41674 CVE-2022-42703 CVE-2022-42720 CVE-2022-42721 CVE-2022-42722 CVE-2022-43750 CVE-2022-47929 CVE-2023-0394 CVE-2023-0461 CVE-2023-1195 CVE-2023-1582 CVE-2023-2491 CVE-2023-22490 CVE-2023-23454 CVE-2023-23946 CVE-2023-25652 CVE-2023-25815 CVE-2023-27535 CVE-2023-29007 1.3.14. Logging 5.7.1 This release includes: OpenShift Logging Bug Fix Release 5.7.1 . 1.3.14.1. Bug fixes Before this update, the presence of numerous noisy messages within the Cluster Logging Operator pod logs caused reduced log readability, and increased difficulty in identifying important system events. With this update, the issue is resolved by significantly reducing the noisy messages within Cluster Logging Operator pod logs. ( LOG-3482 ) Before this update, the API server would reset the value for the CollectorSpec.Type field to vector , even when the custom resource used a different value. This update removes the default for the CollectorSpec.Type field to restore the behavior. ( LOG-4086 ) Before this update, a time range could not be selected in the OpenShift Container Platform web console by clicking and dragging over the logs histogram. With this update, clicking and dragging can be used to successfully select a time range. ( LOG-4501 ) Before this update, clicking on the Show Resources link in the OpenShift Container Platform web console did not produce any effect. With this update, the issue is resolved by fixing the functionality of the "Show Resources" link to toggle the display of resources for each log entry. ( LOG-3218 ) 1.3.14.2. CVEs CVE-2023-21930 CVE-2023-21937 CVE-2023-21938 CVE-2023-21939 CVE-2023-21954 CVE-2023-21967 CVE-2023-21968 CVE-2023-28617 1.3.15. Logging 5.7.0 This release includes OpenShift Logging Bug Fix Release 5.7.0 . 1.3.15.1. Enhancements With this update, you can enable logging to detect multi-line exceptions and reassemble them into a single log entry. To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field, with a value of true . 1.3.15.2. Known Issues None. 1.3.15.3. Bug fixes Before this update, the nodeSelector attribute for the Gateway component of the LokiStack did not impact node scheduling. With this update, the nodeSelector attribute works as expected. ( LOG-3713 ) 1.3.15.4. CVEs CVE-2023-1999 CVE-2023-28617 1.4. Logging 5.6 Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Note The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y , where x.y represents the major and minor version of logging you have installed. For example, stable-5.7 . 1.4.1. Logging 5.6.27 This release includes RHBA-2024:10988 . 1.4.1.1. Bug fixes None. 1.4.1.2. 
CVEs CVE-2018-12699 CVE-2019-12900 CVE-2024-9287 CVE-2024-10041 CVE-2024-10963 CVE-2024-11168 CVE-2024-35195 CVE-2024-47875 CVE-2024-50602 1.4.2. Logging 5.6.26 This release includes RHBA-2024:10050 . 1.4.2.1. Bug fixes None. 1.4.2.2. CVEs CVE-2022-48773 CVE-2022-48936 CVE-2023-48161 CVE-2023-52492 CVE-2024-3596 CVE-2024-5535 CVE-2024-7006 CVE-2024-21208 CVE-2024-21210 CVE-2024-21217 CVE-2024-21235 CVE-2024-24857 CVE-2024-26851 CVE-2024-26924 CVE-2024-26976 CVE-2024-27017 CVE-2024-27062 CVE-2024-35839 CVE-2024-35898 CVE-2024-35939 CVE-2024-38540 CVE-2024-38541 CVE-2024-38586 CVE-2024-38608 CVE-2024-39503 CVE-2024-40924 CVE-2024-40961 CVE-2024-40983 CVE-2024-40984 CVE-2024-41009 CVE-2024-41042 CVE-2024-41066 CVE-2024-41092 CVE-2024-41093 CVE-2024-42070 CVE-2024-42079 CVE-2024-42244 CVE-2024-42284 CVE-2024-42292 CVE-2024-42301 CVE-2024-43854 CVE-2024-43880 CVE-2024-43889 CVE-2024-43892 CVE-2024-44935 CVE-2024-44989 CVE-2024-44990 CVE-2024-45018 CVE-2024-46826 CVE-2024-47668 1.4.3. Logging 5.6.25 This release includes OpenShift Logging Bug Fix Release 5.6.25 . 1.4.3.1. Bug fixes None. 1.4.3.2. CVEs CVE-2021-46984 CVE-2021-47097 CVE-2021-47101 CVE-2021-47287 CVE-2021-47289 CVE-2021-47321 CVE-2021-47338 CVE-2021-47352 CVE-2021-47383 CVE-2021-47384 CVE-2021-47385 CVE-2021-47386 CVE-2021-47393 CVE-2021-47412 CVE-2021-47432 CVE-2021-47441 CVE-2021-47455 CVE-2021-47466 CVE-2021-47497 CVE-2021-47527 CVE-2021-47560 CVE-2021-47582 CVE-2021-47609 CVE-2022-48619 CVE-2022-48754 CVE-2022-48760 CVE-2022-48804 CVE-2022-48836 CVE-2022-48866 CVE-2023-6040 CVE-2023-37920 CVE-2023-52470 CVE-2023-52476 CVE-2023-52478 CVE-2023-52522 CVE-2023-52605 CVE-2023-52683 CVE-2023-52798 CVE-2023-52800 CVE-2023-52809 CVE-2023-52817 CVE-2023-52840 CVE-2024-2398 CVE-2024-4032 CVE-2024-5535 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-23848 CVE-2024-24791 CVE-2024-26595 CVE-2024-26600 CVE-2024-26638 CVE-2024-26645 CVE-2024-26649 CVE-2024-26665 CVE-2024-26717 CVE-2024-26720 CVE-2024-26769 CVE-2024-26846 CVE-2024-26855 CVE-2024-26880 CVE-2024-26894 CVE-2024-26923 CVE-2024-26939 CVE-2024-27013 CVE-2024-27042 CVE-2024-34155 CVE-2024-34156 CVE-2024-34158 CVE-2024-35809 CVE-2024-35877 CVE-2024-35884 CVE-2024-35944 CVE-2024-47101 CVE-2024-36883 CVE-2024-36901 CVE-2024-36902 CVE-2024-36919 CVE-2024-36920 CVE-2024-36922 CVE-2024-36939 CVE-2024-36953 CVE-2024-37356 CVE-2024-38558 CVE-2024-38559 CVE-2024-38570 CVE-2024-38579 CVE-2024-38581 CVE-2024-38619 CVE-2024-39471 CVE-2024-39499 CVE-2024-39501 CVE-2024-39506 CVE-2024-40901 CVE-2024-40904 CVE-2024-40911 CVE-2024-40912 CVE-2024-40929 CVE-2024-40931 CVE-2024-40941 CVE-2024-40954 CVE-2024-40958 CVE-2024-40959 CVE-2024-40960 CVE-2024-40972 CVE-2024-40977 CVE-2024-40978 CVE-2024-40988 CVE-2024-40989 CVE-2024-40995 CVE-2024-40997 CVE-2024-40998 CVE-2024-41005 CVE-2024-41007 CVE-2024-41008 CVE-2024-41012 CVE-2024-41013 CVE-2024-41014 CVE-2024-41023 CVE-2024-41035 CVE-2024-41038 CVE-2024-41039 CVE-2024-41040 CVE-2024-41041 CVE-2024-41044 CVE-2024-41055 CVE-2024-41056 CVE-2024-41060 CVE-2024-41064 CVE-2024-41065 CVE-2024-41071 CVE-2024-41076 CVE-2024-41090 CVE-2024-41091 CVE-2024-41097 CVE-2024-42084 CVE-2024-42090 CVE-2024-42094 CVE-2024-42096 CVE-2024-42114 CVE-2024-42124 CVE-2024-42131 CVE-2024-42152 CVE-2024-42154 CVE-2024-42225 CVE-2024-42226 CVE-2024-42228 CVE-2024-42237 CVE-2024-42238 CVE-2024-42240 CVE-2024-42246 CVE-2024-42265 CVE-2024-42322 CVE-2024-43830 CVE-2024-43871 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, 
review Severity ratings . 1.4.4. Logging 5.6.24 This release includes OpenShift Logging Bug Fix Release 5.6.24 . 1.4.4.1. Bug fixes None. 1.4.4.2. CVEs CVE-2024-2398 CVE-2024-4032 CVE-2024-6104 CVE-2024-6232 CVE-2024-6345 CVE-2024-6923 CVE-2024-30203 CVE-2024-30205 CVE-2024-39331 CVE-2024-45490 CVE-2024-45491 CVE-2024-45492 Note For detailed information on Red Hat security ratings, review Severity ratings . 1.4.5. Logging 5.6.23 This release includes OpenShift Logging Bug Fix Release 5.6.23 . 1.4.5.1. Bug fixes None. 1.4.5.2. CVEs CVE-2018-15209 CVE-2021-46939 CVE-2021-47018 CVE-2021-47257 CVE-2021-47284 CVE-2021-47304 CVE-2021-47373 CVE-2021-47408 CVE-2021-47461 CVE-2021-47468 CVE-2021-47491 CVE-2021-47548 CVE-2021-47579 CVE-2021-47624 CVE-2022-48632 CVE-2022-48743 CVE-2022-48747 CVE-2022-48757 CVE-2023-6228 CVE-2023-25433 CVE-2023-28746 CVE-2023-52356 CVE-2023-52451 CVE-2023-52463 CVE-2023-52469 CVE-2023-52471 CVE-2023-52486 CVE-2023-52530 CVE-2023-52619 CVE-2023-52622 CVE-2023-52623 CVE-2023-52648 CVE-2023-52653 CVE-2023-52658 CVE-2023-52662 CVE-2023-52679 CVE-2023-52707 CVE-2023-52730 CVE-2023-52756 CVE-2023-52762 CVE-2023-52764 CVE-2023-52775 CVE-2023-52777 CVE-2023-52784 CVE-2023-52791 CVE-2023-52796 CVE-2023-52803 CVE-2023-52811 CVE-2023-52832 CVE-2023-52834 CVE-2023-52845 CVE-2023-52847 CVE-2023-52864 CVE-2024-2201 CVE-2024-2398 CVE-2024-6345 CVE-2024-21131 CVE-2024-21138 CVE-2024-21140 CVE-2024-21144 CVE-2024-21145 CVE-2024-21147 CVE-2024-21823 CVE-2024-25739 CVE-2024-26586 CVE-2024-26614 CVE-2024-26640 CVE-2024-26660 CVE-2024-26669 CVE-2024-26686 CVE-2024-26698 CVE-2024-26704 CVE-2024-26733 CVE-2024-26740 CVE-2024-26772 CVE-2024-26773 CVE-2024-26802 CVE-2024-26810 CVE-2024-26837 CVE-2024-26840 CVE-2024-26843 CVE-2024-26852 CVE-2024-26853 CVE-2024-26870 CVE-2024-26878 CVE-2024-26908 CVE-2024-26921 CVE-2024-26925 CVE-2024-26940 CVE-2024-26958 CVE-2024-26960 CVE-2024-26961 CVE-2024-27010 CVE-2024-27011 CVE-2024-27019 CVE-2024-27020 CVE-2024-27025 CVE-2024-27065 CVE-2024-27388 CVE-2024-27395 CVE-2024-27434 CVE-2024-31076 CVE-2024-33621 CVE-2024-35790 CVE-2024-35801 CVE-2024-35807 CVE-2024-35810 CVE-2024-35814 CVE-2024-35823 CVE-2024-35824 CVE-2024-35847 CVE-2024-35876 CVE-2024-35893 CVE-2024-35896 CVE-2024-35897 CVE-2024-35899 CVE-2024-35900 CVE-2024-35910 CVE-2024-35912 CVE-2024-35924 CVE-2024-35925 CVE-2024-35930 CVE-2024-35937 CVE-2024-35938 CVE-2024-35946 CVE-2024-35947 CVE-2024-35952 CVE-2024-36000 CVE-2024-36005 CVE-2024-36006 CVE-2024-36010 CVE-2024-36016 CVE-2024-36017 CVE-2024-36020 CVE-2024-36025 CVE-2024-36270 CVE-2024-36286 CVE-2024-36489 CVE-2024-36886 CVE-2024-36889 CVE-2024-36896 CVE-2024-36904 CVE-2024-36905 CVE-2024-36917 CVE-2024-36921 CVE-2024-36927 CVE-2024-36929 CVE-2024-36933 CVE-2024-36940 CVE-2024-36941 CVE-2024-36945 CVE-2024-36950 CVE-2024-36954 CVE-2024-36960 CVE-2024-36971 CVE-2024-36978 CVE-2024-36979 CVE-2024-37370 CVE-2024-37371 CVE-2024-37891 CVE-2024-38428 CVE-2024-38538 CVE-2024-38555 CVE-2024-38573 CVE-2024-38575 CVE-2024-38596 CVE-2024-38598 CVE-2024-38615 CVE-2024-38627 CVE-2024-39276 CVE-2024-39472 CVE-2024-39476 CVE-2024-39487 CVE-2024-39502 CVE-2024-40927 CVE-2024-40974 1.4.6. Logging 5.6.22 This release includes OpenShift Logging Bug Fix 5.6.22 1.4.6.1. Bug fixes Before this update, the Loki Operator overwrote user annotations on the LokiStack Route resource, causing customizations to drop. With this update, the Loki Operator no longer overwrites Route annotations, fixing the issue. ( LOG-5947 ) 1.4.6.2. 
CVEs CVE-2023-2953 CVE-2024-3651 CVE-2024-24806 CVE-2024-28182 CVE-2024-35235 1.4.7. Logging 5.6.21 This release includes OpenShift Logging Bug Fix 5.6.21 1.4.7.1. Bug fixes Before this update, LokiStack was missing a route for the Volume API, which caused the following error: 404 not found . With this update, LokiStack exposes the Volume API, resolving the issue. ( LOG-5751 ) 1.4.7.2. CVEs CVE-2020-26555 CVE-2021-46909 CVE-2021-46972 CVE-2021-47069 CVE-2021-47073 CVE-2021-47236 CVE-2021-47310 CVE-2021-47311 CVE-2021-47353 CVE-2021-47356 CVE-2021-47456 CVE-2021-47495 CVE-2022-48624 CVE-2023-2953 CVE-2023-5090 CVE-2023-52464 CVE-2023-52560 CVE-2023-52615 CVE-2023-52626 CVE-2023-52667 CVE-2023-52669 CVE-2023-52675 CVE-2023-52686 CVE-2023-52700 CVE-2023-52703 CVE-2023-52781 CVE-2023-52813 CVE-2023-52835 CVE-2023-52877 CVE-2023-52878 CVE-2023-52881 CVE-2024-3651 CVE-2024-24790 CVE-2024-24806 CVE-2024-26583 CVE-2024-26584 CVE-2024-26585 CVE-2024-26656 CVE-2024-26675 CVE-2024-26735 CVE-2024-26759 CVE-2024-26801 CVE-2024-26804 CVE-2024-26826 CVE-2024-26859 CVE-2024-26906 CVE-2024-26907 CVE-2024-26974 CVE-2024-26982 CVE-2024-27397 CVE-2024-27410 CVE-2024-28182 CVE-2024-32002 CVE-2024-32004 CVE-2024-32020 CVE-2024-32021 CVE-2024-32465 CVE-2024-32487 CVE-2024-35235 CVE-2024-35789 CVE-2024-35835 CVE-2024-35838 CVE-2024-35845 CVE-2024-35852 CVE-2024-35853 CVE-2024-35854 CVE-2024-35855 CVE-2024-35888 CVE-2024-35890 CVE-2024-35958 CVE-2024-35959 CVE-2024-35960 CVE-2024-36004 CVE-2024-36007 1.4.8. Logging 5.6.20 This release includes OpenShift Logging Bug Fix 5.6.20 1.4.8.1. Bug fixes Before this update, there was a delay in restarting Ingesters when configuring LokiStack , because the Loki Operator sets the write-ahead log replay_memory_ceiling to zero bytes for the 1x.demo size. With this update, the minimum value used for the replay_memory_ceiling has been increased to avoid delays. ( LOG-5617 ) 1.4.8.2. CVEs CVE-2019-25162 CVE-2020-15778 CVE-2020-36777 CVE-2021-43618 CVE-2021-46934 CVE-2021-47013 CVE-2021-47055 CVE-2021-47118 CVE-2021-47153 CVE-2021-47171 CVE-2021-47185 CVE-2022-4645 CVE-2022-48627 CVE-2022-48669 CVE-2023-6004 CVE-2023-6240 CVE-2023-6597 CVE-2023-6918 CVE-2023-7008 CVE-2023-43785 CVE-2023-43786 CVE-2023-43787 CVE-2023-43788 CVE-2023-43789 CVE-2023-52439 CVE-2023-52445 CVE-2023-52477 CVE-2023-52513 CVE-2023-52520 CVE-2023-52528 CVE-2023-52565 CVE-2023-52578 CVE-2023-52594 CVE-2023-52595 CVE-2023-52598 CVE-2023-52606 CVE-2023-52607 CVE-2023-52610 CVE-2024-0340 CVE-2024-0450 CVE-2024-22365 CVE-2024-23307 CVE-2024-25062 CVE-2024-25744 CVE-2024-26458 CVE-2024-26461 CVE-2024-26593 CVE-2024-26603 CVE-2024-26610 CVE-2024-26615 CVE-2024-26642 CVE-2024-26643 CVE-2024-26659 CVE-2024-26664 CVE-2024-26693 CVE-2024-26694 CVE-2024-26743 CVE-2024-26744 CVE-2024-26779 CVE-2024-26872 CVE-2024-26892 CVE-2024-26987 CVE-2024-26901 CVE-2024-26919 CVE-2024-26933 CVE-2024-26934 CVE-2024-26964 CVE-2024-26973 CVE-2024-26993 CVE-2024-27014 CVE-2024-27048 CVE-2024-27052 CVE-2024-27056 CVE-2024-27059 CVE-2024-28834 CVE-2024-33599 CVE-2024-33600 CVE-2024-33601 CVE-2024-33602 1.4.9. Logging 5.6.19 This release includes OpenShift Logging Bug Fix 5.6.19 1.4.9.1. Bug fixes Before this update, an issue in the metrics collection code of the Logging Operator caused it to report stale telemetry metrics. With this update, the Logging Operator does not report stale telemetry metrics. ( LOG-5529 ) 1.4.9.2. 
CVEs CVE-2023-45288 CVE-2023-52425 CVE-2024-2961 CVE-2024-21011 CVE-2024-21012 CVE-2024-21068 CVE-2024-21085 CVE-2024-21094 CVE-2024-28834 1.4.10. Logging 5.6.18 This release includes OpenShift Logging Bug Fix 5.6.18 1.4.10.1. Enhancements Before this update, the Loki Operator set up Loki to use path-based style access for the Amazon Simple Storage Service (S3), which has been deprecated. With this update, the Loki Operator defaults to virtual-host style without users needing to change their configuration. ( LOG-5404 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint used in the storage secret. With this update, the validation process ensures the S3 endpoint is a valid S3 URL, and the LokiStack status updates to indicate any invalid URLs. ( LOG-5396 ) 1.4.10.2. Bug fixes Before this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and certificate authority (CA) files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring specification on the ServiceMonitor configuration. With this update, the Elasticsearch Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. This approach allows the User Workload Monitoring specifications in the Prometheus Operator to handle the Elasticsearch Operator ServiceMonitor successfully. This enables Prometheus to scrape the Elasticsearch Operator metrics. ( LOG-5244 ) Before this update, the Loki Operator did not validate the Amazon Simple Storage Service (S3) endpoint URL format used in the storage secret. With this update, the S3 endpoint URL goes through a validation step that reflects on the status of the LokiStack . ( LOG-5400 ) 1.4.10.3. CVEs CVE-2021-33631 CVE-2021-43618 CVE-2022-38096 CVE-2022-48624 CVE-2023-6546 CVE-2023-6931 CVE-2023-28322 CVE-2023-38546 CVE-2023-46218 CVE-2023-51042 CVE-2024-0565 CVE-2024-1086 1.4.11. Logging 5.6.17 This release includes OpenShift Logging Bug Fix 5.6.17 1.4.11.1. Bug fixes Before this update, the Red Hat build pipeline did not use the existing build details in Loki builds and omitted information such as revision, branch, and version. With this update, the Red Hat build pipeline now adds these details to the Loki builds, fixing the issue. ( LOG-5203 ) Before this update, the configuration of the ServiceMonitor by the Loki Operator could match many Kubernetes services, which led to the Loki Operator's metrics being collected multiple times. With this update, the ServiceMonitor setup now only matches the dedicated metrics service. ( LOG-5252 ) Before this update, the build pipeline did not include linker flags for the build date, causing Loki builds to show empty strings for buildDate and goVersion . With this update, adding the missing linker flags in the build pipeline fixes the issue. ( LOG-5276 ) Before this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace used static token and CA files for authentication, causing errors in the Prometheus Operator in the User Workload Monitoring spec on the ServiceMonitor configuration. With this update, the Loki Operator ServiceMonitor in the openshift-operators-redhat namespace now references a service account token secret by a LocalReference object. 
This approach allows the User Workload Monitoring spec in the Prometheus Operator to handle the Loki Operator ServiceMonitor successfully, enabling Prometheus to scrape the Loki Operator metrics. ( LOG-5242 ) 1.4.11.2. CVEs CVE-2021-35937 CVE-2021-35938 CVE-2021-35939 CVE-2024-24786 1.4.12. Logging 5.6.16 This release includes Logging Bug Fix 5.6.16 1.4.12.1. Bug fixes Before this update, when configured to read a custom S3 Certificate Authority the Loki Operator would not automatically update the configuration when the name of the ConfigMap or the contents changed. With this update, the Loki Operator is watching for changes to the ConfigMap and automatically updates the generated configuration. ( LOG-4967 ) 1.4.12.2. CVEs 1.4.13. Logging 5.6.15 This release includes OpenShift Logging Bug Fix Release 5.6.15 . 1.4.13.1. Bug fixes Before this update, the LokiStack ruler pods would not format the IPv6 pod IP in HTTP URLs used for cross pod communication, causing querying rules and alerts through the Prometheus-compatible API to fail. With this update, the LokiStack ruler pods encapsulate the IPv6 pod IP in square brackets, resolving the issue. ( LOG-4892 ) 1.4.13.2. CVEs CVE-2021-3468 CVE-2023-3446 CVE-2023-3817 CVE-2023-5678 CVE-2023-38469 CVE-2023-38470 CVE-2023-38471 CVE-2023-38472 CVE-2023-38473 1.4.14. Logging 5.6.14 This release includes OpenShift Logging Bug Fix Release 5.6.14 . 1.4.14.1. Bug fixes Before this update, during the process of creating index patterns, the default alias was missing from the initial index in each log output. As a result, Kibana users were unable to create index patterns by using OpenShift Elasticsearch Operator. This update adds the missing aliases to OpenShift Elasticsearch Operator, resolving the issue. Kibana users can now create index patterns that include the {app,infra,audit}-000001 indexes. ( LOG-4807 ) Before this update, the Loki Operator did not mount a custom CA bundle to the ruler pods. As a result, during the process to evaluate alerting or recording rules, object storage access failed. With this update, the Loki Operator mounts the custom CA bundle to all ruler pods. The ruler pods can download logs from object storage to evaluate alerting or recording rules. ( LOG-4838 ) 1.4.14.2. CVEs CVE-2007-4559 CVE-2021-43975 CVE-2022-3594 CVE-2022-3640 CVE-2022-4744 CVE-2022-28388 CVE-2022-38457 CVE-2022-40133 CVE-2022-40982 CVE-2022-41862 CVE-2022-42895 CVE-2022-45869 CVE-2022-45887 CVE-2022-48337 CVE-2022-48339 CVE-2023-0458 CVE-2023-0590 CVE-2023-0597 CVE-2023-1073 CVE-2023-1074 CVE-2023-1075 CVE-2023-1079 CVE-2023-1118 CVE-2023-1206 CVE-2023-1252 CVE-2023-1382 CVE-2023-1855 CVE-2023-1981 CVE-2023-1989 CVE-2023-1998 CVE-2023-2513 CVE-2023-3138 CVE-2023-3141 CVE-2023-3161 CVE-2023-3212 CVE-2023-3268 CVE-2023-3609 CVE-2023-3611 CVE-2023-3772 CVE-2023-4016 CVE-2023-4128 CVE-2023-4132 CVE-2023-4155 CVE-2023-4206 CVE-2023-4207 CVE-2023-4208 CVE-2023-4641 CVE-2023-4732 CVE-2023-22745 CVE-2023-23455 CVE-2023-26545 CVE-2023-28328 CVE-2023-28772 CVE-2023-30456 CVE-2023-31084 CVE-2023-31436 CVE-2023-31486 CVE-2023-32324 CVE-2023-33203 CVE-2023-33951 CVE-2023-33952 CVE-2023-34241 CVE-2023-35823 CVE-2023-35824 CVE-2023-35825 1.4.15. Logging 5.6.13 This release includes OpenShift Logging Bug Fix Release 5.6.13 . 1.4.15.1. Bug fixes None. 1.4.15.2. CVEs CVE-2023-40217 CVE-2023-44487 1.4.16. Logging 5.6.12 This release includes OpenShift Logging Bug Fix Release 5.6.12 . 1.4.16.1. 
Bug fixes Before this update, deploying a LokiStack on IPv6-only or dual-stack OpenShift Container Platform clusters caused the LokiStack memberlist registration to fail. As a result, the distributor pods went into a crash loop. With this update, an administrator can enable IPv6 by setting the lokistack.spec.hashRing.memberlist.enableIPv6: value to true , which resolves the issue. Currently, the log alert is not available on an IPv6-enabled cluster. ( LOG-4570 ) Before this update, there was an error in the query used for the FluentD Buffer Availability graph in the metrics dashboard created by the Cluster Logging Operator as it showed the minimum buffer usage. With this update, the graph shows the maximum buffer usage and is now renamed to FluentD Buffer Usage . ( LOG-4579 ) Before this update, the unused metrics in the Event Router caused the container to fail due to excessive memory usage. With this update, there is reduction in the memory usage of the Event Router by removing the unused metrics. ( LOG-4687 ) 1.4.16.2. CVEs CVE-2023-0800 CVE-2023-0801 CVE-2023-0802 CVE-2023-0803 CVE-2023-0804 CVE-2023-2002 CVE-2023-3090 CVE-2023-3390 CVE-2023-3776 CVE-2023-4004 CVE-2023-4527 CVE-2023-4806 CVE-2023-4813 CVE-2023-4863 CVE-2023-4911 CVE-2023-5129 CVE-2023-20593 CVE-2023-29491 CVE-2023-30630 CVE-2023-35001 CVE-2023-35788 1.4.17. Logging 5.6.11 This release includes OpenShift Logging Bug Fix Release 5.6.11 . 1.4.17.1. Bug fixes Before this update, the LokiStack gateway cached authorized requests very broadly. As a result, this caused wrong authorization results. With this update, LokiStack gateway caches on a more fine-grained basis which resolves this issue. ( LOG-4435 ) 1.4.17.2. CVEs CVE-2023-3899 CVE-2023-32360 CVE-2023-34969 1.4.18. Logging 5.6.9 This release includes OpenShift Logging Bug Fix Release 5.6.9 . 1.4.18.1. Bug fixes Before this update, when multiple roles were used to authenticate using STS with AWS Cloudwatch forwarding, a recent update caused the credentials to be non-unique. With this update, multiple combinations of STS roles and static credentials can once again be used to authenticate with AWS Cloudwatch. ( LOG-4084 ) Before this update, the Vector collector occasionally panicked with the following error message in its log: thread 'vector-worker' panicked at 'all branches are disabled and there is no else branch', src/kubernetes/reflector.rs:26:9 . With this update, the error has been resolved. ( LOG-4276 ) Before this update, Loki filtered label values for active streams but did not remove duplicates, making Grafana's Label Browser unusable. With this update, Loki filters out duplicate label values for active streams, resolving the issue. ( LOG-4390 ) 1.4.18.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 CVE-2023-32233 1.4.19. Logging 5.6.8 This release includes OpenShift Logging Bug Fix Release 5.6.8 . 1.4.19.1. Bug fixes Before this update, the vector collector terminated unexpectedly when input match label values contained a / character within the ClusterLogForwarder . This update resolves the issue by quoting the match label, enabling the collector to start and collect logs. ( LOG-4091 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the more data available option loaded more log entries only the first time it was clicked. With this update, more entries are loaded with each click. 
( OU-187 ) Before this update, when viewing logs within the OpenShift Container Platform web console, clicking the streaming option would only display the streaming logs message without showing the actual logs. With this update, both the message and the log stream are displayed correctly. ( OU-189 ) Before this update, the Loki Operator reset errors in a way that made identifying configuration problems difficult to troubleshoot. With this update, errors persist until the configuration error is resolved. ( LOG-4158 ) Before this update, clusters with more than 8,000 namespaces caused Elasticsearch to reject queries because the list of namespaces was larger than the http.max_header_size setting. With this update, the default value for header size has been increased, resolving the issue. ( LOG-4278 ) 1.4.19.2. CVEs CVE-2020-24736 CVE-2022-48281 CVE-2023-1667 CVE-2023-2283 CVE-2023-24329 CVE-2023-26604 CVE-2023-28466 1.4.20. Logging 5.6.5 This release includes OpenShift Logging Bug Fix Release 5.6.5 . 1.4.20.1. Bug fixes Before this update, the template definitions prevented Elasticsearch from indexing some labels and namespace_labels, causing issues with data ingestion. With this update, the fix replaces dots and slashes in labels to ensure proper ingestion, effectively resolving the issue. ( LOG-3419 ) Before this update, if the Logs page of the OpenShift Web Console failed to connect to the LokiStack, a generic error message was displayed, providing no additional context or troubleshooting suggestions. With this update, the error message has been enhanced to include more specific details and recommendations for troubleshooting. ( LOG-3750 ) Before this update, time range formats were not validated, leading to errors selecting a custom date range. With this update, time formats are now validated, enabling users to select a valid range. If an invalid time range format is selected, an error message is displayed to the user. ( LOG-3583 ) Before this update, when searching logs in Loki, even if the length of an expression did not exceed 5120 characters, the query would fail in many cases. With this update, query authorization label matchers have been optimized, resolving the issue. ( LOG-3480 ) Before this update, the Loki Operator failed to produce a memberlist configuration that was sufficient for locating all the components when using a memberlist for private IPs. With this update, the fix ensures that the generated configuration includes the advertised port, allowing for successful lookup of all components. ( LOG-4008 ) 1.4.20.2. CVEs CVE-2022-4269 CVE-2022-4378 CVE-2023-0266 CVE-2023-0361 CVE-2023-0386 CVE-2023-27539 CVE-2023-28120 1.4.21. Logging 5.6.4 This release includes OpenShift Logging Bug Fix Release 5.6.4 . 1.4.21.1. Bug fixes Before this update, when LokiStack was deployed as the log store, the logs generated by Loki pods were collected and sent to LokiStack. With this update, the logs generated by Loki are excluded from collection and will not be stored. ( LOG-3280 ) Before this update, when the query editor on the Logs page of the OpenShift Web Console was empty, the drop-down menus did not populate. With this update, if an empty query is attempted, an error message is displayed and the drop-down menus now populate as expected. ( LOG-3454 ) Before this update, when the tls.insecureSkipVerify option was set to true , the Cluster Logging Operator would generate incorrect configuration. 
As a result, the operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Cluster Logging Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3475 ) Before this update, when structured parsing was enabled and messages were forwarded to multiple destinations, they were not deep copied. This resulted in some of the received logs including the structured message, while others did not. With this update, the configuration generation has been modified to deep copy messages before JSON parsing. As a result, all received messages now have structured messages included, even when they are forwarded to multiple destinations. ( LOG-3640 ) Before this update, if the collection field contained {} it could result in the Operator crashing. With this update, the Operator will ignore this value, allowing the operator to continue running smoothly without interruption. ( LOG-3733 ) Before this update, the nodeSelector attribute for the Gateway component of LokiStack did not have any effect. With this update, the nodeSelector attribute functions as expected. ( LOG-3783 ) Before this update, the static LokiStack memberlist configuration relied solely on private IP networks. As a result, when the OpenShift Container Platform cluster pod network was configured with a public IP range, the LokiStack pods would crashloop. With this update, the LokiStack administrator now has the option to use the pod network for the memberlist configuration. This resolves the issue and prevents the LokiStack pods from entering a crashloop state when the OpenShift Container Platform cluster pod network is configured with a public IP range. ( LOG-3814 ) Before this update, if the tls.insecureSkipVerify field was set to true , the Cluster Logging Operator would generate an incorrect configuration. As a result, the Operator would fail to send data to Elasticsearch when attempting to skip certificate validation. With this update, the Operator generates the correct TLS configuration even when tls.insecureSkipVerify is enabled. As a result, data can be sent successfully to Elasticsearch even when attempting to skip certificate validation. ( LOG-3838 ) Before this update, if the Cluster Logging Operator (CLO) was installed without the Elasticsearch Operator, the CLO pod would continuously display an error message related to the deletion of Elasticsearch. With this update, the CLO now performs additional checks before displaying any error messages. As a result, error messages related to Elasticsearch deletion are no longer displayed in the absence of the Elasticsearch Operator.( LOG-3763 ) 1.4.21.2. CVEs CVE-2022-4304 CVE-2022-4450 CVE-2023-0215 CVE-2023-0286 CVE-2023-0767 CVE-2023-23916 1.4.22. Logging 5.6.3 This release includes OpenShift Logging Bug Fix Release 5.6.3 . 1.4.22.1. Bug fixes Before this update, the operator stored gateway tenant secret information in a config map. With this update, the operator stores this information in a secret. ( LOG-3717 ) Before this update, the Fluentd collector did not capture OAuth login events stored in /var/log/auth-server/audit.log . With this update, Fluentd captures these OAuth login events, resolving the issue. ( LOG-3729 ) 1.4.22.2. 
CVEs CVE-2020-10735 CVE-2021-28861 CVE-2022-2873 CVE-2022-4415 CVE-2022-40897 CVE-2022-41222 CVE-2022-43945 CVE-2022-45061 CVE-2022-48303 1.4.23. Logging 5.6.2 This release includes OpenShift Logging Bug Fix Release 5.6.2 . 1.4.23.1. Bug fixes Before this update, the collector did not set level fields correctly based on priority for systemd logs. With this update, level fields are set correctly. ( LOG-3429 ) Before this update, the Operator incorrectly generated incompatibility warnings on OpenShift Container Platform 4.12 or later. With this update, the Operator's maximum OpenShift Container Platform version value has been corrected, resolving the issue. ( LOG-3584 ) Before this update, creating a ClusterLogForwarder custom resource (CR) with an output value of default did not generate any errors. With this update, an error warning that this value is invalid is generated appropriately. ( LOG-3437 ) Before this update, when the ClusterLogForwarder custom resource (CR) had multiple pipelines configured with one output set as default , the collector pods restarted. With this update, the logic for output validation has been corrected, resolving the issue. ( LOG-3559 ) Before this update, collector pods restarted after being created. With this update, the deployed collector does not restart on its own. ( LOG-3608 ) Before this update, patch releases removed versions of the Operators from the catalog. This made installing the old versions impossible. This update changes bundle configurations so that releases of the same minor version stay in the catalog. ( LOG-3635 ) 1.4.23.2. CVEs CVE-2022-23521 CVE-2022-40303 CVE-2022-40304 CVE-2022-41903 CVE-2022-47629 CVE-2023-21835 CVE-2023-21843 1.4.24. Logging 5.6.1 This release includes OpenShift Logging Bug Fix Release 5.6.1 . 1.4.24.1. Bug fixes Before this update, the compactor would report TLS certificate errors from communications with the querier when retention was active. With this update, the compactor and querier no longer communicate erroneously over HTTP. ( LOG-3494 ) Before this update, the Loki Operator would not retry setting the status of the LokiStack CR, which caused stale status information. With this update, the Operator retries status information updates on conflict. ( LOG-3496 ) Before this update, the Loki Operator Webhook server caused TLS errors when the kube-apiserver-operator Operator checked the webhook validity. With this update, the Loki Operator Webhook PKI is managed by the Operator Lifecycle Manager (OLM), resolving the issue. ( LOG-3510 ) Before this update, the LokiStack Gateway Labels Enforcer generated parsing errors for valid LogQL queries when using combined label filters with boolean expressions. With this update, the LokiStack LogQL implementation supports label filters with boolean expressions, resolving the issue. ( LOG-3441 ), ( LOG-3397 ) Before this update, records written to Elasticsearch would fail if multiple label keys had the same prefix and some keys included dots. With this update, underscores replace dots in label keys, resolving the issue. ( LOG-3463 ) Before this update, the Red Hat OpenShift Logging Operator was not available for OpenShift Container Platform 4.10 clusters because of an incompatibility between OpenShift Container Platform console and the logging-view-plugin. With this update, the plugin is properly integrated with the OpenShift Container Platform 4.10 admin console. 
( LOG-3447 ) Before this update, the reconciliation of the ClusterLogForwarder custom resource would incorrectly report a degraded status of pipelines that reference the default logstore. With this update, the pipeline validates properly. ( LOG-3477 ) 1.4.24.2. CVEs CVE-2021-46848 CVE-2022-3821 CVE-2022-35737 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680 CVE-2021-35065 CVE-2022-46175 1.4.25. Logging 5.6.0 This release includes OpenShift Logging Release 5.6 . 1.4.25.1. Deprecation notice In logging version 5.6, Fluentd is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to Fluentd, you can use Vector instead. 1.4.25.2. Enhancements With this update, Logging is compliant with OpenShift Container Platform cluster-wide cryptographic policies. ( LOG-895 ) With this update, you can declare per-tenant, per-stream, and global retention policies through the LokiStack custom resource, ordered by priority. ( LOG-2695 ) With this update, Splunk is an available output option for log forwarding. ( LOG-2913 ) With this update, Vector replaces Fluentd as the default Collector. ( LOG-2222 ) With this update, the Developer role can access the per-project workload logs they are assigned to within the Log Console Plugin on clusters running OpenShift Container Platform 4.11 and higher. ( LOG-3388 ) With this update, logs from any source contain a field openshift.cluster_id , the unique identifier of the cluster in which the Operator is deployed. You can view the clusterID value by using the following command: $ oc get clusterversion/version -o jsonpath='{.spec.clusterID}{"\n"}' ( LOG-2715 ) 1.4.25.3. Known Issues Before this update, Elasticsearch would reject logs if multiple label keys had the same prefix and some keys included the . character. This update fixes the limitation of Elasticsearch by replacing . in the label keys with _ . As a workaround for this issue, remove the labels that cause errors, or add a namespace to the label. ( LOG-3463 ) 1.4.25.4. Bug fixes Before this update, if you deleted the Kibana Custom Resource, the OpenShift Container Platform web console continued displaying a link to Kibana. With this update, removing the Kibana Custom Resource also removes that link. ( LOG-2993 ) Before this update, a user was not able to view the application logs of namespaces they have access to. With this update, the Loki Operator automatically creates a cluster role and cluster role binding allowing users to read application logs. ( LOG-3072 ) Before this update, the Operator removed any custom outputs defined in the ClusterLogForwarder custom resource when using LokiStack as the default log storage. With this update, the Operator merges custom outputs with the default outputs when processing the ClusterLogForwarder custom resource. ( LOG-3090 ) Before this update, the CA key was used as the volume name for mounting the CA into Loki, causing error states when the CA key included non-conforming characters, such as dots. With this update, the volume name is standardized to an internal string, which resolves the issue. ( LOG-3331 ) Before this update, a default value set within the LokiStack Custom Resource Definition caused an inability to create a LokiStack instance without a ReplicationFactor of 1 . With this update, the Operator sets the actual value for the size used. 
( LOG-3296 ) Before this update, Vector parsed the message field when JSON parsing was enabled without also defining structuredTypeKey or structuredTypeName values. With this update, a value is required for either structuredTypeKey or structuredTypeName when writing structured logs to Elasticsearch. ( LOG-3195 ) Before this update, the secret creation component of the Elasticsearch Operator modified internal secrets constantly. With this update, the existing secret is properly handled. ( LOG-3161 ) Before this update, the Operator could enter a loop of removing and recreating the collector daemonset while the Elasticsearch or Kibana deployments changed their status. With this update, a fix in the status handling of the Operator resolves the issue. ( LOG-3157 ) Before this update, Kibana had a fixed 24h OAuth cookie expiration time, which resulted in 401 errors in Kibana whenever the accessTokenInactivityTimeout field was set to a value lower than 24h . With this update, Kibana's OAuth cookie expiration time synchronizes to the accessTokenInactivityTimeout , with a default value of 24h . ( LOG-3129 ) Before this update, the Operator's general pattern for reconciling resources was to try to create before attempting to get or update, which would lead to constant HTTP 409 responses after creation. With this update, Operators first attempt to retrieve an object and only create or update it if it is either missing or not as specified. ( LOG-2919 ) Before this update, the .level and .structure.level fields in Fluentd could contain different values. With this update, the values are the same for each field. ( LOG-2819 ) Before this update, the Operator did not wait for the population of the trusted CA bundle and deployed the collector a second time once the bundle updated. With this update, the Operator waits briefly to see if the bundle has been populated before it continues the collector deployment. ( LOG-2789 ) Before this update, logging telemetry info appeared twice when reviewing metrics. With this update, logging telemetry info displays as expected. ( LOG-2315 ) Before this update, Fluentd pod logs contained a warning message after enabling the JSON parsing addition. With this update, that warning message does not appear. ( LOG-1806 ) Before this update, the must-gather script did not complete because oc needs a folder with write permission to build its cache. With this update, oc has write permissions to a folder, and the must-gather script completes successfully. ( LOG-3446 ) Before this update, the log collector SCC could be superseded by other SCCs on the cluster, rendering the collector unusable. This update sets the priority of the log collector SCC so that it takes precedence over the others. ( LOG-3235 ) Before this update, Vector was missing the field sequence , which was added to Fluentd as a way to deal with a lack of actual nanosecond precision. With this update, the field openshift.sequence has been added to the event logs. ( LOG-3106 ) 1.4.25.5. CVEs CVE-2020-36518 CVE-2021-46848 CVE-2022-2879 CVE-2022-2880 CVE-2022-27664 CVE-2022-32190 CVE-2022-35737 CVE-2022-37601 CVE-2022-41715 CVE-2022-42003 CVE-2022-42004 CVE-2022-42010 CVE-2022-42011 CVE-2022-42012 CVE-2022-42898 CVE-2022-43680
[ "tls.verify_certificate = false tls.verify_hostname = false", "ERROR vector::cli: Configuration error. error=redefinition of table transforms.audit for key transforms.audit", "oc get clusterversion/version -o jsonpath='{.spec.clusterID}{\"\\n\"}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/logging/release-notes
5.2. Deprecated Features
5.2. Deprecated Features This chapter provides an overview of features that have been deprecated in all minor releases of Red Hat Virtualization. Deprecated features continue to be supported for a minimum of two minor release cycles before being fully removed. For the most recent list of deprecated features within a particular release, refer to the latest version of release documentation. Note Although support for deprecated features is typically removed after a few release cycles, some tasks may still require use of a deprecated feature. These exceptions are noted in the description of the deprecated feature. The following table describes deprecated features to be removed in a future version of Red Hat Virtualization. Table 5.2. Deprecated Features Deprecated Feature Details OpenStack Glance Support for OpenStack Glance is now deprecated. This functionality will be removed in a future release. Remote engine database A remote engine database is now deprecated, whether implemented during deployment or by migrating after deployment. This functionality will be removed from the deployment script in a future release. Cisco Virtual Machine Fabric Extender (VM-FEX) Support for the Cisco Virtual Machine Fabric Extender (VM-FEX) is now deprecated. This functionality will be removed in a future release. Export Domains Use a data domain instead. Migrate data domains between data centers and import virtual machines from a data domain into the new data center. In Red Hat Virtualization 4.4, some tasks may still require the export domain. ISO domains Use a data domain instead. Upload images to data domains . In Red Hat Virtualization 4.4, some tasks may still require the ISO domain. ovirt-guest-agent The ovirt-guest-agent project is no longer supported. Use qemu-guest-agent version 2.12.0 or later. moVirt The moVirt mobile Android app for Red Hat Virtualization is deprecated. OpenStack Networking (Neutron) Support for Red Hat OpenStack Networking (Neutron) as an external network provider is now deprecated, and was removed in Red Hat Virtualization 4.4.5. OpenStack block storage (Cinder) Support for Red Hat OpenStack block storage (Cinder) is now deprecated, and will be removed in a future release. instance types Support for instance types that can be used to define the hardware configuration of a virtual machine is now deprecated. This functionality will be removed in a future release. websocket proxy deployment on a remote host Support for third-party websocket proxy deployment is now deprecated, and will be removed in a future release. SSO for virtual machines Since the ovirt-guest-agent package was deprecated, Single Sign-On (SSO) is deprecated for virtual machines running Red Hat Enterprise Linux version 7 or earlier. SSO is not supported for virtual machines running Red Hat Enterprise Linux 8 or later, or for Windows operating systems. GlusterFS Storage GlusterFS Storage is deprecated, and will no longer be supported in future releases. ovirt-engine-extension-aaa-ldap and ovirt-engine-extension-aaa-jdbc The engine extensions ovirt-engine-extension-aaa-ldap and ovirt-engine-extension-aaa-jdbc have been deprecated. For new installations, use Red Hat Single Sign-On for authentication. For more information, see Installing and Configuring Red Hat Single Sign-On in the Administration Guide.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/release_notes/deprecated_features_rhv
probe::nfs.fop.open
probe::nfs.fop.open Name probe::nfs.fop.open - NFS client file open operation Synopsis nfs.fop.open Values flag file flag i_size file length in bytes dev device identifier file_name file name ino inode number
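The following one-liner is a minimal, illustrative way to watch this probe and print the values listed above; it is not part of the tapset reference, and it assumes SystemTap and the matching kernel debuginfo are installed. Run it as root:

    # Print a line for every NFS client file open, using the probe values above
    stap -e 'probe nfs.fop.open {
        printf("%s opened %s (ino %d, dev %d, size %d bytes, flags 0x%x)\n",
               execname(), file_name, ino, dev, i_size, flag)
    }'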
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-open
B.64. pam
B.64. pam B.64.1. RHSA-2010:0891 - Moderate: pam security update Updated pam packages that fix three security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. Pluggable Authentication Modules (PAM) provide a system whereby administrators can set up authentication policies without having to recompile programs that handle authentication. CVE-2010-3853 It was discovered that the pam_namespace module executed the external script namespace.init with an unchanged environment inherited from an application calling PAM. In cases where such an environment was untrusted (for example, when pam_namespace was configured for setuid applications such as su or sudo), a local, unprivileged user could possibly use this flaw to escalate their privileges. CVE-2010-3435 It was discovered that the pam_env and pam_mail modules used root privileges while accessing users' files. A local, unprivileged user could use this flaw to obtain information, from the lines that have the KEY=VALUE format expected by pam_env, from an arbitrary file. Also, in certain configurations, a local, unprivileged user using a service for which the pam_mail module was configured could use this flaw to obtain limited information about files or directories that they do not have access to. CVE-2010-3316 Note: As part of the fix for CVE-2010-3435 , this update changes the default value of pam_env's configuration option user_readenv to 0, causing the module to not read the user's ~/.pam_environment configuration file by default, as reading it may introduce unexpected changes to the environment of the service using PAM, or PAM modules consulted after pam_env. It was discovered that the pam_xauth module did not verify the return values of the setuid() and setgid() system calls. A local, unprivileged user could use this flaw to execute the xauth command with root privileges and make it read an arbitrary input file. Red Hat would like to thank Sebastian Krahmer of the SuSE Security Team for reporting the CVE-2010-3435 issue. All pam users should upgrade to these updated packages, which contain backported patches to correct these issues.
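As a hedged sketch (not part of the advisory text), the updated packages can be applied and verified with yum and rpm on an affected Red Hat Enterprise Linux 6 host; exact package versions depend on your subscribed channel:

    # Apply the pam erratum (run as root on a subscribed system)
    yum update pam
    # Confirm the installed package version and review the changelog entry for RHSA-2010:0891
    rpm -q pam
    rpm -q --changelog pam | head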
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/pam
2.2. Making Minimal Boot Media
2.2. Making Minimal Boot Media A piece of minimal boot media is a CD, DVD, or USB flash drive that contains the software to boot the system and launch the installation program, but which does not contain the software that must be transferred to the system to create a Red Hat Enterprise Linux installation. Use minimal boot media: to boot the system to install Red Hat Enterprise Linux over a network to boot the system to install Red Hat Enterprise Linux from a hard drive to use a kickstart file during installation (refer to Section 32.9.1, "Creating Kickstart Boot Media" to commence a network or hard-drive installation or to use an anaconda update or a kickstart file with a DVD installation. You can use minimal boot media to start the installation process on 32-bit x86 systems, AMD64 or Intel 64 systems, and Power Systems servers. The process by which you create minimal boot media for systems of these various types is identical except in the case of AMD64 and Intel 64 systems with UEFI firmware interfaces - refer to Section 2.2.2, "Minimal USB Boot Media for UEFI-based Systems" . To make minimal boot media for 32-bit x86 systems, BIOS-based AMD64 or Intel 64 systems, and Power Systems servers: Download the ISO image file named rhel- variant - version - architecture -boot.iso that is available at the same location as the images of the Red Hat Enterprise Linux 6.9 installation DVD - refer to Chapter 1, Obtaining Red Hat Enterprise Linux . Burn the .iso file to a blank CD or DVD using the same procedure detailed in Section 2.1, "Making an Installation DVD" for the installation disc. Alternatively, transfer the .iso file to a USB device with the dd command. As the .iso file is only around 200 MB in size, you do not need an especially large USB flash drive. 2.2.1. Minimal USB Boot Media for BIOS-based Systems Warning When you perform this procedure any data on the USB flash drive is destroyed with no warning. Make sure that you specify the correct USB flash drive, and make sure that this flash drive does not contain any data that you want to keep. Plug in your USB flash drive. Find the flash drive's device name. If the media has a volume name, use it to look up the device name in /dev/disk/by-label , or use the findfs command: If the media does not have a volume name or you do not know it, you can also use the dmesg command shortly after connecting the media to your computer. After running the command, the device name (such as sdb or sdc ) should appear in several lines towards the end of the output. Become root: Use the dd command to transfer the boot ISO image to the USB device: where path/image_name .iso is the boot ISO image file that you downloaded and device is the device name for the USB flash drive. Ensure you specify the device name (such as sdc ), not the partition name (such as sdc1 ). For example:
[ "findfs LABEL= MyLabel", "su -", "dd if= path/image_name .iso of=/dev/ device", "dd if=~/Downloads/RHEL6.9-Server-x86_64-boot.iso of=/dev/sdc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/making_minimal_boot_media
Chapter 2. Creating a filtered Microsoft Azure integration
Chapter 2. Creating a filtered Microsoft Azure integration Note If you created an unfiltered Azure integration, do not complete the following steps. Your Azure integration is already complete. If you are using RHEL metering, after you integrate your data with cost management, go to Adding RHEL metering to a Microsoft Azure integration to finish configuring your integration for RHEL metering. Azure is a third-party product and its processes can change. The instructions for configuring third-party integrations are correct at the time of publishing. For the most up-to-date information, see Microsoft Azure's documentation . To share a subset of your billing data with Red Hat, you can configure a function script in Microsoft Azure. This script filters your billing data and exports it to object storage so that cost management can then access and read the filtered data. Add your Microsoft Azure integration to cost management from the Integrations page . To create an Azure integration, you will complete the following tasks: Create a storage account and resource group. Configure Storage Account Contributor and Reader roles for access. Create a function to filter the data that you want to send to Red Hat. 2.1. Selecting a scope and configuring roles In the Add a cloud integration wizard, you must select a scope to determine the level at which your cost data is collected and exported. If your scope requires the Billing account reader role, you must manually configure it in the Azure portal. The scope that you select determines where role-based access control permissions are applied. The most common Scope selection in the wizard is Subscription . The following list maps which role you need for each Scope selection: Cost Management Reader Azure RBAC role: Subscription Resource group Management group Billing account reader Azure RBAC role: Billing account Billing profile Invoice section Enrollment account If your scope requires the Cost Management reader role, you can run the commands as they are in the documentation. If your scope requires the Billing account reader role, see Assign Azure roles in the Azure portal to learn how to manually configure the role in the Azure portal. 2.2. Adding a Microsoft Azure account Add your Microsoft Azure account as an integration so that cost management can process the cost and usage data. Prerequisites You must have a Red Hat user account with Cloud Administrator entitlements. You must have a service account . Your service account must have the correct roles assigned in Hybrid Cloud Console to enable cost management access. For more information, see the User Access Configuration Guide . In cost management: Click Settings Menu > Integrations . In the Cloud tab, click Add integration . In the Add a cloud integration wizard, select Microsoft Azure and click Next . Enter a name for your integration and click Next . In the Select application step, select Cost management and click Next . In the Specify cost export scope step, select I wish to manually customize the data set sent to Cost Management . If you are registering RHEL usage billing, select Include RHEL usage . Otherwise, proceed to the next step. Click Next . 2.3. Creating a Microsoft Azure resource group and storage account Create a storage account in Microsoft Azure to house your billing exports and a second storage account to house your filtered data. In your Microsoft Azure account : In the search bar, enter "storage" and click Storage accounts . On the Storage accounts page, click Create . 
In the Resource Group field, click Create new . Enter a name and click OK . In this example, use filtered-data-group . In the Instance details section, enter a name in the Storage account name field. For example, use filtereddata . Copy the names of the resource group and storage account so you can add them to Red Hat Hybrid Cloud Console later. Click Review . Review the storage account and click Create . In cost management: In the Add a cloud integration wizard, paste the resource group and storage account names that you copied into Resource group name and Storage account name . You will continue using the wizard in the following sections. 2.4. Creating a daily export in Microsoft Azure Next, set up an automatic export of your cost data to your Microsoft Azure storage account before you filter it for cost management. In your Microsoft Azure account : In the search bar, enter "cost exports" and click the result. Click Create . In Select a template , click Cost and usage (actual) to export your standard usage and purchase charges. Follow the steps in the Azure wizard: You can either create a new resource group and storage account or select existing ones. In this example, we use billingexportdata for the storage account and billinggroup for the resource group. You must set Format to CSV . Set Compression type to None or Gzip . Review the information and click Create . In cost management: Return to the Add a cloud integration wizard and complete the steps in Daily export . Click Next . You will continue using the wizard in the following sections. 2.5. Finding your Microsoft Azure subscription ID Find your subscription_id in the Microsoft Azure Cloud Shell and add it to the Add a cloud integration wizard in cost management. In your Microsoft Azure account : Click Cloud Shell . Enter the following command to get your Subscription ID: az account show --query "{subscription_id: id }" Copy the value that is generated for subscription_id . Example response { "subscription_id": 00000000-0000-0000-000000000000 } In cost management: In the Subscription ID field of the Add a cloud integration wizard, paste the value that you copied in the previous step. Click Next . You will continue using the wizard in the following sections. 2.6. Creating Microsoft Azure roles for Red Hat access To grant Red Hat access to your data, you must configure dedicated roles in Microsoft Azure. If you have an additional resource under the same Azure subscription, you might not need to create a new service account. In cost management: In the Roles section of the Add a cloud integration wizard, copy the az ad sp create-for-rbac command to create a service principal with the Cost Management Storage Account Contributor role. In your Microsoft Azure account : Click Cloud Shell . In the cloud shell prompt, paste the command that you copied. Copy the values that are generated for the client ID, secret, and tenant: Example response { "client_id": "00000000-0000-0000-000000000000", "secret": "00000000-0000-0000-000000000000", "tenant": "00000000-0000-0000-000000000000" } In cost management: Return to the Add a cloud integration wizard and paste the values that you copied into their corresponding fields on the Roles page. Click Next . Review your information and click Add to complete your integration. In the pop-up screen that appears, copy the Source UUID for your function script. 2.7. Creating a function in Microsoft Azure Creating a function in Azure filters your data and adds it to the storage account that you created to share with Red Hat. 
You can use the example Python script in this section to gather and share the filtered cost data from your export. Prerequisites You must have Visual Studio Code installed on your device. You must have the Microsoft Azure functions extension installed in Visual Studio Code. To create an Azure function, Microsoft recommends that you use their Microsoft Visual Studio Code IDE to develop and deploy code. For more information about configuring Visual Studio Code, see Quickstart: Create a function in Azure with Python using Visual Studio Code . In your Microsoft Azure account : Enter functions in the search bar and select Function App . Click Create . Select a hosting option for your function and click Select . On the Create Function App page, add your resource group. In the Instance Details section, name your function app. In Runtime stack , select Python . In Version , select the latest version. Click Review + create . Click Create . Wait for the resource to be created and then click Go to resource to view it. In Visual Studio Code: Click the Microsoft Azure tab and sign in to Azure. In the Workspaces drop-down, click Azure Functions , which appears as an icon with an orange lightning bolt. Click Create Function . Follow the prompts to set a local location and select a language and version for your function. In this example, select Python , Model 2 , and the latest version available. In the Select a template for your function dialog, select Timer trigger , name the function, and then press Enter. Set the cron expression to control when the function runs. In this example, use 0 9 * * * to run the function daily at 9 AM. Click Create . Click Open in the current window . In your requirements.txt file: After you create the function in your development environment, open the requirements.txt file, add the following requirements, and save the file: In __init__.py : Copy the Python script and paste it into __init__.py . Change the values in the section marked # Required vars to update to the values that correspond to your environment. The example script uses secrets from Azure Key Vault to configure your service account client_id and client_secret as environment variables. You can alternatively enter your credentials directly into the script, although this is not best practice. The default script has built-in options for filtering your data or for RHEL subscription filtering. You must uncomment the type of filtering you want to use or write your own custom filtering. Remove the comment from one of the following, not both: filtered_data = hcs_filtering(df) filtered_data = rhel_filtering(df) If you want to write more customized filtering, you must include the following required columns: Some of the columns differ depending on the report type. The example script normalizes these columns and all filtered reports must follow this example. To filter the data, you must add dataframe filtering. For example: Exact matching: df.loc[(df["publishertype"] == "Marketplace")] Filters out all data that does not have a publisherType of Marketplace. Contains: df.loc[df["publishername"].astype(str).str.contains("Red Hat")] Filters out all data that does not contain Red Hat in the publisherName . You can stack filters by using & (for AND) and | (for OR) with your df.loc clause. More useful filters: subscriptionid Filters specific subscriptions. resourcegroup Filters specific resource groups. resourcelocation Filters data in a specific region. You can use servicename , servicetier , metercategory , and metersubcategory to filter specific service types. 
After you build your custom query, update the custom query in the example script under # custom filtering basic example # . Save the file. In Visual Studio Code: Right-click the Function window and click Deploy to Function App . Select the function app that you created in the previous steps. 2.8. Setting up your credentials in Azure For help with steps in Azure, see Microsoft's documentation: Azure Key Vault . In your Microsoft Azure account : Navigate to Key Vaults . Click Create . Select the resource group that your function is in and follow the Azure wizard to create a new secret. In Key vault name , you can enter any name of your choosing. In Access policies , click Create new policy . Then, select Secret Management from the templates. In the Principal tab, search for and select your function as the principal. After you complete the wizard and click Create Vault , wait until Azure brings you to the successful deployment page. Then, click Go to resource to open the Key Vault page. In the Objects drop-down, select Secrets . You must create two new secrets for your service account: client_id and client_secret . Complete the following steps two times to create both: To create a secret, click Generate/import . In Create a secret , enter any name you want for your client_id or client_secret . Copy the value in Secret Identifier . You will use it later. You can also retrieve this value later. Repeat the previous three steps until you have a secret for both your client_id and client_secret . 2.9. Adding vault credentials to your function Next, go to your function in Microsoft Azure and enter information about your secrets. In your Microsoft Azure account : Navigate to your function. Select Settings Environment variables . Click Add . Use the following conventions and replace YOUR-CLIENT-ID-URI with the Secret Identifier value that you copied previously: Name: ClientIdFromVault Value: @Microsoft.KeyVault(SecretUri=YOUR-CLIENT-ID-URI) Click Save . Repeat the process for ClientSecretFromVault . Use the following conventions and replace YOUR-CLIENT-SECRET-URI with the Secret Identifier value that you copied previously: Value: @Microsoft.KeyVault(SecretUri=YOUR-CLIENT-SECRET-URI) 2.10. Configuring function roles in Microsoft Azure Configure dedicated credentials to grant your function blob access to Microsoft Azure cost data. These credentials enable your function to access, filter, and transfer the data from the original storage container to the filtered storage container. In your Microsoft Azure account : Enter functions in the search bar and select your function. In the Settings menu, click Identity . Complete the following set of steps twice , one time for each of the two storage accounts that you created in the section Creating a Microsoft Azure resource group and storage account: Click Azure role assignments . Click Add role assignment . In the Scope field, select Storage . In the Resource field, select one of your two storage accounts. Our examples used filtereddata and billingexportdata . In Role , select Storage Blob Data Contributor . Click Save . Click Add role assignment again. In the Scope field, select Storage . In the Resource field, select the same storage account again . This time, in Role , select Storage Queue Data Contributor . Click Save . Repeat this entire process for the other storage account that you created. After completing these steps, you have successfully set up your Azure integration. 2.11. Viewing your data You have now successfully created a filtered integration. 
To learn more about what you can do with your data, continue to the next steps for managing your costs .
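As an optional, hedged check that is not part of the documented procedure, you can confirm from the Azure CLI that the function wrote filtered output to the filtered storage account before looking for the data in cost management. The account and container names below follow the examples in this chapter and may differ in your environment:

    # List containers in the filtered storage account
    az storage container list --account-name filtereddata --auth-mode login --output table
    # List the blobs the function has written (replace the container name with yours)
    az storage blob list --account-name filtereddata --container-name <your-container> \
        --auth-mode login --output table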
[ "az account show --query \"{subscription_id: id }\"", "{ \"subscription_id\": 00000000-0000-0000-000000000000 }", "{ \"client_id\": \"00000000-0000-0000-000000000000\", \"secret\": \"00000000-0000-0000-000000000000\", \"tenant\": \"00000000-0000-0000-000000000000\" }", "azure-functions pandas requests azure-identity azure-storage-blob", "'additionalinfo', 'billingaccountid', 'billingaccountname', 'billingcurrencycode', 'billingperiodenddate', 'billingperiodstartdate', 'chargetype', 'consumedservice', 'costinbillingcurrency', 'date', 'effectiveprice', 'metercategory', 'meterid', 'metername', 'meterregion', 'metersubcategory', 'offerid', 'productname', 'publishername', 'publishertype', 'quantity', 'reservationid', 'reservationname', 'resourcegroup', 'resourceid', 'resourcelocation', 'resourcename', 'servicefamily', 'serviceinfo1', 'serviceinfo2', 'subscriptionid', 'tags', 'unitofmeasure', 'unitprice'", "column_translation = {\"billingcurrency\": \"billingcurrencycode\", \"currency\": \"billingcurrencycode\", \"instanceid\": \"resourceid\", \"instancename\": \"resourceid\", \"pretaxcost\": \"costinbillingcurrency\", \"product\": \"productname\", \"resourcegroupname\": \"resourcegroup\", \"subscriptionguid\": \"subscriptionid\", \"servicename\": \"metercategory\", \"usage_quantity\": \"quantity\"}" ]
https://docs.redhat.com/en/documentation/cost_management_service/1-latest/html/integrating_microsoft_azure_data_into_cost_management/assembly-adding-filtered-azure-int
2.2. HTTP Authentication
2.2. HTTP Authentication Any user with a Red Hat Virtualization account has access to the REST API. An API user submits a mandatory Red Hat Virtualization Manager user name and password with all requests to the API. Each request uses HTTP Basic Authentication [2] to encode these credentials. If a request does not include an appropriate Authorization header, the API sends a 401 Authorization Required response as a result: Example 2.1. Access to the REST API without appropriate credentials Requests are issued with an Authorization header for the specified realm. An API user encodes an appropriate Red Hat Virtualization Manager domain and user in the supplied credentials with the username@domain:password convention. The following table shows the process for encoding credentials in base64. Table 2.1. Encoding credentials for API access Item Value username rhevmadmin domain domain.example.com password 123456 unencoded credentials [email protected]:123456 base64 encoded credentials cmhldm1hZG1pbkBkb21haW4uZXhhbXBsZS5jb206MTIzNDU2 An API user provides the base64 encoded credentials as shown: Example 2.2. Access to the REST API with appropriate credentials Important Basic authentication involves potentially sensitive information, such as passwords, sent as plain text. The REST API requires Hypertext Transfer Protocol Secure (HTTPS) for transport-level encryption of plain-text requests. Important Some base64 libraries break the result into multiple lines and terminate each line with a newline character. This breaks the header and causes a faulty request. The Authorization header requires the encoded credentials on a single line within the header. [2] Basic Authentication is described in RFC 2617 HTTP Authentication: Basic and Digest Access Authentication .
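As an illustrative sketch (not part of the original example set), the encoded credentials can be produced with the base64 utility, and curl can perform the encoding automatically; the Manager host name and CA file below are placeholders, and the API base path ( [base] ) depends on your deployment:

    # Encode the credentials on a single line; -n avoids the trailing newline,
    # and GNU base64 -w0 prevents line wrapping
    echo -n '[email protected]:123456' | base64 -w0
    cmhldm1hZG1pbkBkb21haW4uZXhhbXBsZS5jb206MTIzNDU2
    # curl builds the Basic Authorization header itself when --user is supplied
    curl --cacert ca.crt --user '[email protected]:123456' \
         -H 'Accept: application/xml' https://manager.example.com/[base]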
[ "HEAD [base] HTTP/1.1 Host: [host] HTTP/1.1 401 Authorization Required", "HEAD [base] HTTP/1.1 Host: [host] Authorization: Basic cmhldm1hZG1pbkBibGFjay5xdW1yYW5ldC5jb206MTIzNDU2 HTTP/1.1 200 OK" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/http_authentication_for_the_rest_api
31.4. State Transfer Between Sites
31.4. State Transfer Between Sites When an offline master site is back online, it is necessary to synchronize its state with the latest data from the backup site. State transfer allows state to be transferred from one site to another, meaning the master site is synchronized and made consistent with the backup site. Similarly, when a backup site becomes available, state transfer can be utilized to make it consistent with the master site. Consider a scenario of two sites - Master site A and Backup site B. Clients can originally access only Master site A, whereas Backup site B acts as an invisible backup. Cross-site state transfer can be pushed bidirectionally. When the new backup site B goes online, in order to synchronize its state with the master site A, a state transfer can be initiated to push the state from the Master site A to the Backup site B. Similarly, when the Master site A is brought back online, in order to synchronize it with the Backup site B, a state transfer can be initiated to push the state from Backup site B to Master site A. These use cases apply to both Active-Passive and Active-Active State Transfer. The difference is that during Active-Active State Transfer we assume that cache operations can be performed in the site which consumes state. A system administrator or an authorized entity initiates the state transfer manually using JMX. The system administrator invokes the pushState(SiteName String) operation available in the XSiteAdminOperations MBean. The following interface shows the pushState(SiteName String) operation in JConsole: Figure 31.2. PushState Operation State transfer is also invoked using the Command Line Interface (CLI) by the site push sitename command. For example, when the master site is brought back online, the system administrator invokes the state transfer operation in the backup site, specifying the master site name that is to receive the state. The master site can be offline at the time of the push operation. On successful state transfer, the state data common to both the sites is overwritten on the master site. For example, if key A exists on the master site but not on the backup site, key A will not be deleted from the master site. Whereas, if key B exists on the backup as well as the master site, key B is overwritten on the master site. Note Updates on keys performed after initiating state transfer are not overwritten by incoming state transfer. Cross-site state transfer can be transactional and supports 1PC and 2PC transaction options. 1PC and 2PC options define whether data modified inside a transaction is backed up to a remote site in one or two phases. 2PC includes a prepare phase in which backup sites acknowledge that the transaction has been successfully prepared. Both options are supported. 31.4.1. Active-Passive State Transfer The active-passive state transfer is used when cross-site replication is used to back up the master site. The master site processes all the requests, but if it goes offline, the backup site starts to handle them. When the master site is back online, it receives the state from the backup site and starts to handle the client requests. In Active-Passive state transfer mode, transactional writes happen concurrently with state transfer on the site which sends the state. In active-passive state transfer mode, the client read-write requests occur only on the backup site. The master site acts as an invisible backup until the client requests are switched to it when the state transfer is completed. 
The active-passive state transfer mode is fully supported in cross-datacenter replication. When an Active-Passive State Transfer is interrupted by a network failure, the System Administrator invokes the JMX operation manually to resume the state transfer. To transfer the state, for example from Master site A to Backup site B, invoke the JMX operation on Master site A. Similarly, to transfer state from Backup site B to Master site A, invoke the JMX operation on the Backup site B. The JMX operation is invoked on the site from which the state is transferred to the other site that is online to synchronize the states. For example, there is a running backup site and the system administrator wants to bring the master site back online. To use active-passive state transfer, the system administrator will perform the following steps. Boot the Red Hat JBoss Data Grid cluster in the master site. Command the backup site to push state to the master site. Wait until the state transfer is complete. Make the clients aware that the master site is available to process the requests. 31.4.2. Active-Active State Transfer In active-active state transfer mode, the client requests occur concurrently in both the sites while the state transfer is in progress. The current implementation supports handling requests in the new site while the state transfer is in progress, which may break the data consistency. Warning Active-active state transfer mode is not fully supported, as it may lead to data inconsistencies. Note In active-active state transfer mode, both sites, the master and the backup, share the same role. There is no clear distinction between the master and backup sites in the active-active state transfer mode. For example, there is a running site and the system administrator wants to bring a new site online. To use active-active state transfer, the system administrator must perform the following steps. Boot the Red Hat JBoss Data Grid cluster in the new site. Command the running site to push state to the new site. Make the clients aware that the new site is available to process the requests. 31.4.3. State Transfer Configuration State transfer between sites cannot be enabled or disabled, but some of its parameters can be tuned. The only configuration is done by the system administrator while configuring the load balancer to switch the requests to the master site during or after the state transfer. The implementation handles the case in which a key is updated by a client before it receives the state, ignoring the incoming state for that key when it is delivered. The following are the default parameter values:
[ "<backups> <backup site=\"NYC\" strategy=\"SYNC\" failure-policy=\"FAIL\"> <state-transfer chunk-size=\"512\" timeout=\"1200000\" max-retries=\"30\" wait-time=\"2000\" /> </backup> </backups>" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-state_transfer_between_sites
19.6.2. Useful Websites
19.6.2. Useful Websites http://www.xinetd.org - The xinetd webpage. It contains a more detailed list of features and sample configuration files.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/controlling_access_to_services-additional_resources-useful_websites
Managing Content
Managing Content Red Hat Satellite 6.11 A guide to managing content from Red Hat and custom sources Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/making-open-source-more-inclusive
Chapter 3. Keeping Your System Up-to-Date
Chapter 3. Keeping Your System Up-to-Date This chapter describes the process of keeping your system up-to-date, which involves planning and configuring the way security updates are installed, applying changes introduced by newly updated packages, and using the Red Hat Customer Portal for keeping track of security advisories. 3.1. Maintaining Installed Software As security vulnerabilities are discovered, the affected software must be updated in order to limit any potential security risks. If the software is a part of a package within a Red Hat Enterprise Linux distribution that is currently supported, Red Hat is committed to releasing updated packages that fix the vulnerabilities as soon as possible. Often, announcements about a given security exploit are accompanied with a patch (or source code) that fixes the problem. This patch is then applied to the Red Hat Enterprise Linux package and tested and released as an erratum update. However, if an announcement does not include a patch, Red Hat developers first work with the maintainer of the software to fix the problem. Once the problem is fixed, the package is tested and released as an erratum update. If an erratum update is released for software used on your system, it is highly recommended that you update the affected packages as soon as possible to minimize the amount of time the system is potentially vulnerable. 3.1.1. Planning and Configuring Security Updates All software contains bugs. Often, these bugs can result in a vulnerability that can expose your system to malicious users. Packages that have not been updated are a common cause of computer intrusions. Implement a plan for installing security patches in a timely manner to quickly eliminate discovered vulnerabilities, so they cannot be exploited. Test security updates when they become available and schedule them for installation. Additional controls need to be used to protect the system during the time between the release of the update and its installation on the system. These controls depend on the exact vulnerability, but may include additional firewall rules, the use of external firewalls, or changes in software settings. Bugs in supported packages are fixed using the errata mechanism. An erratum consists of one or more RPM packages accompanied by a brief explanation of the problem that the particular erratum deals with. All errata are distributed to customers with active subscriptions through the Red Hat Subscription Management service. Errata that address security issues are called Red Hat Security Advisories . For more information on working with security errata, see Section 3.2.1, "Viewing Security Advisories on the Customer Portal" . For detailed information about the Red Hat Subscription Management service, including instructions on how to migrate from RHN Classic , see the documentation related to this service: Red Hat Subscription Management . 3.1.1.1. Using the Security Features of Yum The Yum package manager includes several security-related features that can be used to search, list, display, and install security errata. These features also make it possible to use Yum to install nothing but security updates. To check for security-related updates available for your system, enter the following command as root : Note that the above command runs in a non-interactive mode, so it can be used in scripts for automated checking whether there are any updates available. 
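Because the check runs non-interactively, it is easy to wrap in a script; the following is a minimal sketch (not taken from this guide) that relies on the exit values described in the next paragraph:

    #!/bin/bash
    # Check for pending security errata and optionally apply them (run as root)
    yum -q check-update --security
    case $? in
      100) echo "Security updates are available"
           yum updateinfo list security   # list the relevant advisories
           yum -y update --security ;;    # install only security-related updates
      0)   echo "No security updates are needed" ;;
      *)   echo "yum check failed" >&2; exit 1 ;;
    esac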
The command returns an exit value of 100 when there are any security updates available and 0 when there are not. On encountering an error, it returns 1 . Analogously, use the following command to install only security-related updates: Use the updateinfo subcommand to display or act upon information provided by repositories about available updates. The updateinfo subcommand itself accepts a number of commands, some of which pertain to security-related uses. See Table 3.1, "Security-related commands usable with yum updateinfo" for an overview of these commands. Table 3.1. Security-related commands usable with yum updateinfo Command Description advisory [ advisories ] Displays information about one or more advisories. Replace advisories with an advisory number or numbers. cves Displays the subset of information that pertains to CVE ( Common Vulnerabilities and Exposures ). security or sec Displays all security-related information. severity [ severity_level ] or sev [ severity_level ] Displays information about security-relevant packages of the supplied severity_level . 3.1.2. Updating and Installing Packages When updating software on a system, it is important to download the update from a trusted source. An attacker can easily rebuild a package with the same version number as the one that is supposed to fix the problem but with a different security exploit and release it on the Internet. If this happens, using security measures, such as verifying files against the original RPM , does not detect the exploit. Thus, it is very important to only download RPMs from trusted sources, such as from Red Hat, and to check the package signatures to verify their integrity. See the Yum chapter of the Red Hat Enterprise Linux 7 System Administrator's Guide for detailed information on how to use the Yum package manager. 3.1.2.1. Verifying Signed Packages All Red Hat Enterprise Linux packages are signed with the Red Hat GPG key. GPG stands for GNU Privacy Guard , or GnuPG , a free software package used for ensuring the authenticity of distributed files. If the verification of a package signature fails, the package may be altered and therefore cannot be trusted. The Yum package manager allows for an automatic verification of all packages it installs or upgrades. This feature is enabled by default. To configure this option on your system, make sure the gpgcheck configuration directive is set to 1 in the /etc/yum.conf configuration file. Use the following command to manually verify package files on your filesystem: rpmkeys --checksig package_file.rpm See the Product Signing (GPG) Keys article on the Red Hat Customer Portal for additional information about Red Hat package-signing practices. 3.1.2.2. Installing Signed Packages To install verified packages (see Section 3.1.2.1, "Verifying Signed Packages" for information on how to verify packages) from your filesystem, use the yum install command as the root user as follows: yum install package_file.rpm Use a shell glob to install several packages at once. For example, the following command installs all .rpm packages in the current directory: yum install *.rpm Important Before installing any security errata, be sure to read any special instructions contained in the erratum report and execute them accordingly. See Section 3.1.3, "Applying Changes Introduced by Installed Updates" for general instructions about applying changes made by errata updates. 3.1.3. 
Applying Changes Introduced by Installed Updates After downloading and installing security errata and updates, it is important to halt the usage of the old software and begin using the new software. How this is done depends on the type of software that has been updated. The following list itemizes the general categories of software and provides instructions for using updated versions after a package upgrade. Note In general, rebooting the system is the surest way to ensure that the latest version of a software package is used; however, this option is not always required, nor is it always available to the system administrator. Applications User-space applications are any programs that can be initiated by the user. Typically, such applications are used only when the user, a script, or an automated task utility launches them. Once such a user-space application is updated, halt any instances of the application on the system, and launch the program again to use the updated version. Kernel The kernel is the core software component for the Red Hat Enterprise Linux 7 operating system. It manages access to memory, the processor, and peripherals, and it schedules all tasks. Because of its central role, the kernel cannot be restarted without also rebooting the computer. Therefore, an updated version of the kernel cannot be used until the system is rebooted. KVM When the qemu-kvm and libvirt packages are updated, it is necessary to stop all guest virtual machines, reload relevant virtualization modules (or reboot the host system), and restart the virtual machines. Use the lsmod command to determine which modules from the following are loaded: kvm , kvm-intel , or kvm-amd . Then use the modprobe -r command to remove and subsequently the modprobe -a command to reload the affected modules. For example: Shared Libraries Shared libraries are units of code, such as glibc , that are used by a number of applications and services. Applications utilizing a shared library typically load the shared code when the application is initialized, so any applications using an updated library must be halted and relaunched. To determine which running applications link against a particular library, use the lsof command: lsof library For example, to determine which running applications link against the libwrap.so.0 library, type: This command returns a list of all the running programs that use TCP wrappers for host-access control. Therefore, any program listed must be halted and relaunched when the tcp_wrappers package is updated. systemd Services systemd services are persistent server programs usually launched during the boot process. Examples of systemd services include sshd or vsftpd . Because these programs usually persist in memory as long as a machine is running, each updated systemd service must be halted and relaunched after its package is upgraded. This can be done as the root user using the systemctl command: systemctl restart service_name Replace service_name with the name of the service you want to restart, such as sshd . Other Software Follow the instructions outlined by the resources linked below to correctly update the following applications. Red Hat Directory Server - See the Release Notes for the version of the Red Hat Directory Server in question at https://access.redhat.com/documentation/en-US/Red_Hat_Directory_Server/ . 
Red Hat Enterprise Virtualization Manager - See the Installation Guide for the version of the Red Hat Enterprise Virtualization in question at https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Virtualization/ .
[ "~]# yum check-update --security Loaded plugins: langpacks, product-id, subscription-manager rhel-7-workstation-rpms/x86_64 | 3.4 kB 00:00:00 No packages needed for security; 0 packages available", "~]# yum update --security", "~]# lsmod | grep kvm kvm_intel 143031 0 kvm 460181 1 kvm_intel ~]# modprobe -r kvm-intel ~]# modprobe -r kvm ~]# modprobe -a kvm kvm-intel", "~]# lsof /lib64/libwrap.so.0 COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME pulseaudi 12363 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-set 12365 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6 gnome-she 12454 test mem REG 253,0 42520 34121785 /usr/lib64/libwrap.so.0.7.6" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/security_guide/chap-keeping_your_system_up-to-date
Chapter 4. Configuring a Red Hat High Availability cluster on Microsoft Azure
Chapter 4. Configuring a Red Hat High Availability cluster on Microsoft Azure To create a cluster where RHEL nodes automatically redistribute their workloads if a node failure occurs, use the Red Hat High Availability Add-On. Such high availability (HA) clusters can also be hosted on public cloud platforms, including Microsoft Azure. Creating RHEL HA clusters on Azure is similar to creating HA clusters in non-cloud environments, with certain specifics. To configure a Red Hat HA cluster on Azure using Azure virtual machine (VM) instances as cluster nodes, see the following sections. The procedures in these sections assume that you are creating a custom image for Azure. You have a number of options for obtaining the RHEL 8 images you use for your cluster. See Red Hat Enterprise Linux Image Options on Azure for information on image options for Azure. The following sections provide: Prerequisite procedures for setting up your environment for Azure. After you set up your environment, you can create and configure Azure VM instances. Procedures specific to the creation of HA clusters, which transform individual nodes into a cluster of HA nodes on Azure. These include procedures for installing the High Availability packages and agents on each cluster node, configuring fencing, and installing Azure network resource agents. Prerequisites Sign up for a Red Hat Customer Portal account . Sign up for a Microsoft Azure account with administrator privileges. You need to install the Azure command-line interface (CLI). For more information, see Installing the Azure CLI . 4.1. The benefits of using high-availability clusters on public cloud platforms A high-availability (HA) cluster is a set of computers (called nodes ) that are linked together to run a specific workload. The purpose of HA clusters is to provide redundancy in case of a hardware or software failure. If a node in the HA cluster fails, the Pacemaker cluster resource manager distributes the workload to other nodes and no noticeable downtime occurs in the services that are running on the cluster. You can also run HA clusters on public cloud platforms. In this case, you would use virtual machine (VM) instances in the cloud as the individual cluster nodes. Using HA clusters on a public cloud platform has the following benefits: Improved availability: In case of a VM failure, the workload is quickly redistributed to other nodes, so running services are not disrupted. Scalability: Additional nodes can be started when demand is high and stopped when demand is low. Cost-effectiveness: With the pay-as-you-go pricing, you pay only for nodes that are running. Simplified management: Some public cloud platforms offer management interfaces to make configuring HA clusters easier. To enable HA on your Red Hat Enterprise Linux (RHEL) systems, Red Hat offers a High Availability Add-On. The High Availability Add-On provides all necessary components for creating HA clusters on RHEL systems. The components include high availability service management and cluster administration tools. Additional resources High Availability Add-On overview 4.2. Creating resources in Azure Complete the following procedure to create a region, resource group, storage account, virtual network, and availability set. You need these resources to set up a cluster on Microsoft Azure. Procedure Authenticate your system with Azure and log in. Note If a browser is available in your environment, the CLI opens your browser to the Azure sign-in page. 
Example: Create a resource group in an Azure region. Example: Create a storage account. Example: Get the storage account connection string. Example: Export the connection string by copying the connection string and pasting it into the following command. This string connects your system to the storage account. Example: Create the storage container. Example: Create a virtual network. All cluster nodes must be in the same virtual network. Example: Create an availability set. All cluster nodes must be in the same availability set. Example: Additional resources Sign in with Azure CLI SKU Types Azure Managed Disks Overview 4.3. Required system packages for High Availability The procedure assumes you are creating a VM image for Azure HA that uses Red Hat Enterprise Linux. To successfully complete the procedure, the following packages must be installed. Table 4.1. System packages Package Repository Description libvirt rhel-8-for-x86_64-appstream-rpms Open source API, daemon, and management tool for managing platform virtualization virt-install rhel-8-for-x86_64-appstream-rpms A command-line utility for building VMs libguestfs rhel-8-for-x86_64-appstream-rpms A library for accessing and modifying VM file systems libguestfs-tools rhel-8-for-x86_64-appstream-rpms System administration tools for VMs; includes the guestfish utility 4.4. Azure VM configuration settings Azure VMs must have the following configuration settings. Some of these settings are enabled during the initial VM creation. Other settings are set when provisioning the VM image for Azure. Keep these settings in mind as you move through the procedures. Refer to them as necessary. Table 4.2. VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your Azure VMs. dhcp The primary virtual adapter should be configured for dhcp (IPv4 only). Swap Space Do not create a dedicated swap file or swap partition. You can configure swap space with the Windows Azure Linux Agent (WALinuxAgent). NIC Choose virtio for the primary virtual network adapter. encryption For custom images, use Network Bound Disk Encryption (NBDE) for full disk encryption on Azure. 4.5. Installing Hyper-V device drivers Microsoft provides network and storage device drivers as part of their Linux Integration Services (LIS) for Hyper-V package. You may need to install Hyper-V device drivers on the VM image prior to provisioning it as an Azure virtual machine (VM). Use the lsinitrd | grep hv command to verify that the drivers are installed. Procedure Enter the following grep command to determine if the required Hyper-V device drivers are installed. In the example below, all required drivers are installed. If all the drivers are not installed, complete the remaining steps. Note An hv_vmbus driver may exist in the environment. Even if this driver is present, complete the following steps. Create a file named hv.conf in /etc/dracut.conf.d . Add the following driver parameters to the hv.conf file. Note Note the spaces before and after the quotes, for example, add_drivers+=" hv_vmbus " . This ensures that unique drivers are loaded in the event that other Hyper-V drivers already exist in the environment. Regenerate the initramfs image. Verification Reboot the machine. Run the lsinitrd | grep hv command to verify that the drivers are installed. 4.6. 
Making configuration changes required for a Microsoft Azure deployment Before you deploy your custom base image to Azure, you must perform additional configuration changes to ensure that the virtual machine (VM) can properly operate in Azure. Procedure Log in to the VM. Register the VM, and enable the Red Hat Enterprise Linux 8 repository. Ensure that the cloud-init and hyperv-daemons packages are installed. Create cloud-init configuration files that are needed for integration with Azure services: To enable logging to the Hyper-V Data Exchange Service (KVP), create the /etc/cloud/cloud.cfg.d/10-azure-kvp.cfg configuration file and add the following lines to that file. To add Azure as a datasource, create the /etc/cloud/cloud.cfg.d/91-azure_datasource.cfg configuration file, and add the following lines to that file. To ensure that specific kernel modules are blocked from loading automatically, edit or create the /etc/modprobe.d/blocklist.conf file and add the following lines to that file. Modify udev network device rules: Remove the following persistent network device rules if present. To ensure that Accelerated Networking on Azure works as intended, create a new network device rule /etc/udev/rules.d/68-azure-sriov-nm-unmanaged.rules and add the following line to it. Set the sshd service to start automatically. Modify kernel boot parameters: Open the /etc/default/grub file, and ensure the GRUB_TIMEOUT line has the following value. Remove the following options from the end of the GRUB_CMDLINE_LINUX line if present. Ensure the /etc/default/grub file contains the following lines with all the specified options. Note If you do not plan to run your workloads on HDDs, add elevator=none to the end of the GRUB_CMDLINE_LINUX line. This sets the I/O scheduler to none , which improves I/O performance when running workloads on SSDs. Regenerate the grub.cfg file. On a BIOS-based machine: On a UEFI-based machine: If your system uses a non-default location for grub.cfg , adjust the command accordingly. Configure the Windows Azure Linux Agent ( WALinuxAgent ): Install and enable the WALinuxAgent package. To ensure that a swap partition is not used in provisioned VMs, edit the following lines in the /etc/waagent.conf file. Prepare the VM for Azure provisioning: Unregister the VM from Red Hat Subscription Manager. Clean up the existing provisioning details. Note This command generates warnings, which are expected because Azure handles the provisioning of VMs automatically. Clean the shell history and shut down the VM. 4.7. Creating an Azure Active Directory application Complete the following procedure to create an Azure Active Directory (AD) application. The Azure AD application authorizes and automates access for HA operations for all nodes in the cluster. Prerequisites The Azure Command Line Interface (CLI) is installed on your system. You are an Administrator or Owner for the Microsoft Azure subscription. You need this authorization to create an Azure AD application. Procedure On any node in the HA cluster, log in to your Azure account. Create a json configuration file for a custom role for the Azure fence agent. Use the following configuration, but replace <subscription-id> with your subscription IDs. 
{ "Name": "Linux Fence Agent Role", "description": "Allows to power-off and start virtual machines", "assignableScopes": [ "/subscriptions/ <subscription-id> " ], "actions": [ "Microsoft.Compute/*/read", "Microsoft.Compute/virtualMachines/powerOff/action", "Microsoft.Compute/virtualMachines/start/action" ], "notActions": [], "dataActions": [], "notDataActions": [] } Define the custom role for the Azure fence agent. Use the json file created in the step to do this. In the Azure web console interface, select Virtual Machine Click Identity in the left-side menu. Select On Click Save click Yes to confirm. Click Azure role assignments Add role assignment . Select the Scope required for the role, for example Resource Group . Select the required Resource Group . Optional: Change the Subscription if necessary. Select the Linux Fence Agent Role role. Click Save . Verification Display nodes visible to Azure AD. If this command outputs all nodes on your cluster, the AD application has been configured successfully. Additional resources View the access a user has to Azure resources Create a custom role for the fence agent Assign Azure roles by using Azure CLI 4.8. Converting the image to a fixed VHD format All Microsoft Azure VM images must be in a fixed VHD format. The image must be aligned on a 1 MB boundary before it is converted to VHD. To convert the image from qcow2 to a fixed VHD format and align the image, see the following procedure. Once you have converted the image, you can upload it to Azure. Procedure Convert the image from qcow2 to raw format. Create a shell script with the following content. Run the script. This example uses the name align.sh . If the message "Your image is already aligned. You do not need to resize." displays, proceed to the following step. If a value displays, your image is not aligned. Use the following command to convert the file to a fixed VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. If the raw image is not aligned, complete the following steps to align it. Resize the raw file by using the rounded value displayed when you ran the verification script. Convert the raw image file to a VHD format. The sample uses qemu-img version 2.12.0. Once converted, the VHD file is ready to upload to Azure. 4.9. Uploading and creating an Azure image Complete the following steps to upload the VHD file to your container and create an Azure custom image. Note The exported storage connection string does not persist after a system reboot. If any of the commands in the following steps fail, export the connection string again. Procedure Upload the VHD file to the storage container. It may take several minutes. To get a list of storage containers, enter the az storage container list command. Example: Get the URL for the uploaded VHD file to use in the following step. Example: Create the Azure custom image. Note The default hypervisor generation of the VM is V1. You can optionally specify a V2 hypervisor generation by including the option --hyper-v-generation V2 . Generation 2 VMs use a UEFI-based boot architecture. See Support for generation 2 VMs on Azure for information about generation 2 VMs. The command may return the error "Only blobs formatted as VHDs can be imported." This error may mean that the image was not aligned to the nearest 1 MB boundary before it was converted to VHD . Example: 4.10. Installing Red Hat HA packages and agents Complete the following steps on all nodes. 
Procedure Launch an SSH terminal session and connect to the VM by using the administrator name and public IP address. USD ssh administrator@PublicIP To get the public IP address for an Azure VM, open the VM properties in the Azure Portal or enter the following Azure CLI command. USD az vm list -g <resource-group> -d --output table Example: Register the VM with Red Hat. Note If the --auto-attach command fails, manually register the VM to your subscription. Disable all repositories. Enable the RHEL 8 Server HA repositories. Update all packages. Install the Red Hat High Availability Add-On software packages, along with the Azure fencing agent from the High Availability channel. The user hacluster was created during the pcs and pacemaker installation in the previous step. Create a password for hacluster on all cluster nodes. Use the same password for all nodes. Add the high availability service to the RHEL Firewall if firewalld.service is installed. Start the pcs service and enable it to start on boot. Verification Ensure the pcs service is running. # systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 46235 (pcsd) CGroup: /system.slice/pcsd.service └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null & 4.11. Creating a cluster Complete the following steps to create the cluster of nodes. Procedure On one of the nodes, enter the following command to authenticate the pcs user hacluster . In the command, specify the name of each node in the cluster. Example: Create the cluster. Example: Verification Enable the cluster. Start the cluster. 4.12. Fencing overview If communication with a single node in the cluster fails, then other nodes in the cluster must be able to restrict or release access to resources that the failed cluster node may have access to. This cannot be accomplished by contacting the cluster node itself as the cluster node may not be responsive. Instead, you must provide an external method, which is called fencing with a fence agent. A node that is unresponsive may still be accessing data. The only way to be certain that your data is safe is to fence the node by using STONITH. STONITH is an acronym for "Shoot The Other Node In The Head," and it protects your data from being corrupted by rogue nodes or concurrent access. Using STONITH, you can be certain that a node is truly offline before allowing the data to be accessed from another node. Additional resources Fencing in Red Hat High Availability Cluster (Red Hat Knowledgebase) 4.13. Creating a fencing device Complete the following steps to configure fencing. Complete these commands from any node in the cluster. Prerequisites You need to set the cluster property stonith-enabled to true . Procedure Identify the Azure node name for each RHEL VM. You use the Azure node names to configure the fence device. # fence_azure_arm \ -l <AD-Application-ID> -p <AD-Password> \ --resourceGroup <MyResourceGroup> --tenantId <Tenant-ID> \ --subscriptionId <Subscription-ID> -o list Example: [root@node01 clouduser]# fence_azure_arm \ -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list node01, node02, node03, View the options for the Azure ARM STONITH agent.
Example: Warning For fence agents that provide a method option, do not specify a value of cycle as it is not supported and can cause data corruption. Some fence devices can fence only a single node, while other devices can fence multiple nodes. The parameters you specify when you create a fencing device depend on what your fencing device supports and requires. You can use the pcmk_host_list parameter when creating a fencing device to specify all of the machines that are controlled by that fencing device. You can use the pcmk_host_map parameter when creating a fencing device to map host names to the specifications that the fence device understands. Create a fence device. To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Disabling ACPI for use with integrated fence device . Verification Test the fencing agent for one of the other nodes. Example: Start the node that was fenced in the previous step. Check the status to verify the node started. Example: Additional resources Fencing in a Red Hat High Availability Cluster (Red Hat Knowledgebase) General properties of fencing devices 4.14. Creating an Azure internal load balancer The Azure internal load balancer removes cluster nodes that do not answer health probe requests. Perform the following procedure to create an Azure internal load balancer. Each step references a specific Microsoft procedure and includes the settings for customizing the load balancer for HA. Prerequisites Access to the Azure control panel Procedure Create a Basic load balancer . Select Internal load balancer , the Basic SKU , and Dynamic for the type of IP address assignment. Create a back-end address pool . Associate the backend pool with the availability set created earlier while creating the Azure resources for HA. Do not set any target network IP configurations. Create a health probe . For the health probe, select TCP and enter port 61000 . You can use any TCP port number that does not interfere with another service. For certain HA product applications (for example, SAP HANA and SQL Server), you may need to work with Microsoft to identify the correct port to use. Create a load balancer rule . To create the load balancing rule, the default values are prepopulated. Ensure that you set Floating IP (direct server return) to Enabled . 4.15. Configuring the load balancer resource agent After you have created the health probe, you must configure the load balancer resource agent. This resource agent runs a service that answers health probe requests from the Azure load balancer and removes cluster nodes that do not answer requests. Procedure Install the nmap-ncat and resource-agents packages on all nodes. Perform the following steps on a single node. Create the pcs resources and group. Use your load balancer FrontendIP for the IPaddr2 address. Configure the load balancer resource agent (a combined illustrative sketch of these resources appears at the end of this chapter). Verification Run pcs status to see the results. Example output: 4.16. Configuring shared block storage To configure shared block storage for a Red Hat High Availability cluster with Microsoft Azure Shared Disks, use the following procedure. Note that this procedure is optional, and the steps below assume three Azure VMs (a three-node cluster) with a 1 TB shared disk. Note This is a stand-alone sample procedure for configuring block storage. The procedure assumes that you have not yet created your cluster. Prerequisites You must have installed the Azure CLI on your host system and created your SSH key(s).
You must have created your cluster environment in Azure, which includes creating the following resources. Links are to the Microsoft Azure documentation. Resource group Virtual network Network security group(s) Network security group rules Subnet(s) Load balancer (optional) Storage account Proximity placement group Availability set Procedure Create a shared block volume by using the Azure command az disk create . For example, the following command creates a shared block volume named shared-block-volume.vhd in the resource group sharedblock-rg in the westcentralus Azure region. Verify that you have created the shared block volume by using the Azure command az disk show . For example, the following command shows details for the shared block volume shared-block-volume.vhd within the resource group sharedblock-rg . Create three network interfaces by using the Azure command az network nic create . Run the following command three times by using a different <nic_name> for each. For example, the following command creates a network interface with the name sharedblock-nodea-vm-nic-protected . Create three VMs and attach the shared block volume by using the Azure command az vm create . Option values are the same for each VM except that each VM has its own <vm_name> , <new_vm_disk_name> , and <nic_name> . For example, the following command creates a VM named sharedblock-nodea-vm . Verification For each VM in your cluster, verify that the block device is available by using the ssh command with your VM's IP address. # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T '" For example, the following command lists details including the host name and block device for the VM IP 198.51.100.3 . # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T '" nodea sdb 8:16 0 1T 0 disk Use the ssh command to verify that each VM in your cluster uses the same shared disk (a looped version of these checks is sketched at the end of this chapter). # ssh <ip_address> "hostname ; lsblk -d | grep ' 1T ' | awk '{print \USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='" For example, the following command lists details including the host name and shared disk volume ID for the instance IP address 198.51.100.3 . # ssh 198.51.100.3 "hostname ; lsblk -d | grep ' 1T ' | awk '{print \USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='" nodea E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89 After you have verified that the shared disk is attached to each VM, you can configure resilient storage for the cluster. Additional resources Configuring a GFS2 file system in a cluster Configuring GFS2 file systems 4.17. Additional resources Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members Configuring and Managing High Availability Clusters
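To supplement section 4.15, the following is a minimal sketch of the load balancer resource configuration described there. It reuses the resource and group names from the example pcs status output ( vip_azure , lb_azure , g_azure ), the sample FrontendIP 10.0.0.7 , and the health probe port 61000 ; treat all of these values as placeholders for your own environment.
# Run on a single cluster node. The group keeps the floating IP and the
# azure-lb health-probe answerer together on the same node.
pcs resource create vip_azure IPaddr2 ip="10.0.0.7" --group g_azure
pcs resource create lb_azure azure-lb port=61000 --group g_azure
# Both resources should appear as Started on the same node.
pcs status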
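Similarly, the per-node verification in section 4.16 can be wrapped in a small shell loop. This is only an illustrative sketch: the addresses in NODES are hypothetical stand-ins for your VM IP addresses, and the commands are the same lsblk and udevadm checks shown above. Every node should report the same ID_SERIAL value for the shared disk.
# Replace the addresses with the IP addresses of your own cluster VMs.
NODES="198.51.100.3 198.51.100.4 198.51.100.5"
for ip in $NODES; do
    # Print the host name, the 1 TB shared block device, and its disk serial.
    ssh "$ip" "hostname ; lsblk -d | grep ' 1T '"
    ssh "$ip" "lsblk -d | grep ' 1T ' | awk '{print \$1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='"
done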
[ "az login", "[clouduser@localhost]USD az login To sign in, use a web browser to open the page https://aka.ms/devicelogin and enter the code FDMSCMETZ to authenticate. [ { \"cloudName\": \"AzureCloud\", \"id\": \" Subscription ID \", \"isDefault\": true, \"name\": \" MySubscriptionName \", \"state\": \"Enabled\", \"tenantId\": \" Tenant ID \", \"user\": { \"name\": \" [email protected] \", \"type\": \"user\" } } ]", "az group create --name resource-group --location azure-region", "[clouduser@localhost]USD az group create --name azrhelclirsgrp --location southcentralus { \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp\", \"location\": \"southcentralus\", \"managedBy\": null, \"name\": \"azrhelclirsgrp\", \"properties\": { \"provisioningState\": \"Succeeded\" }, \"tags\": null }", "az storage account create -l azure-region -n storage-account-name -g resource-group --sku sku_type --kind StorageV2", "[clouduser@localhost]USD az storage account create -l southcentralus -n azrhelclistact -g azrhelclirsgrp --sku Standard_LRS --kind StorageV2 { \"accessTier\": null, \"creationTime\": \"2017-04-05T19:10:29.855470+00:00\", \"customDomain\": null, \"encryption\": null, \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Storage/storageAccounts/azrhelclistact\", \"kind\": \"StorageV2\", \"lastGeoFailoverTime\": null, \"location\": \"southcentralus\", \"name\": \"azrhelclistact\", \"primaryEndpoints\": { \"blob\": \"https://azrhelclistact.blob.core.windows.net/\", \"file\": \"https://azrhelclistact.file.core.windows.net/\", \"queue\": \"https://azrhelclistact.queue.core.windows.net/\", \"table\": \"https://azrhelclistact.table.core.windows.net/\" }, \"primaryLocation\": \"southcentralus\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"secondaryEndpoints\": null, \"secondaryLocation\": null, \"sku\": { \"name\": \"Standard_LRS\", \"tier\": \"Standard\" }, \"statusOfPrimary\": \"available\", \"statusOfSecondary\": null, \"tags\": {}, \"type\": \"Microsoft.Storage/storageAccounts\" }", "az storage account show-connection-string -n storage-account-name -g resource-group", "[clouduser@localhost]USD az storage account show-connection-string -n azrhelclistact -g azrhelclirsgrp { \"connectionString\": \"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\" }", "export AZURE_STORAGE_CONNECTION_STRING=\" storage-connection-string \"", "[clouduser@localhost]USD export AZURE_STORAGE_CONNECTION_STRING=\"DefaultEndpointsProtocol=https;EndpointSuffix=core.windows.net;AccountName=azrhelclistact;AccountKey=NreGk...==\"", "az storage container create -n container-name", "[clouduser@localhost]USD az storage container create -n azrhelclistcont { \"created\": true }", "az network vnet create -g resource group --name vnet-name --subnet-name subnet-name", "[clouduser@localhost]USD az network vnet create --resource-group azrhelclirsgrp --name azrhelclivnet1 --subnet-name azrhelclisubnet1 { \"newVNet\": { \"addressSpace\": { \"addressPrefixes\": [ \"10.0.0.0/16\" ] }, \"dhcpOptions\": { \"dnsServers\": [] }, \"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1\", \"location\": \"southcentralus\", \"name\": \"azrhelclivnet1\", \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceGuid\": \"0f25efee-e2a6-4abe-a4e9-817061ee1e79\", \"subnets\": [ { \"addressPrefix\": \"10.0.0.0/24\", 
\"etag\": \"W/\\\"\\\"\", \"id\": \"/subscriptions//resourceGroups/azrhelclirsgrp/providers/Microsoft.Network/virtualNetworks/azrhelclivnet1/subnets/azrhelclisubnet1\", \"ipConfigurations\": null, \"name\": \"azrhelclisubnet1\", \"networkSecurityGroup\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"azrhelclirsgrp\", \"resourceNavigationLinks\": null, \"routeTable\": null } ], \"tags\": {}, \"type\": \"Microsoft.Network/virtualNetworks\", \"virtualNetworkPeerings\": null } }", "az vm availability-set create --name MyAvailabilitySet --resource-group MyResourceGroup", "[clouduser@localhost]USD az vm availability-set create --name rhelha-avset1 --resource-group azrhelclirsgrp { \"additionalProperties\": {}, \"id\": \"/subscriptions/.../resourceGroups/azrhelclirsgrp/providers/Microsoft.Compute/availabilitySets/rhelha-avset1\", \"location\": \"southcentralus\", \"name\": \"rhelha-avset1\", \"platformFaultDomainCount\": 2, \"platformUpdateDomainCount\": 5, [omitted]", "lsinitrd | grep hv", "lsinitrd | grep hv drwxr-xr-x 2 root root 0 Aug 12 14:21 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv -rw-r--r-- 1 root root 31272 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/hv/hv_vmbus.ko.xz -rw-r--r-- 1 root root 25132 Aug 11 08:46 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/net/hyperv/hv_netvsc.ko.xz -rw-r--r-- 1 root root 9796 Aug 11 08:45 usr/lib/modules/3.10.0-932.el8.x86_64/kernel/drivers/scsi/hv_storvsc.ko.xz", "add_drivers+=\" hv_vmbus \" add_drivers+=\" hv_netvsc \" add_drivers+=\" hv_storvsc \" add_drivers+=\" nvme \"", "dracut -f -v --regenerate-all", "subscription-manager register --auto-attach Installed Product Current Status: Product Name: Red Hat Enterprise Linux for x86_64 Status: Subscribed", "yum install cloud-init hyperv-daemons -y", "reporting: logging: type: log telemetry: type: hyperv", "datasource_list: [ Azure ] datasource: Azure: apply_network_config: False", "blacklist nouveau blacklist lbm-nouveau blacklist floppy blacklist amdgpu blacklist skx_edac blacklist intel_cstate", "rm -f /etc/udev/rules.d/70-persistent-net.rules rm -f /etc/udev/rules.d/75-persistent-net-generator.rules rm -f /etc/udev/rules.d/80-net-name-slot-rules", "SUBSYSTEM==\"net\", DRIVERS==\"hv_pci\", ACTION==\"add\", ENV{NM_UNMANAGED}=\"1\"", "systemctl enable sshd systemctl is-enabled sshd", "GRUB_TIMEOUT=10", "rhgb quiet", "GRUB_CMDLINE_LINUX=\"loglevel=3 crashkernel=auto console=tty1 console=ttyS0 earlyprintk=ttyS0 rootdelay=300\" GRUB_TIMEOUT_STYLE=countdown GRUB_TERMINAL=\"serial console\" GRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg", "yum install WALinuxAgent -y systemctl enable waagent", "Provisioning.DeleteRootPassword=y ResourceDisk.Format=n ResourceDisk.EnableSwap=n", "subscription-manager unregister", "waagent -force -deprovision", "export HISTSIZE=0 poweroff", "az login", "{ \"Name\": \"Linux Fence Agent Role\", \"description\": \"Allows to power-off and start virtual machines\", \"assignableScopes\": [ \"/subscriptions/ <subscription-id> \" ], \"actions\": [ \"Microsoft.Compute/*/read\", \"Microsoft.Compute/virtualMachines/powerOff/action\", \"Microsoft.Compute/virtualMachines/start/action\" ], \"notActions\": [], \"dataActions\": [], \"notDataActions\": [] }", "az role definition create --role-definition azure-fence-role.json { \"assignableScopes\": [ \"/subscriptions/ <my-subscription-id> \" ], 
\"description\": \"Allows to power-off and start virtual machines\", \"id\": \"/subscriptions/ <my-subscription-id> /providers/Microsoft.Authorization/roleDefinitions/ <role-id> \", \"name\": \" <role-id> \", \"permissions\": [ { \"actions\": [ \"Microsoft.Compute/*/read\", \"Microsoft.Compute/virtualMachines/powerOff/action\", \"Microsoft.Compute/virtualMachines/start/action\" ], \"dataActions\": [], \"notActions\": [], \"notDataActions\": [] } ], \"roleName\": \"Linux Fence Agent Role\", \"roleType\": \"CustomRole\", \"type\": \"Microsoft.Authorization/roleDefinitions\" }", "fence_azure_arm --msi -o list node1, node2, [...]", "qemu-img convert -f qcow2 -O raw <image-name> .qcow2 <image-name> .raw", "#!/bin/bash MB=USD((1024 * 1024)) size=USD(qemu-img info -f raw --output json \"USD1\" | gawk 'match(USD0, /\"virtual-size\": ([0-9]+),/, val) {print val[1]}') rounded_size=USD(((USDsize/USDMB + 1) * USDMB)) if [ USD((USDsize % USDMB)) -eq 0 ] then echo \"Your image is already aligned. You do not need to resize.\" exit 1 fi echo \"rounded size = USDrounded_size\" export rounded_size", "sh align.sh <image-xxx> .raw", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "qemu-img resize -f raw <image-xxx> .raw <rounded-value>", "qemu-img convert -f raw -o subformat=fixed,force_size -O vpc <image-xxx> .raw <image.xxx> .vhd", "az storage blob upload --account-name <storage-account-name> --container-name <container-name> --type page --file <path-to-vhd> --name <image-name>.vhd", "[clouduser@localhost]USD az storage blob upload --account-name azrhelclistact --container-name azrhelclistcont --type page --file rhel-image-{ProductNumber}.vhd --name rhel-image-{ProductNumber}.vhd Percent complete: %100.0", "az storage blob url -c <container-name> -n <image-name>.vhd", "az storage blob url -c azrhelclistcont -n rhel-image-8.vhd \"https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd\"", "az image create -n <image-name> -g <resource-group> -l <azure-region> --source <URL> --os-type linux", "az image create -n rhel8 -g azrhelclirsgrp2 -l southcentralus --source https://azrhelclistact.blob.core.windows.net/azrhelclistcont/rhel-image-8.vhd --os-type linux", "ssh administrator@PublicIP", "az vm list -g <resource-group> -d --output table", "[clouduser@localhost ~] USD az vm list -g azrhelclirsgrp -d --output table Name ResourceGroup PowerState PublicIps Location ------ ---------------------- -------------- ------------- -------------- node01 azrhelclirsgrp VM running 192.98.152.251 southcentralus", "sudo -i subscription-manager register --auto-attach", "subscription-manager repos --disable= *", "subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms", "yum update -y", "yum install pcs pacemaker fence-agents-azure-arm", "passwd hacluster", "firewall-cmd --permanent --add-service=high-availability firewall-cmd --reload", "systemctl start pcsd.service systemctl enable pcsd.service Created symlink from /etc/systemd/system/multi-user.target.wants/pcsd.service to /usr/lib/systemd/system/pcsd.service.", "systemctl status pcsd.service pcsd.service - PCS GUI and remote configuration interface Loaded: loaded (/usr/lib/systemd/system/pcsd.service; enabled; vendor preset: disabled) Active: active (running) since Fri 2018-02-23 11:00:58 EST; 1min 23s ago Docs: man:pcsd(8) man:pcs(8) Main PID: 46235 (pcsd) CGroup: /system.slice/pcsd.service └─46235 /usr/bin/ruby /usr/lib/pcsd/pcsd > /dev/null &", "pcs host auth <hostname1> <hostname2> 
<hostname3>", "pcs host auth node01 node02 node03 Username: hacluster Password: node01: Authorized node02: Authorized node03: Authorized", "pcs cluster setup <cluster_name> <hostname1> <hostname2> <hostname3>", "pcs cluster setup new_cluster node01 node02 node03 [...] Synchronizing pcsd certificates on nodes node01, node02, node03 node02: Success node03: Success node01: Success Restarting pcsd on the nodes in order to reload the certificates node02: Success node03: Success node01: Success", "pcs cluster enable --all node02: Cluster Enabled node03: Cluster Enabled node01: Cluster Enabled", "pcs cluster start --all node02: Starting Cluster node03: Starting Cluster node01: Starting Cluster", "fence_azure_arm -l <AD-Application-ID> -p <AD-Password> --resourceGroup <MyResourceGroup> --tenantId <Tenant-ID> --subscriptionId <Subscription-ID> -o list", "fence_azure_arm -l e04a6a49-9f00-xxxx-xxxx-a8bdda4af447 -p z/a05AwCN0IzAjVwXXXXXXXEWIoeVp0xg7QT//JE= --resourceGroup azrhelclirsgrp --tenantId 77ecefb6-cff0-XXXX-XXXX-757XXXX9485 --subscriptionId XXXXXXXX-38b4-4527-XXXX-012d49dfc02c -o list node01, node02, node03,", "pcs stonith describe fence_azure_arm", "pcs stonith describe fence_apc Stonith options: password: Authentication key password_script: Script to run to retrieve password", "pcs stonith create clusterfence fence_azure_arm", "pcs stonith fence azurenodename", "pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:44:35 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node03 ] OFFLINE: [ node02 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "pcs cluster start <hostname>", "pcs status", "pcs status Cluster name: newcluster Stack: corosync Current DC: node01 (version 1.1.18-11.el7-2b07d5c5a9) - partition with quorum Last updated: Fri Feb 23 11:34:59 2018 Last change: Fri Feb 23 11:21:01 2018 by root via cibadmin on node01 3 nodes configured 1 resource configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "yum install nmap-ncat resource-agents", "pcs resource create resource-name IPaddr2 ip=\"10.0.0.7\" --group cluster-resources-group", "pcs resource create resource-loadbalancer-name azure-lb port= port-number --group cluster-resources-group", "pcs status", "Cluster name: clusterfence01 Stack: corosync Current DC: node02 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum Last updated: Tue Jan 30 12:42:35 2018 Last change: Tue Jan 30 12:26:42 2018 by root via cibadmin on node01 3 nodes configured 3 resources configured Online: [ node01 node02 node03 ] Full list of resources: clusterfence (stonith:fence_azure_arm): Started node01 Resource Group: g_azure vip_azure (ocf::heartbeat:IPaddr2): Started node02 lb_azure (ocf::heartbeat:azure-lb): Started node02 Daemon Status: corosync: active/disabled pacemaker: active/disabled pcsd: active/enabled", "az disk create -g <resource_group> -n <shared_block_volume_name> --size-gb <disk_size> --max-shares <number_vms> -l <location>", "az disk create -g sharedblock-rg -n shared-block-volume.vhd --size-gb 1024 --max-shares 3 -l westcentralus { \"creationData\": { \"createOption\": 
\"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, \"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az disk show -g <resource_group> -n <shared_block_volume_name>", "az disk show -g sharedblock-rg -n shared-block-volume.vhd { \"creationData\": { \"createOption\": \"Empty\", \"galleryImageReference\": null, \"imageReference\": null, \"sourceResourceId\": null, \"sourceUniqueId\": null, \"sourceUri\": null, \"storageAccountId\": null, \"uploadSizeBytes\": null }, \"diskAccessId\": null, \"diskIopsReadOnly\": null, \"diskIopsReadWrite\": 5000, \"diskMbpsReadOnly\": null, \"diskMbpsReadWrite\": 200, \"diskSizeBytes\": 1099511627776, \"diskSizeGb\": 1024, \"diskState\": \"Unattached\", \"encryption\": { \"diskEncryptionSetId\": null, \"type\": \"EncryptionAtRestWithPlatformKey\" }, \"encryptionSettingsCollection\": null, \"hyperVgeneration\": \"V1\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/disks/shared-block-volume.vhd\", \"location\": \"westcentralus\", \"managedBy\": null, \"managedByExtended\": null, \"maxShares\": 3, \"name\": \"shared-block-volume.vhd\", \"networkAccessPolicy\": \"AllowAll\", \"osType\": null, \"provisioningState\": \"Succeeded\", \"resourceGroup\": \"sharedblock-rg\", \"shareInfo\": null, \"sku\": { \"name\": \"Premium_LRS\", \"tier\": \"Premium\" }, \"tags\": {}, \"timeCreated\": \"2020-08-27T15:36:56.263382+00:00\", \"type\": \"Microsoft.Compute/disks\", \"uniqueId\": \"cd8b0a25-6fbe-4779-9312-8d9cbb89b6f2\", \"zones\": null }", "az network nic create -g <resource_group> -n <nic_name> --subnet <subnet_name> --vnet-name <virtual_network> --location <location> --network-security-group <network_security_group> --private-ip-address-version IPv4", "az network nic create -g sharedblock-rg -n sharedblock-nodea-vm-nic-protected --subnet sharedblock-subnet-protected --vnet-name sharedblock-vn --location westcentralus --network-security-group sharedblock-nsg --private-ip-address-version IPv4", "az vm create -n <vm_name> -g <resource_group> --attach-data-disks <shared_block_volume_name> --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name <new-vm-disk-name> --os-disk-size-gb <disk_size> --location <location> --size <virtual_machine_size> --image <image_name> --admin-username <vm_username> --authentication-type ssh --ssh-key-values <ssh_key> 
--nics <nic_name> --availability-set <availability_set> --ppg <proximity_placement_group>", "az vm create -n sharedblock-nodea-vm -g sharedblock-rg --attach-data-disks shared-block-volume.vhd --data-disk-caching None --os-disk-caching ReadWrite --os-disk-name sharedblock-nodea-vm.vhd --os-disk-size-gb 64 --location westcentralus --size Standard_D2s_v3 --image /subscriptions/12345678910-12345678910/resourceGroups/sample-azureimagesgroupwestcentralus/providers/Microsoft.Compute/images/sample-azure-rhel-8.3.0-20200713.n.0.x86_64 --admin-username sharedblock-user --authentication-type ssh --ssh-key-values @sharedblock-key.pub --nics sharedblock-nodea-vm-nic-protected --availability-set sharedblock-as --ppg sharedblock-ppg { \"fqdns\": \"\", \"id\": \"/subscriptions/12345678910-12345678910/resourceGroups/sharedblock-rg/providers/Microsoft.Compute/virtualMachines/sharedblock-nodea-vm\", \"location\": \"westcentralus\", \"macAddress\": \"00-22-48-5D-EE-FB\", \"powerState\": \"VM running\", \"privateIpAddress\": \"198.51.100.3\", \"publicIpAddress\": \"\", \"resourceGroup\": \"sharedblock-rg\", \"zones\": \"\" }", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T '\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T '\" nodea sdb 8:16 0 1T 0 disk", "ssh <ip_address> \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\"", "ssh 198.51.100.3 \"hostname ; lsblk -d | grep ' 1T ' | awk '{print \\USD1}' | xargs -i udevadm info --query=all --name=/dev/{} | grep '^E: ID_SERIAL='\" nodea E: ID_SERIAL=3600224808dd8eb102f6ffc5822c41d89" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_microsoft_azure/configuring-rhel-high-availability-on-azure_cloud-content-azure
25.4. Using the New Configuration Format
25.4. Using the New Configuration Format In rsyslog version 7, available for Red Hat Enterprise Linux 6 in the rsyslog7 package, a new configuration syntax is introduced. This new configuration format aims to be more powerful, more intuitive, and to prevent common mistakes by not permitting certain invalid constructs. The syntax enhancement is enabled by the new configuration processor that relies on RainerScript. The legacy format is still fully supported and it is used by default in the /etc/rsyslog.conf configuration file. To install rsyslog 7, see Section 25.1.1, "Upgrading to rsyslog version 7" . RainerScript is a scripting language designed for processing network events and configuring event processors such as rsyslog . The version of RainerScript in rsyslog version 5 is used to define expression-based filters, see Example 25.3, "Expression-based Filters" . The version of RainerScript in rsyslog version 7 implements the input() and ruleset() statements, which permit the /etc/rsyslog.conf configuration file to be written in the new syntax. The new syntax differs mainly in that it is much more structured; parameters are passed as arguments to statements, such as input, action, template, and module load. The scope of options is limited by blocks. This enhances readability and reduces the number of bugs caused by misconfiguration. There is also a significant performance gain. Some functionality is exposed in both syntaxes, some only in the new one. Compare the configuration written with legacy-style parameters: and the same configuration with the use of the new format statement: This significantly reduces the number of parameters used in configuration, improves readability, and also provides higher execution speed. For more information on RainerScript statements and parameters see the section called "Online Documentation" . 25.4.1. Rulesets Leaving special directives aside, rsyslog handles messages as defined by rules that consist of a filter condition and an action to be performed if the condition is true. With a traditionally written /etc/rsyslog.conf file, all rules are evaluated in order of appearance for every input message. This process starts with the first rule and continues until all rules have been processed or until the message is discarded by one of the rules. However, rules can be grouped into sequences called rulesets . With rulesets, you can limit the effect of certain rules only to selected inputs or enhance the performance of rsyslog by defining a distinct set of actions bound to a specific input. In other words, filter conditions that will be inevitably evaluated as false for certain types of messages can be skipped. The legacy ruleset definition in /etc/rsyslog.conf can look as follows: The rule ends when another rule is defined, or the default ruleset is called as follows: With the new configuration format in rsyslog 7, the input() and ruleset() statements are reserved for this operation. The new format ruleset definition in /etc/rsyslog.conf can look as follows: Replace rulesetname with an identifier for your ruleset. The ruleset name cannot start with RSYSLOG_ since this namespace is reserved for use by rsyslog . RSYSLOG_DefaultRuleset then defines the default set of rules to be performed if the message has no other ruleset assigned. With rule and rule2 you can define rules in filter-action format mentioned above. With the call parameter, you can nest rulesets by calling them from inside other ruleset blocks. 
After creating a ruleset, you need to specify what input it will apply to: input(type=" input_type " port=" port_num " ruleset=" rulesetname "); Here you can identify an input message by input_type , which is an input module that gathered the message, or by port_num , the port number. Other parameters such as file or tag can be specified for input() . Replace rulesetname with the name of the ruleset to be evaluated against the message. In case an input message is not explicitly bound to a ruleset, the default ruleset is triggered. You can also use the legacy format to define rulesets; for more information, see the section called "Online Documentation" . Example 25.11. Using rulesets The following rulesets ensure different handling of remote messages coming from different ports. Add the following into /etc/rsyslog.conf : The rulesets shown in the above example define log destinations for the remote input from two ports; in the case of port 601 , messages are sorted according to the facility. Then, the TCP input is enabled and bound to the rulesets. Note that you must load the required modules (imtcp) for this configuration to work.
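As a further illustration of the new syntax, and not part of Example 25.11, the following sketch shows a nested ruleset: an additional TCP input on the hypothetical port 10514 is bound to its own ruleset, which stores mail messages separately and then uses the call statement to pass every message on to a shared ruleset. The ruleset names, port, and file paths are illustrative values only; adjust them to your environment.
module(load="imtcp")

# Shared ruleset: every message passed to it is written to one file.
ruleset(name="store-remote") {
    action(type="omfile" file="/var/log/remote-all")
}

# Ruleset for the additional input: write out mail messages, then call the shared ruleset.
ruleset(name="remote-10514") {
    mail.* action(type="omfile" file="/var/log/remote-10514-mail")
    call store-remote
}

input(type="imtcp" port="10514" ruleset="remote-10514")
If imtcp is already loaded elsewhere in your configuration, omit the module() line to avoid loading the module twice.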
[ "USDInputFileName /tmp/inputfile USDInputFileTag tag1: USDInputFileStateFile inputfile-state USDInputRunFileMonitor", "input(type=\"imfile\" file=\"/tmp/inputfile\" tag=\"tag1:\" statefile=\"inputfile-state\")", "USDRuleSet rulesetname rule rule2", "USDRuleSet RSYSLOG_DefaultRuleset", "ruleset(name=\" rulesetname \") { rule rule2 call rulesetname2 ... }", "ruleset(name=\"remote-6514\") { action(type=\"omfile\" file=\"/var/log/remote-6514\") } ruleset(name=\"remote-601\") { cron.* action(type=\"omfile\" file=\"/var/log/remote-601-cron\") mail.* action(type=\"omfile\" file=\"/var/log/remote-601-mail\") } input(type=\"imtcp\" port=\"6514\" ruleset=\"remote-6514\"); input(type=\"imtcp\" port=\"601\" ruleset=\"remote-601\");" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-using_the_new_configuration_format
Managing, monitoring, and updating the kernel
Managing, monitoring, and updating the kernel Red Hat Enterprise Linux 9 A guide to managing the Linux kernel on Red Hat Enterprise Linux 9 Red Hat Customer Content Services
[ "dnf repoquery <package_name>", "dnf repoquery -l <package_name>", "dnf install kernel- {version}", "dnf update kernel", "grubby --set-default USDkernel_path", "grubby --info ALL | grep id grubby --set-default /boot/vmlinuz-<version>.<architecture>", "grub2-reboot <index|title|id>", "grubby --info=ALL | grep title", "title=\"Red Hat Enterprise Linux (5.14.0-1.el9.x86_64) 9.0 (Plow)\" title=\"Red Hat Enterprise Linux (0-rescue-0d772916a9724907a5d1350bcd39ac92) 9.0 (Plow)\"", "lsmod Module Size Used by fuse 126976 3 uinput 20480 1 xt_CHECKSUM 16384 1 ipt_MASQUERADE 16384 1 xt_conntrack 16384 1 ipt_REJECT 16384 1 nft_counter 16384 16 nf_nat_tftp 16384 0 nf_conntrack_tftp 16384 1 nf_nat_tftp tun 49152 1 bridge 192512 0 stp 16384 1 bridge llc 16384 2 bridge,stp nf_tables_set 32768 5 nft_fib_inet 16384 1 ...", "modinfo < KERNEL_MODULE_NAME >", "modinfo virtio_net filename: /lib/modules/5.14.0-1.el9.x86_64/kernel/drivers/net/virtio_net.ko.xz license: GPL description: Virtio network driver rhelversion: 9.0 srcversion: 8809CDDBE7202A1B00B9F1C alias: virtio:d00000001v* depends: net_failover retpoline: Y intree: Y name: virtio_net vermagic: 5.14.0-1.el9.x86_64 SMP mod_unload modversions ... parm: napi_weight:int parm: csum:bool parm: gso:bool parm: napi_tx:bool", "modprobe < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "lsmod | grep serio_raw serio_raw 16384 0", "lsmod", "modprobe -r < MODULE_NAME >", "lsmod | grep < MODULE_NAME >", "lsmod | grep serio_raw", "echo < MODULE_NAME > > /etc/modules-load.d/< MODULE_NAME >.conf", "lsmod | grep < MODULE_NAME >", "lsmod Module Size Used by tls 131072 0 uinput 20480 1 snd_seq_dummy 16384 0 snd_hrtimer 16384 1 ...", "ls /lib/modules/4.18.0-477.20.1.el8_8.x86_64/kernel/crypto/ ansi_cprng.ko.xz chacha20poly1305.ko.xz md4.ko.xz serpent_generic.ko.xz anubis.ko.xz cmac.ko.xz...", "touch /etc/modprobe.d/denylist.conf", "Prevents <KERNEL-MODULE-1> from being loaded blacklist <MODULE-NAME-1> install <MODULE-NAME-1> /bin/false Prevents <KERNEL-MODULE-2> from being loaded blacklist <MODULE-NAME-2> install <MODULE-NAME-2> /bin/false ...", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).bak.USD(date +%m-%d-%H%M%S).img", "cp /boot/initramfs- <VERSION> .img /boot/initramfs- <VERSION> .img.bak.USD(date +%m-%d-%H%M%S)", "dracut -f -v", "dracut -f -v /boot/initramfs- <TARGET-VERSION> .img <CORRESPONDING-TARGET-KERNEL-VERSION>", "reboot", "dnf install kernel-devel-USD(uname -r) gcc elfutils-libelf-devel", "#include <linux/module.h> #include <linux/kernel.h> int init_module(void) { printk(\"Hello World\\n This is a test\\n\"); return 0; } void cleanup_module(void) { printk(\"Good Bye World\"); } MODULE_LICENSE(\"GPL\");", "obj-m := test.o", "make -C /lib/modules/USD(uname -r)/build M=/root/testmodule modules make: Entering directory '/usr/src/kernels/5.14.0-70.17.1.el9_0.x86_64' CC [M] /root/testmodule/test.o MODPOST /root/testmodule/Module.symvers CC [M] /root/testmodule/test.mod.o LD [M] /root/testmodule/test.ko BTF [M] /root/testmodule/test.ko Skipping BTF generation for /root/testmodule/test.ko due to unavailability of vmlinux make: Leaving directory '/usr/src/kernels/5.14.0-70.17.1.el9_0.x86_64'", "ls -l /root/testmodule/ total 152 -rw-r- r--. 1 root root 16 Jul 26 08:19 Makefile -rw-r- r--. 1 root root 25 Jul 26 08:20 modules.order -rw-r- r--. 1 root root 0 Jul 26 08:20 Module.symvers -rw-r- r--. 1 root root 224 Jul 26 08:18 test.c -rw-r- r--. 1 root root 62176 Jul 26 08:20 test.ko -rw-r- r--. 1 root root 25 Jul 26 08:20 test.mod -rw-r- r--. 
1 root root 849 Jul 26 08:20 test.mod.c -rw-r- r--. 1 root root 50936 Jul 26 08:20 test.mod.o -rw-r- r--. 1 root root 12912 Jul 26 08:20 test.o", "cp /root/testmodule/test.ko /lib/modules/USD(uname -r)/", "depmod -a", "modprobe -v test insmod /lib/modules/ 5.14.0-1.el9.x86_64 /test.ko", "lsmod | grep test test 16384 0", "dmesg [ 74422.545004 ] Hello World This is a test", "d8712ab6d4f14683c5625e87b52b6b6e-5.14.0-1.el9.x86_64.conf", "title Red Hat Enterprise Linux (5.14.0-1.el9.x86_64) 9.0 (Plow) version 5.14.0-1.el9.x86_64 linux /vmlinuz-5.14.0-1.el9.x86_64 initrd /initramfs-5.14.0-1.el9.x86_64.img options root=/dev/mapper/rhel_kvm--02--guest08-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel_kvm--02--guest08-swap rd.lvm.lv=rhel_kvm-02-guest08/root rd.lvm.lv=rhel_kvm-02-guest08/swap console=ttyS0,115200 grub_users USDgrub_users grub_arg --unrestricted grub_class kernel", "grubby --update-kernel=ALL --args=\"< NEW_PARAMETER >\"", "zipl", "grubby --update-kernel=ALL --remove-args=\"< PARAMETER_TO_REMOVE >\"", "zipl", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"< NEW_PARAMETER >\"", "zipl", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --remove-args=\"< PARAMETER_TO_REMOVE >\"", "zipl", "linux (USDroot)/vmlinuz-5.14.0-63.el9.x86_64 root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet emergency", "GRUB_TERMINAL=\"serial\" GRUB_SERIAL_COMMAND=\"serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1\"", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/grub2/grub.cfg", "GRUB_CMDLINE_LINUX=\"crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap\"", "grubby --update-kernel < PATH_TO_KERNEL > --args \"< NEW_ARGUMENTS >\"", "grubby --update-kernel /boot/vmlinuz-5.14.0-362.8.1.el9_3.x86_64 --args \"noapic\"", "grub2-mkconfig -o /boot/grub2/grub.cfg --update-bls-cmdline Generating grub configuration file ... Adding boot menu entry for UEFI Firmware Settings ... done", "BOOT_IMAGE=(hd0,gpt2)/vmlinuz-4.18.0-425.3.1.el8.x86_64 root=/dev/mapper/RHELCSB-Root ro vconsole.keymap=us crashkernel=auto rd.lvm.lv=RHELCSB/Root rd.luks.uuid=luks-d8a28c4c-96aa-4319-be26-96896272151d rhgb quiet noapic rd.luks.key=d8a28c4c-96aa-4319-be26-96896272151d=/keyfile:UUID=c47d962e-4be8-41d6-8216-8cf7a0d3b911 ipv6.disable=1", "sysctl -a", "sysctl <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE>", "sysctl -a", "sysctl -w <TUNABLE_CLASS>.<PARAMETER>=<TARGET_VALUE> >> /etc/sysctl.conf", "vim /etc/sysctl.d/< some_file.conf >", "< TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE > < TUNABLE_CLASS >.< PARAMETER >=< TARGET_VALUE >", "sysctl -p /etc/sysctl.d/< some_file.conf >", "ls -l /proc/sys/< TUNABLE_CLASS >/", "echo < TARGET_VALUE > > /proc/sys/< TUNABLE_CLASS >/< PARAMETER >", "cat /proc/sys/< TUNABLE_CLASS >/< PARAMETER >", "--- - name: Configuring kernel settings hosts: managed-node-01.example.com tasks: - name: Configure hugepages, packet size for loopback device, and limits on simultaneously open files. 
ansible.builtin.include_role: name: rhel-system-roles.kernel_settings vars: kernel_settings_sysctl: - name: fs.file-max value: 400000 - name: kernel.threads-max value: 65536 kernel_settings_sysfs: - name: /sys/class/net/lo/mtu value: 65000 kernel_settings_transparent_hugepages: madvise kernel_settings_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'sysctl fs.file-max kernel.threads-max net.ipv6.conf.lo.mtu' ansible managed-node-01.example.com -m command -a 'cat /sys/kernel/mm/transparent_hugepage/enabled'", "--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update existing boot loader entries ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - kernel: path: /boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64 options: - name: quiet state: present bootloader_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'grubby --info=ALL' managed-node-01.example.com | CHANGED | rc=0 >> index=1 kernel=\"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\" args=\"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet \" root=\"/dev/mapper/rhel-root\" initrd=\"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\" title=\"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" id=\"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\"", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configuration and management of GRUB boot loader hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Set the bootloader password ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_password: \"{{ pwd }}\" bootloader_reboot_ok: true", "ansible-playbook --syntax-check --ask-vault-pass ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "--- - name: Configuration and management of the GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Update the boot loader timeout ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_timeout: 10", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.reboot managed-node-01.example.com | CHANGED => { \"changed\": true, \"elapsed\": 21, \"rebooted\": true }", "ansible managed-node-01.example.com -m ansible.builtin.command -a \"grep 'timeout' /boot/grub2/grub.cfg\" managed-node-01.example.com | CHANGED | rc=0 >> if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 Fallback normal timeout code in case the timeout_style feature is set timeout=10 if [ xUSDfeature_timeout_style = xy ] ; then set timeout_style=menu set timeout=10 set orig_timeout_style=USD{timeout_style} set orig_timeout=USD{timeout} # timeout_style=menu + timeout=0 avoids the countdown code keypress check set timeout_style=menu set timeout=10 set timeout_style=hidden set timeout=10 if [ xUSDfeature_timeout_style = xy ]; then if [ \"USD{menu_show_once_timeout}\" ]; then set timeout_style=menu set timeout=10 unset menu_show_once_timeout save_env menu_show_once_timeout", "--- - name: Configuration and management of 
GRUB boot loader hosts: managed-node-01.example.com tasks: - name: Gather information about the boot loader configuration ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_gather_facts: true - name: Display the collected boot loader configuration information debug: var: bootloader_facts", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "\"bootloader_facts\": [ { \"args\": \"ro crashkernel=1G-4G:256M,4G-64G:320M,64G-:576M rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap USDtuned_params quiet\", \"default\": true, \"id\": \"2c9ec787230141a9b087f774955795ab-5.14.0-362.24.1.el9_3.aarch64\", \"index\": \"1\", \"initrd\": \"/boot/initramfs-5.14.0-362.24.1.el9_3.aarch64.img USDtuned_initrd\", \"kernel\": \"/boot/vmlinuz-5.14.0-362.24.1.el9_3.aarch64\", \"root\": \"/dev/mapper/rhel-root\", \"title\": \"Red Hat Enterprise Linux (5.14.0-362.24.1.el9_3.aarch64) 9.4 (Plow)\" } ]", "uname -r 5.14.0-1.el9.x86_64", "dnf search USD(uname -r)", "dnf install \"kpatch-patch = USD(uname -r)\"", "kpatch list Loaded patch modules: kpatch_5_14_0_1_0_1 [enabled] Installed patch modules: kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64) ...", "rpm -qa | grep kpatch kpatch-dnf-0.4-3.el9.noarch kpatch-0.9.7-2.el9.noarch kpatch-patch-5_14_0-284_25_1-0-0.el9_2.x86_64", "dnf list installed | grep kernel Updating Subscription Management repositories. Installed Packages kernel-core.x86_64 5.14.0-1.el9 @beaker-BaseOS kernel-core.x86_64 5.14.0-2.el9 @@commandline uname -r 5.14.0-2.el9.x86_64", "dnf install kpatch-dnf", "dnf kpatch auto Updating Subscription Management repositories. Last metadata expiration check: 1:38:21 ago on Fri 17 Sep 2021 07:29:53 AM EDT. Dependencies resolved. ================================================== Package Architecture ================================================== Installing: kpatch-patch-5_14_0-1 x86_64 kpatch-patch-5_14_0-2 x86_64 Transaction Summary =================================================== Install 2 Packages ...", "kpatch list Loaded patch modules: kpatch_5_14_0_2_0_1 [enabled] Installed patch modules: kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64) kpatch_5_14_0_2_0_1 (5.14.0-2.el9.x86_64)", "rpm -qa | grep kpatch kpatch-dnf-0.4-3.el9.noarch kpatch-0.9.7-2.el9.noarch kpatch-patch-5_14_0-284_25_1-0-0.el9_2.x86_64", "dnf list installed | grep kernel Updating Subscription Management repositories. Installed Packages kernel-core.x86_64 5.14.0-1.el9 @beaker-BaseOS kernel-core.x86_64 5.14.0-2.el9 @@commandline uname -r 5.14.0-2.el9.x86_64", "dnf kpatch manual Updating Subscription Management repositories.", "yum kpatch status Updating Subscription Management repositories. Last metadata expiration check: 0:30:41 ago on Tue Jun 14 15:59:26 2022. 
Kpatch update setting: manual", "dnf update \"kpatch-patch = USD(uname -r)\"", "dnf update \"kpatch-patch \"", "dnf list installed | grep kpatch-patch kpatch-patch-5_14_0-1.x86_64 0-1.el9 @@commandline ...", "dnf remove kpatch-patch-5_14_0-1.x86_64", "dnf list installed | grep kpatch-patch", "kpatch list Loaded patch modules:", "kpatch list Loaded patch modules: kpatch_5_14_0_1_0_1 [enabled] Installed patch modules: kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64) ...", "kpatch uninstall kpatch_5_14_0_1_0_1 uninstalling kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64)", "kpatch list Loaded patch modules: kpatch_5_14_0_1_0_1 [enabled] Installed patch modules: < NO_RESULT >", "kpatch list Loaded patch modules: ...", "systemctl is-enabled kpatch.service enabled", "systemctl disable kpatch.service Removed /etc/systemd/system/multi-user.target.wants/kpatch.service.", "kpatch list Loaded patch modules: kpatch_5_14_0_1_0_1 [enabled] Installed patch modules: kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64)", "systemctl status kpatch.service ● kpatch.service - \"Apply kpatch kernel patches\" Loaded: loaded (/usr/lib/systemd/system/kpatch.service; disabled; vendor preset: disabled) Active: inactive (dead)", "kpatch list Loaded patch modules: Installed patch modules: kpatch_5_14_0_1_0_1 (5.14.0-1.el9.x86_64)", "sysctl kernel.printk kernel.printk = 7 4 1 7", "grub2-install /dev/sda", "reboot", "yum reinstall grub2-efi shim", "reboot", "bootlist -m normal -o sda1", "grub2-install partition", "reboot", "rm /etc/grub.d/ * rm /etc/sysconfig/grub", "yum reinstall grub2-tools", "yum reinstall grub2-efi shim grub2-tools", "grub2-mkconfig -o /boot/grub2/grub.cfg", "grub2-mkconfig -o /boot/grub2/grub.cfg", "rpm -q kexec-tools", "kexec-tools-2.0.22-13.el9.x86_64", "package kexec-tools is not installed", "dnf install kexec-tools", "makedumpfile --mem-usage /proc/kcore TYPE PAGES EXCLUDABLE DESCRIPTION ------------------------------------------------------------- ZERO 501635 yes Pages filled with zero CACHE 51657 yes Cache pages CACHE_PRIVATE 5442 yes Cache pages + private USER 16301 yes User process pages FREE 77738211 yes Free pages KERN_DATA 1333192 no Dumpable kernel data", "crashkernel=1G-4G:192M,4G-64G:256M,64G:512M", "kdumpctl reset-crashkernel --kernel=ALL", "crashkernel=192M", "crashkernel=1G-4G:192M,2G-64G:256M", "crashkernel=192M@16M", "grubby --update-kernel ALL --args \"crashkernel= <custom-value> \"", "reboot", "echo c > /proc/sysrq-trigger", "path /var/crash", "kdump_post <path_to_kdump_post.sh>", "*grep -v ^# /etc/kdump.conf | grep -v ^USD* ext4 /dev/mapper/vg00-varcrashvol path /var/crash core_collector makedumpfile -c --message-level 1 -d 31", "path /usr/local/cores", "ext4 UUID=03138356-5e61-4ab3-b58e-27507ac41937", "raw /dev/sdb1", "nfs penguin.example.com:/export/cores", "sudo systemctl restart kdump.service", "ssh [email protected] sshkey /root/.ssh/mykey", "core_collector makedumpfile -l --message-level 1 -d 31", "core_collector makedumpfile -l --message-level 1 -d 31", "core_collector makedumpfile -c -d 31 --message-level 1", "core_collector makedumpfile -p -d 31 --message-level 1", "failure_action poweroff", "KDUMP_COMMANDLINE_REMOVE=\"hugepages hugepagesz slub_debug quiet log_buf_len swiotlb\"", "KDUMP_COMMANDLINE_APPEND=\"cgroup_disable=memory\"", "kdumpctl restart", "kdumpctl status kdump:Kdump is operational", "echo c > /proc/sysrq-trigger", "systemctl enable kdump.service", "systemctl start kdump.service", "systemctl stop kdump.service", "systemctl disable kdump.service", "lsmod Module Size Used by fuse 
126976 3 xt_CHECKSUM 16384 1 ipt_MASQUERADE 16384 1 uinput 20480 1 xt_conntrack 16384 1", "KDUMP_COMMANDLINE_APPEND=\"rd.driver.blacklist= hv_vmbus,hv_storvsc,hv_utils,hv_netvsc,hid-hyperv \"", "KDUMP_COMMANDLINE_APPEND=\"modprobe.blacklist= emcp modprobe.blacklist= bnx2fc modprobe.blacklist= libfcoe modprobe.blacklist= fcoe \"", "systemctl restart kdump", "*kdumpctl estimate* Encrypted kdump target requires extra memory, assuming using the keyslot with minimum memory requirement Reserved crashkernel: 256M Recommended crashkernel: 652M Kernel image size: 47M Kernel modules size: 8M Initramfs size: 20M Runtime reservation: 64M LUKS required size: 512M Large modules: <none> WARNING: Current crashkernel size is lower than recommended size 652M.", "sudo grubby --update-kernel ALL --args crashkernel=512M", "grubby --update-kernel=ALL --args=\"crashkernel=xxM\"", "systemctl enable --now kdump.service", "systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)", "ls -a /boot/vmlinuz- * /boot/vmlinuz-0-rescue-2930657cd0dc43c2b75db480e5e5b4a9 /boot/vmlinuz-4.18.0-330.el8.x86_64 /boot/vmlinuz-4.18.0-330.rt7.111.el8.x86_64", "grubby --update-kernel= vmlinuz-4.18.0-330.el8.x86_64 --args=\"crashkernel= xxM \"", "systemctl enable --now kdump.service", "systemctl status kdump.service ○ kdump.service - Crash recovery kernel arming Loaded: loaded (/usr/lib/systemd/system/kdump.service; enabled; vendor preset: disabled) Active: active (live)", "systemctl stop kdump.service", "systemctl disable kdump.service", "uname -m", "kdumpctl restart", "kdumpctl restart", "error: ../../grub-core/kern/mm.c:376:out of memory. Press any key to continue...", "kdumpctl reset-crashkernel --fadump=on --kernel=ALL", "grubby --update-kernel ALL --args=\"fadump=on crashkernel=xxM\"", "reboot", "kernel.panic=0 kernel.unknown_nmi_panic=1", "failure_action shell", "subscription-manager repos --enable baseos repository", "subscription-manager repos --enable appstream repository", "subscription-manager repos --enable rhel-9-for-x86_64-baseos-debug-rpms", "dnf install crash", "dnf install kernel-debuginfo", "crash /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore", "WARNING: kernel relocated [202MB]: patching 90160 gdb minimal_symbol values KERNEL: /usr/lib/debug/lib/modules/5.14.0-1.el9.x86_64/vmlinux DUMPFILE: /var/crash/127.0.0.1-2021-09-13-14:05:33/vmcore [PARTIAL DUMP] CPUS: 2 DATE: Mon Sep 13 14:05:16 2021 UPTIME: 01:03:57 LOAD AVERAGE: 0.00, 0.00, 0.00 TASKS: 586 NODENAME: localhost.localdomain RELEASE: 5.14.0-1.el9.x86_64 VERSION: #1 SMP Wed Aug 29 11:51:55 UTC 2018 MACHINE: x86_64 (2904 Mhz) MEMORY: 2.9 GB PANIC: \"sysrq: SysRq : Trigger a crash\" PID: 10635 COMMAND: \"bash\" TASK: ffff8d6c84271800 [THREAD_INFO: ffff8d6c84271800] CPU: 1 STATE: TASK_RUNNING (SYSRQ) crash>", "crash> exit ~]#", "crash> log ... 
several lines omitted EIP: 0060:[<c068124f>] EFLAGS: 00010096 CPU: 2 EIP is at sysrq_handle_crash+0xf/0x20 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 ESI: c0a09ca0 EDI: 00000286 EBP: 00000000 ESP: ef4dbf24 DS: 007b ES: 007b FS: 00d8 GS: 00e0 SS: 0068 Process bash (pid: 5591, ti=ef4da000 task=f196d560 task.ti=ef4da000) Stack: c068146b c0960891 c0968653 00000003 00000000 00000002 efade5c0 c06814d0 <0> fffffffb c068150f b7776000 f2600c40 c0569ec4 ef4dbf9c 00000002 b7776000 <0> efade5c0 00000002 b7776000 c0569e60 c051de50 ef4dbf9c f196d560 ef4dbfb4 Call Trace: [<c068146b>] ? __handle_sysrq+0xfb/0x160 [<c06814d0>] ? write_sysrq_trigger+0x0/0x50 [<c068150f>] ? write_sysrq_trigger+0x3f/0x50 [<c0569ec4>] ? proc_reg_write+0x64/0xa0 [<c0569e60>] ? proc_reg_write+0x0/0xa0 [<c051de50>] ? vfs_write+0xa0/0x190 [<c051e8d1>] ? sys_write+0x41/0x70 [<c0409adc>] ? syscall_call+0x7/0xb Code: a0 c0 01 0f b6 41 03 19 d2 f7 d2 83 e2 03 83 e0 cf c1 e2 04 09 d0 88 41 03 f3 c3 90 c7 05 c8 1b 9e c0 01 00 00 00 0f ae f8 89 f6 <c6> 05 00 00 00 00 01 c3 89 f6 8d bc 27 00 00 00 00 8d 50 d0 83 EIP: [<c068124f>] sysrq_handle_crash+0xf/0x20 SS:ESP 0068:ef4dbf24 CR2: 0000000000000000", "crash> bt PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" #0 [ef4dbdcc] crash_kexec at c0494922 #1 [ef4dbe20] oops_end at c080e402 #2 [ef4dbe34] no_context at c043089d #3 [ef4dbe58] bad_area at c0430b26 #4 [ef4dbe6c] do_page_fault at c080fb9b #5 [ef4dbee4] error_code (via page_fault) at c080d809 EAX: 00000063 EBX: 00000063 ECX: c09e1c8c EDX: 00000000 EBP: 00000000 DS: 007b ESI: c0a09ca0 ES: 007b EDI: 00000286 GS: 00e0 CS: 0060 EIP: c068124f ERR: ffffffff EFLAGS: 00010096 #6 [ef4dbf18] sysrq_handle_crash at c068124f #7 [ef4dbf24] __handle_sysrq at c0681469 #8 [ef4dbf48] write_sysrq_trigger at c068150a #9 [ef4dbf54] proc_reg_write at c0569ec2 #10 [ef4dbf74] vfs_write at c051de4e #11 [ef4dbf94] sys_write at c051e8cc #12 [ef4dbfb0] system_call at c0409ad5 EAX: ffffffda EBX: 00000001 ECX: b7776000 EDX: 00000002 DS: 007b ESI: 00000002 ES: 007b EDI: b7776000 SS: 007b ESP: bfcb2088 EBP: bfcb20b4 GS: 0033 CS: 0073 EIP: 00edc416 ERR: 00000004 EFLAGS: 00000246", "crash> ps PID PPID CPU TASK ST %MEM VSZ RSS COMM > 0 0 0 c09dc560 RU 0.0 0 0 [swapper] > 0 0 1 f7072030 RU 0.0 0 0 [swapper] 0 0 2 f70a3a90 RU 0.0 0 0 [swapper] > 0 0 3 f70ac560 RU 0.0 0 0 [swapper] 1 0 1 f705ba90 IN 0.0 2828 1424 init ... 
several lines omitted 5566 1 1 f2592560 IN 0.0 12876 784 auditd 5567 1 2 ef427560 IN 0.0 12876 784 auditd 5587 5132 0 f196d030 IN 0.0 11064 3184 sshd > 5591 5587 2 f196d560 RU 0.0 5084 1648 bash", "crash> vm PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" MM PGD RSS TOTAL_VM f19b5900 ef9c6000 1648k 5084k VMA START END FLAGS FILE f1bb0310 242000 260000 8000875 /lib/ld-2.12.so f26af0b8 260000 261000 8100871 /lib/ld-2.12.so efbc275c 261000 262000 8100873 /lib/ld-2.12.so efbc2a18 268000 3ed000 8000075 /lib/libc-2.12.so efbc23d8 3ed000 3ee000 8000070 /lib/libc-2.12.so efbc2888 3ee000 3f0000 8100071 /lib/libc-2.12.so efbc2cd4 3f0000 3f1000 8100073 /lib/libc-2.12.so efbc243c 3f1000 3f4000 100073 efbc28ec 3f6000 3f9000 8000075 /lib/libdl-2.12.so efbc2568 3f9000 3fa000 8100071 /lib/libdl-2.12.so efbc2f2c 3fa000 3fb000 8100073 /lib/libdl-2.12.so f26af888 7e6000 7fc000 8000075 /lib/libtinfo.so.5.7 f26aff2c 7fc000 7ff000 8100073 /lib/libtinfo.so.5.7 efbc211c d83000 d8f000 8000075 /lib/libnss_files-2.12.so efbc2504 d8f000 d90000 8100071 /lib/libnss_files-2.12.so efbc2950 d90000 d91000 8100073 /lib/libnss_files-2.12.so f26afe00 edc000 edd000 4040075 f1bb0a18 8047000 8118000 8001875 /bin/bash f1bb01e4 8118000 811d000 8101873 /bin/bash f1bb0c70 811d000 8122000 100073 f26afae0 9fd9000 9ffa000 100073 ... several lines omitted", "crash> files PID: 5591 TASK: f196d560 CPU: 2 COMMAND: \"bash\" ROOT: / CWD: /root FD FILE DENTRY INODE TYPE PATH 0 f734f640 eedc2c6c eecd6048 CHR /pts/0 1 efade5c0 eee14090 f00431d4 REG /proc/sysrq-trigger 2 f734f640 eedc2c6c eecd6048 CHR /pts/0 10 f734f640 eedc2c6c eecd6048 CHR /pts/0 255 f734f640 eedc2c6c eecd6048 CHR /pts/0", "systemctl is-enabled kdump.service && systemctl is-active kdump.service enabled active", "dracut -f --add earlykdump", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"rd.earlykdump\"", "reboot", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-1.el9.x86_64 root=/dev/mapper/rhel-root ro crashkernel=auto resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet rd.earlykdump journalctl -x | grep early-kdump Sep 13 15:46:11 redhat dracut-cmdline[304]: early-kdump is enabled. 
Sep 13 15:46:12 redhat dracut-cmdline[304]: kexec: loaded early-kdump kernel", "dnf install pesign openssl kernel-devel mokutil keyutils", "efikeygen --dbdir /etc/pki/pesign --self-sign --module --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key '", "efikeygen --dbdir /etc/pki/pesign --self-sign --kernel --common-name 'CN= Organization signing key ' --nickname ' Custom Secure Boot key ' --token 'NSS FIPS 140-2 Certificate DB'", "keyctl list %:.builtin_trusted_keys 6 keys in keyring: ...asymmetric: Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87 ...asymmetric: Red Hat Secure Boot (CA key 1): 4016841644ce3a810408050766e8f8a29 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4309bd82709c8cd54f316ed ...asymmetric: Microsoft Windows Production PCA 2011: a92902398e16c49778cd90f99e ...asymmetric: Red Hat Enterprise Linux kernel signing key: 4249689eefc77e95880b ...asymmetric: Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b7 keyctl list %:.platform 4 keys in keyring: ...asymmetric: VMware, Inc.: 4ad8da0472073 ...asymmetric: Red Hat Secure Boot CA 5: cc6fafe72 ...asymmetric: Microsoft Windows Production PCA 2011: a929f298e1 ...asymmetric: Microsoft Corporation UEFI CA 2011: 13adbf4e0bd82 keyctl list %:.blacklist 4 keys in keyring: ...blacklist: bin:f5ff83a ...blacklist: bin:0dfdbec ...blacklist: bin:38f1d22 ...blacklist: bin:51f831f", "dmesg | egrep 'integrity.*cert' [1.512966] integrity: Loading X.509 certificate: UEFI:db [1.513027] integrity: Loaded X.509 cert 'Microsoft Windows Production PCA 2011: a929023 [1.513028] integrity: Loading X.509 certificate: UEFI:db [1.513057] integrity: Loaded X.509 cert 'Microsoft Corporation UEFI CA 2011: 13adbf4309 [1.513298] integrity: Loading X.509 certificate: UEFI:MokListRT (MOKvar table) [1.513549] integrity: Loaded X.509 cert 'Red Hat Secure Boot CA 5: cc6fa5e72868ba494e93", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "mokutil --import sb_cert.cer", "pesign --certificate ' Custom Secure Boot key ' --in vmlinuz- version --sign --out vmlinuz- version .signed", "pesign --show-signature --in vmlinuz- version .signed", "mv vmlinuz- version .signed vmlinuz- version", "zcat vmlinuz- version > vmlinux- version", "pesign --certificate ' Custom Secure Boot key ' --in vmlinux- version --sign --out vmlinux- version .signed", "pesign --show-signature --in vmlinux- version .signed", "gzip --to-stdout vmlinux- version .signed > vmlinuz- version", "rm vmlinux- version *", "pesign --in /boot/efi/EFI/redhat/grubx64.efi --out /boot/efi/EFI/redhat/grubx64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubx64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubx64.efi.signed /boot/efi/EFI/redhat/grubx64.efi", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi --out /boot/efi/EFI/redhat/grubaa64.efi.signed --certificate ' Custom Secure Boot key ' --sign", "pesign --in /boot/efi/EFI/redhat/grubaa64.efi.signed --show-signature", "mv /boot/efi/EFI/redhat/grubaa64.efi.signed /boot/efi/EFI/redhat/grubaa64.efi", "certutil -d /etc/pki/pesign -n ' Custom Secure Boot key ' -Lr > sb_cert.cer", "pk12util -o sb_cert.p12 -n ' Custom Secure Boot key ' -d /etc/pki/pesign", "openssl pkcs12 -in sb_cert.p12 -out sb_cert.priv -nocerts -noenc", "/usr/src/kernels/USD(uname -r)/scripts/sign-file sha256 
sb_cert.priv sb_cert.cer my_module .ko", "modinfo my_module .ko | grep signer signer: Your Name Key", "insmod my_module .ko", "modprobe -r my_module .ko", "dnf -y install kernel-modules-extra", "keyctl list %:.platform", "cp my_module .ko /lib/modules/USD(uname -r)/extra/", "depmod -a", "modprobe -v my_module", "echo \" my_module \" > /etc/modules-load.d/ my_module .conf", "lsmod | grep my_module", "fwupdmgr get-devices", "fwupdmgr enable-remote lvfs", "fwupdmgr refresh", "fwupdmgr update", "fwupdmgr get-devices", "fwupdmgr get-devices", "ls /usr/share/dbxtool/", "DBXUpdate- date - architecture .cab", "fwupdmgr install /usr/share/dbxtool/DBXUpdate- date - architecture .cab", "fwupdmgr get-devices", "cat /sys/class/tpm/tpm0/tpm_version_major 2", "TPM_DEVICE=/dev/tpm0 tsscreateprimary -hi o -st Handle 80000000 TPM_DEVICE=/dev/tpm0 tssevictcontrol -hi o -ho 80000000 -hp 81000001", "tpm2_createprimary --key-algorithm=rsa2048 --key-context=key.ctxt name-alg: value: sha256 raw: 0xb ... sym-keybits: 128 rsa: xxxxxx... tpm2_evictcontrol -c key.ctxt 0x81000001 persistentHandle: 0x81000001 action: persisted", "keyctl add trusted kmk \"new 32 keyhandle=0x81000001\" @u 642500861", "keyctl show Session Keyring -3 --alswrv 500 500 keyring: ses 97833714 --alswrv 500 -1 \\ keyring: uid.1000 642500861 --alswrv 500 500 \\ trusted: kmk", "keyctl pipe 642500861 > kmk.blob", "keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "keyctl add user kmk-user \"USD(dd if=/dev/urandom bs=1 count=32 2>/dev/null)\" @u 427069434", "keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "mount securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)", "grep < options > pattern < files >", "dmesg | grep -i -e EVM -e IMA -w [ 0.943873] ima: No TPM chip found, activating TPM-bypass! [ 0.944566] ima: Allocated hash algorithm: sha256 [ 0.944579] ima: No architecture policies found [ 0.944601] evm: Initialising EVM extended attributes: [ 0.944602] evm: security.selinux [ 0.944604] evm: security.SMACK64 (disabled) [ 0.944605] evm: security.SMACK64EXEC (disabled) [ 0.944607] evm: security.SMACK64TRANSMUTE (disabled) [ 0.944608] evm: security.SMACK64MMAP (disabled) [ 0.944609] evm: security.apparmor (disabled) [ 0.944611] evm: security.ima [ 0.944612] evm: security.capability [ 0.944613] evm: HMAC attrs: 0x1 [ 1.314520] systemd[1]: systemd 252-18.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 1.717675] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
[ 4.799436] systemd[1]: systemd 252-18.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"ima_policy=appraise_tcb ima_appraise=fix evm=fix\"", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-1.el9.x86_64 root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ima_policy=appraise_tcb ima_appraise=fix evm=fix", "keyctl add user kmk \"USD(dd if=/dev/urandom bs=1 count=32 2> /dev/null)\" @u 748544121", "keyctl add encrypted evm-key \"new user:kmk 64\" @u 641780271", "mkdir -p /etc/keys/", "keyctl pipe USD(keyctl search @u user kmk) > /etc/keys/kmk", "keyctl pipe USD(keyctl search @u encrypted evm-key) > /etc/keys/evm-key", "keyctl show Session Keyring 974575405 --alswrv 0 0 keyring: ses 299489774 --alswrv 0 65534 \\ keyring: uid.0 748544121 --alswrv 0 0 \\ user: kmk 641780271 --alswrv 0 0 \\_ encrypted: evm-key ls -l /etc/keys/ total 8 -rw-r--r--. 1 root root 246 Jun 24 12:44 evm-key -rw-r--r--. 1 root root 32 Jun 24 12:43 kmk", "keyctl add user kmk \"USD(cat /etc/keys/kmk)\" @u 451342217", "keyctl add encrypted evm-key \"load USD(cat /etc/keys/evm-key)\" @u 924537557", "echo 1 > /sys/kernel/security/evm", "find / -fstype xfs -type f -uid 0 -exec head -n 1 '{}' >/dev/null \\;", "dmesg | tail -1 [... ] evm: key initialized", "echo < Test_text > > test_file", "getfattr -m . -d test_file file: test_file security.evm=0sAnDIy4VPA0HArpPO/EqiutnNyBql security.ima=0sAQOEDeuUnWzwwKYk+n66h/vby3eD", "dnf install rpm-plugin-ima -y", "dnf reinstall '*' -y", "getfattr -m security.ima -d /usr/bin/bash", "'security.ima=0sAwIE0zIESQBnMGUCMFhf0iBeM7NjjhCCHVt4/ORx1eCegjrWSHzFbJMCsAhR9bYU2hNGjiWUYT2IIqWaaAIxALFGUkqGP5vDLuxQXibO9g7HFcfyZzRBY4rbKPsXcAIZRtDHVS5dQBZqM3hyS5v1MA=='", "evmctl ima_verify -k /usr/share/doc/kernel-keys/USD(uname -r)/ima.cer /usr/bin/bash", "'key 1: d3320449 /usr/share/doc/kernel-keys/5.14.0-359.el9.x86-64/ima.cer /usr/bin/bash:' verification is OK", "mkdir -p /etc/keys/ima cp /usr/share/doc/kernel-keys/USD(uname -r)/ima.cer /etc/ima/keys", "keyctl padd asymmetric RedHat-IMA %:.ima < /etc/ima/keys/ima.cer", "PROC_SUPER_MAGIC = 0x9fa0 dont_appraise fsmagic=0x9fa0 SYSFS_MAGIC = 0x62656572 dont_appraise fsmagic=0x62656572 DEBUGFS_MAGIC = 0x64626720 dont_appraise fsmagic=0x64626720 TMPFS_MAGIC = 0x01021994 dont_appraise fsmagic=0x1021994 RAMFS_MAGIC dont_appraise fsmagic=0x858458f6 DEVPTS_SUPER_MAGIC=0x1cd1 dont_appraise fsmagic=0x1cd1 BINFMTFS_MAGIC=0x42494e4d dont_appraise fsmagic=0x42494e4d SECURITYFS_MAGIC=0x73636673 dont_appraise fsmagic=0x73636673 SELINUX_MAGIC=0xf97cff8c dont_appraise fsmagic=0xf97cff8c SMACK_MAGIC=0x43415d53 dont_appraise fsmagic=0x43415d53 NSFS_MAGIC=0x6e736673 dont_appraise fsmagic=0x6e736673 EFIVARFS_MAGIC dont_appraise fsmagic=0xde5e81e4 CGROUP_SUPER_MAGIC=0x27e0eb dont_appraise fsmagic=0x27e0eb CGROUP2_SUPER_MAGIC=0x63677270 dont_appraise fsmagic=0x63677270 appraise func=BPRM_CHECK appraise func=FILE_MMAP mask=MAY_EXEC", "echo /etc/sysconfig/ima-policy > /sys/kernel/security/ima/policy echo USD? 
0", "echo 'add_dracutmodules+=\" integrity \"' > /etc/dracut.conf.d/98-integrity.conf dracut -f", "openssl req -new -x509 -utf8 -sha256 -days 3650 -batch -config ima_ca.conf -outform DER -out custom_ima_ca.der -keyout custom_ima_ca.priv", "cat ima_ca.conf [ req ] default_bits = 2048 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = ca [ req_distinguished_name ] O = YOUR_ORG CN = YOUR_COMMON_NAME IMA CA emailAddress = YOUR_EMAIL [ ca ] basicConstraints=critical,CA:TRUE subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer keyUsage=critical,keyCertSign,cRLSign", "openssl req -new -utf8 -sha256 -days 365 -batch -config ima.conf -out custom_ima.csr -keyout custom_ima.priv", "cat ima.conf [ req ] default_bits = 2048 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = code_signing [ req_distinguished_name ] O = YOUR_ORG CN = YOUR_COMMON_NAME IMA signing key emailAddress = YOUR_EMAIL [ code_signing ] basicConstraints=critical,CA:FALSE keyUsage=digitalSignature subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer", "openssl x509 -req -in custom_ima.csr -days 365 -extfile ima.conf -extensions code_signing -CA custom_ima_ca.der -CAkey custom_ima_ca.priv -CAcreateserial -outform DER -out ima.der", "evmctl ima_sign /etc/sysconfig/ima-policy -k < PATH_TO_YOUR_CUSTOM_IMA_KEY > Place your public certificate under /etc/keys/ima/ and add it to the .ima keyring", "keyctl padd asymmetric CUSTOM_IMA1 %:.ima < /etc/ima/keys/my_ima.cer", "echo /etc/sysconfig/ima-policy > /sys/kernel/security/ima/policy echo USD? 0", "systemctl show --property < unit file option > < service name >", "systemctl set-property < service name > < unit file option >=< value >", "systemctl show --property < unit file option > < service name >", "<name> .service", "<name> .scope", "<parent-name> .slice", "Control group /: -.slice ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 967 gdm-session-worker [pam/gdm-launch-environment] │ │ │ ├─1035 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart │ │ │ ├─1054 /usr/libexec/Xorg vt1 -displayfd 3 -auth /run/user/42/gdm/Xauthority -background none -noreset -keeptty -verbose 3 │ │ │ ├─1212 /usr/libexec/gnome-session-binary --autostart /usr/share/gdm/greeter/autostart │ │ │ ├─1369 /usr/bin/gnome-shell │ │ │ ├─1732 ibus-daemon --xim --panel disable │ │ │ ├─1752 /usr/libexec/ibus-dconf │ │ │ ├─1762 /usr/libexec/ibus-x11 --kill-daemon │ │ │ ├─1912 /usr/libexec/gsd-xsettings │ │ │ ├─1917 /usr/libexec/gsd-a11y-settings │ │ │ ├─1920 /usr/libexec/gsd-clipboard ... ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice ├─rngd.service │ └─800 /sbin/rngd -f ├─systemd-udevd.service │ └─659 /usr/lib/systemd/systemd-udevd ├─chronyd.service │ └─823 /usr/sbin/chronyd ├─auditd.service │ ├─761 /sbin/auditd │ └─763 /usr/sbin/sedispatch ├─accounts-daemon.service │ └─876 /usr/libexec/accounts-daemon ├─example.service │ ├─ 929 /bin/bash /home/jdoe/example.sh │ └─4902 sleep 1 ...", "systemctl UNIT LOAD ACTIVE SUB DESCRIPTION ... init.scope loaded active running System and Service Manager session-2.scope loaded active running Session 2 of user jdoe abrt-ccpp.service loaded active exited Install ABRT coredump hook abrt-oops.service loaded active running ABRT kernel log watcher abrt-vmcore.service loaded active exited Harvest vmcores for ABRT abrt-xorg.service loaded active running ABRT Xorg log watcher ... 
-.slice loaded active active Root Slice machine.slice loaded active active Virtual Machine and Container Slice system-getty.slice loaded active active system-getty.slice system-lvm2\\x2dpvscan.slice loaded active active system-lvm2\\x2dpvscan.slice system-sshd\\x2dkeygen.slice loaded active active system-sshd\\x2dkeygen.slice system-systemd\\x2dhibernate\\x2dresume.slice loaded active active system-systemd\\x2dhibernate\\x2dresume> system-user\\x2druntime\\x2ddir.slice loaded active active system-user\\x2druntime\\x2ddir.slice system.slice loaded active active System Slice user-1000.slice loaded active active User Slice of UID 1000 user-42.slice loaded active active User Slice of UID 42 user.slice loaded active active User and Session Slice ...", "systemctl --all", "systemctl --type service,masked", "systemd-cgls Control group /: -.slice ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 965 gdm-session-worker [pam/gdm-launch-environment] │ │ │ ├─1040 /usr/libexec/gdm-x-session gnome-session --autostart /usr/share/gdm/greeter/autostart ... ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 └─system.slice ... ├─example.service │ ├─6882 /bin/bash /home/jdoe/example.sh │ └─6902 sleep 1 ├─systemd-journald.service └─629 /usr/lib/systemd/systemd-journald ...", "systemd-cgls memory Controller memory; Control group /: ├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 18 ├─user.slice │ ├─user-42.slice │ │ ├─session-c1.scope │ │ │ ├─ 965 gdm-session-worker [pam/gdm-launch-environment] ... └─system.slice | ... ├─chronyd.service │ └─844 /usr/sbin/chronyd ├─example.service │ ├─8914 /bin/bash /home/jdoe/example.sh │ └─8916 sleep 1 ...", "systemctl status example.service ● example.service - My example service Loaded: loaded (/usr/lib/systemd/system/example.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2019-04-16 12:12:39 CEST; 3s ago Main PID: 17737 (bash) Tasks: 2 (limit: 11522) Memory: 496.0K (limit: 1.5M) CGroup: /system.slice/example.service ├─17737 /bin/bash /home/jdoe/example.sh └─17743 sleep 1 Apr 16 12:12:39 redhat systemd[1]: Started My example service. Apr 16 12:12:39 redhat bash[17737]: The current time is Tue Apr 16 12:12:39 CEST 2019 Apr 16 12:12:40 redhat bash[17737]: The current time is Tue Apr 16 12:12:40 CEST 2019", "cat /proc/2467/cgroup 0::/system.slice/example.service", "cat /sys/fs/cgroup/system.slice/example.service/cgroup.controllers memory pids ls /sys/fs/cgroup/system.slice/example.service/ cgroup.controllers cgroup.events ... cpu.pressure cpu.stat io.pressure memory.current memory.events ... pids.current pids.events pids.max", "systemd-cgtop Control Group Tasks %CPU Memory Input/s Output/s / 607 29.8 1.5G - - /system.slice 125 - 428.7M - - /system.slice/ModemManager.service 3 - 8.6M - - /system.slice/NetworkManager.service 3 - 12.8M - - /system.slice/accounts-daemon.service 3 - 1.8M - - /system.slice/boot.mount - - 48.0K - - /system.slice/chronyd.service 1 - 2.0M - - /system.slice/cockpit.socket - - 1.3M - - /system.slice/colord.service 3 - 3.5M - - /system.slice/crond.service 1 - 1.8M - - /system.slice/cups.service 1 - 3.1M - - /system.slice/dev-hugepages.mount - - 244.0K - - /system.slice/dev-mapper-rhel\\x2dswap.swap - - 912.0K - - /system.slice/dev-mqueue.mount - - 48.0K - - /system.slice/example.service 2 - 2.0M - - /system.slice/firewalld.service 2 - 28.8M - -", "... 
[Service] MemoryMax=1500K ...", "systemctl daemon-reload", "systemctl restart example.service", "cat /sys/fs/cgroup/system.slice/example.service/memory.max 1536000", "systemctl show --property <CPU affinity configuration option> <service name>", "systemctl set-property <service name> CPUAffinity= <value>", "systemctl restart <service name>", "systemctl daemon-reload", "systemctl show --property <NUMA policy configuration option> <service name>", "systemctl set-property <service name> NUMAPolicy= <value>", "systemctl restart <service name>", "systemd daemon-reload", "systemd-run --unit= <name> --slice= <name> .slice <command>", "Running as unit <name> .service", "systemd-run --unit= <name> --slice= <name> .slice --remain-after-exit <command>", "systemctl stop < name >.service", "systemctl kill < name >.service --kill-who= PID,... --signal=< signal >", "mkdir /sys/fs/cgroup/Example/", "ll /sys/fs/cgroup/Example/ -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 10:33 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.procs ... -rw-r- r--. 1 root root 0 Jun 1 10:33 cgroup.subtree_control -r- r- r--. 1 root root 0 Jun 1 10:33 memory.events.local -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.high -rw-r- r--. 1 root root 0 Jun 1 10:33 memory.low ... -r- r- r--. 1 root root 0 Jun 1 10:33 pids.current -r- r- r--. 1 root root 0 Jun 1 10:33 pids.events -rw-r- r--. 1 root root 0 Jun 1 10:33 pids.max", "cat /sys/fs/cgroup/cgroup.controllers cpuset cpu io memory hugetlb pids rdma", "echo \"+cpu\" >> /sys/fs/cgroup/cgroup.subtree_control echo \"+cpuset\" >> /sys/fs/cgroup/cgroup.subtree_control", "echo \"+cpu +cpuset\" >> /sys/fs/cgroup/Example/cgroup.subtree_control", "mkdir /sys/fs/cgroup/Example/tasks/", "ll /sys/fs/cgroup/Example/tasks -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.controllers -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.events -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.freeze -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.depth -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.max.descendants -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.procs -r- r- r--. 1 root root 0 Jun 1 11:45 cgroup.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.subtree_control -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.threads -rw-r- r--. 1 root root 0 Jun 1 11:45 cgroup.type -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.max -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.effective -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.cpus.partition -rw-r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems -r- r- r--. 1 root root 0 Jun 1 11:45 cpuset.mems.effective -r- r- r--. 1 root root 0 Jun 1 11:45 cpu.stat -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight -rw-r- r--. 1 root root 0 Jun 1 11:45 cpu.weight.nice -rw-r- r--. 1 root root 0 Jun 1 11:45 io.pressure -rw-r- r--. 1 root root 0 Jun 1 11:45 memory.pressure", "cat /sys/fs/cgroup/Example/tasks/cgroup.controllers cpuset cpu", "... 
├── Example │ ├── g1 │ ├── g2 │ └── g3 ...", "echo \"150\" > /sys/fs/cgroup/Example/g1/cpu.weight echo \"100\" > /sys/fs/cgroup/Example/g2/cpu.weight echo \"50\" > /sys/fs/cgroup/Example/g3/cpu.weight", "echo \"33373\" > /sys/fs/cgroup/Example/g1/cgroup.procs echo \"33374\" > /sys/fs/cgroup/Example/g2/cgroup.procs echo \"33377\" > /sys/fs/cgroup/Example/g3/cgroup.procs", "cat /proc/33373/cgroup /proc/33374/cgroup /proc/33377/cgroup 0::/Example/g1 0::/Example/g2 0::/Example/g3", "top top - 05:17:18 up 1 day, 18:25, 1 user, load average: 3.03, 3.03, 3.00 Tasks: 95 total, 4 running, 91 sleeping, 0 stopped, 0 zombie %Cpu(s): 18.1 us, 81.6 sy, 0.0 ni, 0.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st MiB Mem : 3737.0 total, 3233.7 free, 132.8 used, 370.5 buff/cache MiB Swap: 4060.0 total, 4060.0 free, 0.0 used. 3373.1 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 33373 root 20 0 18720 1748 1460 R 49.5 0.0 415:05.87 sha1sum 33374 root 20 0 18720 1756 1464 R 32.9 0.0 412:58.33 sha1sum 33377 root 20 0 18720 1860 1568 R 16.3 0.0 411:03.12 sha1sum 760 root 20 0 416620 28540 15296 S 0.3 0.7 0:10.23 tuned 1 root 20 0 186328 14108 9484 S 0.0 0.4 0:02.00 systemd 2 root 20 0 0 0 0 S 0.0 0.0 0:00.01 kthread", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"", "grubby --update-kernel=ALL --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"", "mount -l | grep cgroup tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,size=4096k,nr_inodes=1024,mode=755,inode64) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu,cpuacct) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_cls,net_prio) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) cgroup on /sys/fs/cgroup/misc type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,misc) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,rdma)", "ll /sys/fs/cgroup/ dr-xr-xr-x. 10 root root 0 Mar 16 09:34 blkio lrwxrwxrwx. 1 root root 11 Mar 16 09:34 cpu -> cpu,cpuacct lrwxrwxrwx. 1 root root 11 Mar 16 09:34 cpuacct -> cpu,cpuacct dr-xr-xr-x. 10 root root 0 Mar 16 09:34 cpu,cpuacct dr-xr-xr-x. 2 root root 0 Mar 16 09:34 cpuset dr-xr-xr-x. 10 root root 0 Mar 16 09:34 devices dr-xr-xr-x. 2 root root 0 Mar 16 09:34 freezer dr-xr-xr-x. 2 root root 0 Mar 16 09:34 hugetlb dr-xr-xr-x. 10 root root 0 Mar 16 09:34 memory dr-xr-xr-x. 2 root root 0 Mar 16 09:34 misc lrwxrwxrwx. 1 root root 16 Mar 16 09:34 net_cls -> net_cls,net_prio dr-xr-xr-x. 
2 root root 0 Mar 16 09:34 net_cls,net_prio lrwxrwxrwx. 1 root root 16 Mar 16 09:34 net_prio -> net_cls,net_prio dr-xr-xr-x. 2 root root 0 Mar 16 09:34 perf_event dr-xr-xr-x. 10 root root 0 Mar 16 09:34 pids dr-xr-xr-x. 2 root root 0 Mar 16 09:34 rdma dr-xr-xr-x. 11 root root 0 Mar 16 09:34 systemd", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"systemd.unified_cgroup_hierarchy=0 systemd.legacy_systemd_cgroup_controller\"", "top top - 11:34:09 up 11 min, 1 user, load average: 0.51, 0.27, 0.22 Tasks: 267 total, 3 running, 264 sleeping, 0 stopped, 0 zombie %Cpu(s): 49.0 us, 3.3 sy, 0.0 ni, 47.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.0 st MiB Mem : 1826.8 total, 303.4 free, 1046.8 used, 476.5 buff/cache MiB Swap: 1536.0 total, 1396.0 free, 140.0 used. 616.4 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 99.3 0.1 0:32.71 sha1sum 5760 jdoe 20 0 3603868 205188 64196 S 3.7 11.0 0:17.19 gnome-shell 6448 jdoe 20 0 743648 30640 19488 S 0.7 1.6 0:02.73 gnome-terminal- 1 root 20 0 245300 6568 4116 S 0.3 0.4 0:01.87 systemd 505 root 20 0 0 0 0 I 0.3 0.0 0:00.75 kworker/u4:4-events_unbound", "mkdir /sys/fs/cgroup/cpu/Example/", "ll /sys/fs/cgroup/cpu/Example/ -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.clone_children -rw-r- r--. 1 root root 0 Mar 11 11:42 cgroup.procs -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_all -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_percpu_user -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_sys -r- r- r--. 1 root root 0 Mar 11 11:42 cpuacct.usage_user -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.cfs_quota_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_period_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.rt_runtime_us -rw-r- r--. 1 root root 0 Mar 11 11:42 cpu.shares -r- r- r--. 1 root root 0 Mar 11 11:42 cpu.stat -rw-r- r--. 1 root root 0 Mar 11 11:42 notify_on_release -rw-r- r--. 1 root root 0 Mar 11 11:42 tasks", "echo \"1000000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us echo \"200000\" > /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us", "cat /sys/fs/cgroup/cpu/Example/cpu.cfs_period_us /sys/fs/cgroup/cpu/Example/cpu.cfs_quota_us 1000000 200000", "echo \"6955\" > /sys/fs/cgroup/cpu/Example/cgroup.procs", "cat /proc/6955/cgroup 12:cpuset:/ 11:hugetlb:/ 10:net_cls,net_prio:/ 9:memory:/user.slice/user-1000.slice/[email protected] 8:devices:/user.slice 7:blkio:/ 6:freezer:/ 5:rdma:/ 4:pids:/user.slice/user-1000.slice/[email protected] 3:perf_event:/ 2:cpu,cpuacct:/Example 1:name=systemd:/user.slice/user-1000.slice/[email protected]/gnome-terminal-server.service", "top top - 12:28:42 up 1:06, 1 user, load average: 1.02, 1.02, 1.00 Tasks: 266 total, 6 running, 260 sleeping, 0 stopped, 0 zombie %Cpu(s): 11.0 us, 1.2 sy, 0.0 ni, 87.5 id, 0.0 wa, 0.2 hi, 0.0 si, 0.2 st MiB Mem : 1826.8 total, 287.1 free, 1054.4 used, 485.3 buff/cache MiB Swap: 1536.0 total, 1396.7 free, 139.2 used. 
608.3 avail Mem PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 6955 root 20 0 228440 1752 1472 R 20.6 0.1 47:11.43 sha1sum 5760 jdoe 20 0 3604956 208832 65316 R 2.3 11.2 0:43.50 gnome-shell 6448 jdoe 20 0 743836 31736 19488 S 0.7 1.7 0:08.25 gnome-terminal- 505 root 20 0 0 0 0 I 0.3 0.0 0:03.39 kworker/u4:4-events_unbound 4217 root 20 0 74192 1612 1320 S 0.3 0.1 0:01.19 spice-vdagentd", "dnf install bcc-tools", "ls -l /usr/share/bcc/tools/ -rwxr-xr-x. 1 root root 4198 Dec 14 17:53 dcsnoop -rwxr-xr-x. 1 root root 3931 Dec 14 17:53 dcstat -rwxr-xr-x. 1 root root 20040 Dec 14 17:53 deadlock_detector -rw-r--r--. 1 root root 7105 Dec 14 17:53 deadlock_detector.c drwxr-xr-x. 3 root root 8192 Mar 11 10:28 doc -rwxr-xr-x. 1 root root 7588 Dec 14 17:53 execsnoop -rwxr-xr-x. 1 root root 6373 Dec 14 17:53 ext4dist -rwxr-xr-x. 1 root root 10401 Dec 14 17:53 ext4slower", "/usr/share/bcc/tools/execsnoop", "ls /usr/share/bcc/tools/doc/", "PCOMM PID PPID RET ARGS ls 8382 8287 0 /usr/bin/ls --color=auto /usr/share/bcc/tools/doc/", "/usr/share/bcc/tools/opensnoop -n uname", "uname", "PID COMM FD ERR PATH 8596 uname 3 0 /etc/ld.so.cache 8596 uname 3 0 /lib64/libc.so.6 8596 uname 3 0 /usr/lib/locale/locale-archive", "/usr/share/bcc/tools/biotop 30", "dd if=/dev/vda of=/dev/zero", "PID COMM D MAJ MIN DISK I/O Kbytes AVGms 9568 dd R 252 0 vda 16294 14440636.0 3.69 48 kswapd0 W 252 0 vda 1763 120696.0 1.65 7571 gnome-shell R 252 0 vda 834 83612.0 0.33 1891 gnome-shell R 252 0 vda 1379 19792.0 0.15 7515 Xorg R 252 0 vda 280 9940.0 0.28 7579 llvmpipe-1 R 252 0 vda 228 6928.0 0.19 9515 gnome-control-c R 252 0 vda 62 6444.0 0.43 8112 gnome-terminal- R 252 0 vda 67 2572.0 1.54 7807 gnome-software R 252 0 vda 31 2336.0 0.73 9578 awk R 252 0 vda 17 2228.0 0.66 7578 llvmpipe-0 R 252 0 vda 156 2204.0 0.07 9581 pgrep R 252 0 vda 58 1748.0 0.42 7531 InputThread R 252 0 vda 30 1200.0 0.48 7504 gdbus R 252 0 vda 3 1164.0 0.30 1983 llvmpipe-1 R 252 0 vda 39 724.0 0.08 1982 llvmpipe-0 R 252 0 vda 36 652.0 0.06", "/usr/share/bcc/tools/xfsslower 1", "vim text", "TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME 13:07:14 b'bash' 4754 R 256 0 7.11 b'vim' 13:07:14 b'vim' 4754 R 832 0 4.03 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 32 20 1.04 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 1982 0 2.30 b'vimrc' 13:07:14 b'vim' 4754 R 1393 0 2.52 b'getscriptPlugin.vim' 13:07:45 b'vim' 4754 S 0 0 6.71 b'text' 13:07:45 b'pool' 2588 R 16 0 5.58 b'text'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/managing_monitoring_and_updating_the_kernel/index
Chapter 14. ConfigMap reference for Cluster Monitoring Operator
Chapter 14. ConfigMap reference for Cluster Monitoring Operator 14.1. Cluster Monitoring configuration reference Parts of Cluster Monitoring are configurable. The API is accessible through parameters defined in various ConfigMaps. Depending on which part of the stack you want to configure, edit the following: The configuration of OpenShift Container Platform monitoring components in a ConfigMap called cluster-monitoring-config in the openshift-monitoring namespace. Defined by ClusterMonitoringConfiguration . The configuration of components that monitor user-defined projects in a ConfigMap called user-workload-monitoring-config in the openshift-user-workload-monitoring namespace. Defined by UserWorkloadConfiguration . The configuration file itself is always defined under the config.yaml key within the ConfigMap data. Note Not all configuration parameters are exposed. Configuring Cluster Monitoring is optional. If the configuration does not exist or is empty or malformed, defaults are used. 14.2. AdditionalAlertmanagerConfig 14.2.1. Description AdditionalAlertmanagerConfig defines configuration on how a component should communicate with additional Alertmanager instances. 14.2.2. Required apiVersion Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig , ThanosRulerConfig Property Type Description apiVersion string APIVersion defines the api version of Alertmanager. bearerToken v1.SecretKeySelector BearerToken defines the bearer token to use when authenticating to Alertmanager. pathPrefix string PathPrefix defines the path prefix to add in front of the push endpoint path. scheme string Scheme defines the URL scheme to use when talking to Alertmanagers. staticConfigs array(string) StaticConfigs defines a list of statically configured Alertmanagers. timeout string Timeout defines the timeout used when sending alerts. tlsConfig TLSConfig TLSConfig defines the TLS Config to use for alertmanager connection. 14.3. AlertmanagerMainConfig 14.3.1. Description AlertmanagerMainConfig defines configuration related with the main Alertmanager instance. Appears in: ClusterMonitoringConfiguration Property Type Description enabled bool Enabled a boolean flag to enable or disable the main Alertmanager instance under openshift-monitoring default: true enableUserAlertmanagerConfig bool EnableUserAlertManagerConfig boolean flag to enable or disable user-defined namespaces to be selected for AlertmanagerConfig lookup, by default Alertmanager only looks for configuration in the namespace where it was deployed to. This will only work if the UWM Alertmanager instance is not enabled. default: false logLevel string LogLevel defines the log level for Alertmanager. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. resources v1.ResourceRequirements Resources define resources requests and limits for single Pods. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. volumeClaimTemplate monv1.EmbeddedPersistentVolumeClaim VolumeClaimTemplate defines persistent storage for Alertmanager. It's possible to configure storageClass and size of volume. 14.4. ClusterMonitoringConfiguration 14.4.1.
Description ClusterMonitoringConfiguration defines configuration that allows users to customise the platform monitoring stack through the cluster-monitoring-config ConfigMap in the openshift-monitoring namespace Property Type Description alertmanagerMain AlertmanagerMainConfig AlertmanagerMainConfig defines configuration related with the main Alertmanager instance. enableUserWorkload bool UserWorkloadEnabled boolean flag to enable monitoring for user-defined projects. k8sPrometheusAdapter K8sPrometheusAdapter K8sPrometheusAdapter defines configuration related with prometheus-adapter kubeStateMetrics KubeStateMetricsConfig KubeStateMetricsConfig defines configuration related with kube-state-metrics agent prometheusK8s PrometheusK8sConfig PrometheusK8sConfig defines configuration related with prometheus prometheusOperator PrometheusOperatorConfig PrometheusOperatorConfig defines configuration related with prometheus-operator openshiftStateMetrics OpenShiftStateMetricsConfig OpenShiftStateMetricsConfig defines configuration related with openshift-state-metrics agent thanosQuerier ThanosQuerierConfig ThanosQuerierConfig defines configuration related with the Thanos Querier component 14.5. K8sPrometheusAdapter 14.5.1. Description K8sPrometheusAdapter defines configuration related with Prometheus Adapter Appears in: ClusterMonitoringConfiguration Property Type Description audit Audit Audit defines the audit configuration to be used by the prometheus adapter instance. Possible profile values are: "metadata, request, requestresponse, none". default: metadata nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. 14.6. KubeStateMetricsConfig 14.6.1. Description KubeStateMetricsConfig defines configuration related with the kube-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. 14.7. OpenShiftStateMetricsConfig 14.7.1. Description OpenShiftStateMetricsConfig holds configuration related to openshift-state-metrics agent. Appears in: ClusterMonitoringConfiguration Property Type Description nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. 14.8. PrometheusK8sConfig 14.8.1. Description PrometheusK8sConfig holds configuration related to the Prometheus component. Appears in: ClusterMonitoringConfiguration Property Type Description additionalAlertmanagerConfigs array( AdditionalAlertmanagerConfig ) AlertmanagerConfigs holds configuration about how the Prometheus component should communicate with additional Alertmanager instances. default: nil externalLabels map[string]string ExternalLabels defines labels to be added to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). default: nil logLevel string LogLevel defines the log level for Prometheus. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. queryLogFile string QueryLogFile specifies the file to which PromQL queries are logged.
Supports either a bare filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus, or a full path, in which case an emptyDir volume is mounted at that location. Relative paths are not supported, nor is writing to Linux standard streams. default: "" remoteWrite array( remotewritespec ) RemoteWrite Holds the remote write configuration, everything from url, authorization to relabeling resources v1.ResourceRequirements Resources define resources requests and limits for single Pods. retention string Retention defines the Time duration Prometheus shall retain data for. Must match the regular expression [0-9]+(ms|s|m|h|d|w|y) (milliseconds seconds minutes hours days weeks years). default: 15d tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. volumeClaimTemplate monv1.EmbeddedPersistentVolumeClaim VolumeClaimTemplate defines persistent storage for Prometheus. It's possible to configure storageClass and size of volume. 14.9. PrometheusOperatorConfig 14.9.1. Description PrometheusOperatorConfig holds configuration related to Prometheus Operator. Appears in: ClusterMonitoringConfiguration , UserWorkloadConfiguration Property Type Description logLevel string LogLevel defines the log level for Prometheus Operator. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. 14.10. PrometheusRestrictedConfig 14.10.1. Description PrometheusRestrictedConfig defines configuration related to the Prometheus component that will monitor user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs array( additionalalertmanagerconfig ) AlertmanagerConfigs holds configuration about how the Prometheus component should communicate with additional Alertmanager instances. default: nil enforcedSampleLimit uint64 EnforcedSampleLimit defines a global limit on the number of scraped samples that will be accepted. This overrides any SampleLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the SampleLimit to keep the overall number of samples/series under the desired limit. Note that if SampleLimit is lower, that value will be taken instead. default: 0 enforcedTargetLimit uint64 EnforcedTargetLimit defines a global limit on the number of scraped targets. This overrides any TargetLimit set per ServiceMonitor and/or PodMonitor. It is meant to be used by admins to enforce the TargetLimit to keep the overall number of targets under the desired limit. Note that if TargetLimit is lower, that value will be taken instead, except if either value is zero, in which case the non-zero value will be used. If both values are zero, no limit is enforced. default: 0 externalLabels map[string]string ExternalLabels defines labels to be added to any time series or alerts when communicating with external systems (federation, remote storage, Alertmanager). default: nil logLevel string LogLevel defines the log level for Prometheus. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. queryLogFile string QueryLogFile specifies the file to which PromQL queries are logged.
Supports either a bare filename, in which case the queries are saved to an emptyDir volume at /var/log/prometheus, or a full path, in which case an emptyDir volume is mounted at that location. Relative paths are not supported, nor is writing to Linux standard streams. default: "" remoteWrite array( remotewritespec ) RemoteWrite Holds the remote write configuration, everything from url, authorization to relabeling resources v1.ResourceRequirements Resources define resources requests and limits for single Pods. retention string Retention defines the Time duration Prometheus shall retain data for. Must match the regular expression [0-9]+(ms|s|m|h|d|w|y) (milliseconds seconds minutes hours days weeks years). default: 15d tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. volumeClaimTemplate monv1.EmbeddedPersistentVolumeClaim VolumeClaimTemplate defines persistent storage for Prometheus. It's possible to configure storageClass and size of volume. 14.11. RemoteWriteSpec 14.11.1. Description RemoteWriteSpec is an almost identical copy of monv1.RemoteWriteSpec but with the BearerToken field removed. In the future other fields might be added here. 14.11.2. Required url Appears in: PrometheusK8sConfig , PrometheusRestrictedConfig Property Type Description authorization monv1.SafeAuthorization Authorization defines the authorization section for remote write. basicAuth monv1.BasicAuth BasicAuth defines configuration for basic authentication for the URL. bearerTokenFile string BearerTokenFile defines the file where the bearer token for remote write resides. headers map[string]string Headers defines custom HTTP headers to be sent along with each remote write request. Be aware that headers that are set by Prometheus itself can't be overwritten. metadataConfig monv1.MetadataConfig MetadataConfig configures the sending of series metadata to remote storage. name string Name defines the name of the remote write queue, must be unique if specified. The name is used in metrics and logging in order to differentiate queues. oauth2 monv1.OAuth2 OAuth2 configures OAuth2 authentication for remote write. proxyUrl string ProxyURL defines an optional proxy URL. queueConfig monv1.QueueConfig QueueConfig allows tuning of the remote write queue parameters. remoteTimeout string RemoteTimeout defines the timeout for requests to the remote write endpoint. sigv4 monv1.Sigv4 Sigv4 allows you to configure AWS Signature Version 4 authentication. tlsConfig monv1.SafeTLSConfig TLSConfig defines the TLS configuration to use for remote write. url string URL defines the URL of the endpoint to send samples to. writeRelabelConfigs array( monv1.RelabelConfig ) WriteRelabelConfigs defines the list of remote write relabel configurations. 14.12. TLSConfig 14.12.1. Description TLSConfig configures the options for TLS connections. 14.12.2. Required insecureSkipVerify Appears in: AdditionalAlertmanagerConfig Property Type Description ca v1.SecretKeySelector CA defines the CA cert in the Prometheus container to use for the targets. cert v1.SecretKeySelector Cert defines the client cert in the Prometheus container to use for the targets. key v1.SecretKeySelector Key defines the client key in the Prometheus container to use for the targets. serverName string ServerName is used to verify the hostname for the targets. insecureSkipVerify bool InsecureSkipVerify disables target certificate validation. 14.13. ThanosQuerierConfig 14.13.1. Description ThanosQuerierConfig holds configuration related to the Thanos Querier component.
Appears in: ClusterMonitoringConfiguration Property Type Description enableRequestLogging bool EnableRequestLogging boolean flag to enable or disable request logging default: false logLevel string LogLevel defines the log level for Thanos Querier. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. resources v1.ResourceRequirements Resources define resources requests and limits for single Pods. tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. 14.14. ThanosRulerConfig 14.14.1. Description ThanosRulerConfig defines configuration for the Thanos Ruler instance for user-defined projects. Appears in: UserWorkloadConfiguration Property Type Description additionalAlertmanagerConfigs array( additionalalertmanagerconfig ) AlertmanagerConfigs holds configuration about how the Thanos Ruler component should communicate with additional Alertmanager instances. default: nil logLevel string LogLevel defines the log level for Thanos Ruler. Possible values are: error, warn, info, debug. default: info nodeSelector map[string]string NodeSelector defines which Nodes the Pods are scheduled on. resources v1.ResourceRequirements Resources define resources requests and limits for single Pods. retention string Retention defines the time duration Thanos Ruler shall retain data for. Must match the regular expression [0-9]+(ms|s|m|h|d|w|y) (milliseconds seconds minutes hours days weeks years). default: 15d tolerations array( v1.Toleration ) Tolerations defines the Pods tolerations. volumeClaimTemplate monv1.EmbeddedPersistentVolumeClaim VolumeClaimTemplate defines persistent storage for Thanos Ruler. It's possible to configure storageClass and size of volume. 14.15. UserWorkloadConfiguration 14.15.1. Description UserWorkloadConfiguration defines configuration that allows users to customise the monitoring stack responsible for user-defined projects through the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace Property Type Description prometheus PrometheusRestrictedConfig Prometheus defines configuration for Prometheus component. prometheusOperator PrometheusOperatorConfig PrometheusOperator defines configuration for prometheus-operator component. thanosRuler ThanosRulerConfig ThanosRuler defines configuration for the Thanos Ruler component
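As a concrete illustration of how these fields come together, the following is a minimal sketch of a cluster-monitoring-config ConfigMap. The ConfigMap name, namespace, and the config.yaml key come from this reference; the specific values shown (retention period, node selector label, storage class and size) are placeholder assumptions, not recommended settings.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    alertmanagerMain:
      logLevel: info
    prometheusK8s:
      retention: 24h                        # must match [0-9]+(ms|s|m|h|d|w|y)
      nodeSelector:
        node-role.kubernetes.io/infra: ""   # assumed example label
      volumeClaimTemplate:
        spec:
          storageClassName: standard        # assumed storage class
          resources:
            requests:
              storage: 40Gi                 # assumed size

A user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace follows the same pattern, with the top-level keys of UserWorkloadConfiguration (prometheus, prometheusOperator, thanosRuler) under its config.yaml key.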
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/monitoring/configmap-reference-for-cluster-monitoring-operator
Chapter 60. Manipulating Interceptor Chains on the Fly
Chapter 60. Manipulating Interceptor Chains on the Fly Abstract Interceptors can reconfigure an endpoint's interceptor chain as part of its message processing logic. They can add new interceptors, remove interceptors, reorder interceptors, and even suspend the interceptor chain. Any on-the-fly manipulation is invocation-specific, so the original chain is used each time an endpoint is involved in a message exchange. Overview Interceptor chains only live as long as the message exchange that sparked their creation. Each message contains a reference to the interceptor chain responsible for processing it. Developers can use this reference to alter the message's interceptor chain. Because the chain is per-exchange, any changes made to a message's interceptor chain will not affect other message exchanges. Chain life-cycle Interceptor chains and the interceptors in the chain are instantiated on a per-invocation basis. When an endpoint is invoked to participate in a message exchange, the required interceptor chains are instantiated along with instances of their interceptors. When the message exchange that caused the creation of the interceptor chain is completed, the chain and its interceptor instances are destroyed. This means that any changes you make to the interceptor chain or to the fields of an interceptor do not persist across message exchanges. So, if an interceptor places another interceptor in the active chain, only the active chain is affected. Any future message exchanges will be created from a pristine state as determined by the endpoint's configuration. It also means that a developer cannot set flags in an interceptor that will alter future message processing. If an interceptor needs to pass information along to future instances, it can set a property in the message context. The context does persist across message exchanges. Getting the interceptor chain The first step in changing a message's interceptor chain is getting the interceptor chain. This is done using the Message.getInterceptorChain() method shown in Example 60.1, "Method for getting an interceptor chain" . The interceptor chain is returned as an org.apache.cxf.interceptor.InterceptorChain object. Example 60.1. Method for getting an interceptor chain InterceptorChain getInterceptorChain Adding interceptors The InterceptorChain object has two methods, shown in Example 60.2, "Methods for adding interceptors to an interceptor chain" , for adding interceptors to an interceptor chain. One allows you to add a single interceptor and the other allows you to add multiple interceptors. Example 60.2. Methods for adding interceptors to an interceptor chain add Interceptor<? extends Message> i add Collection<Interceptor<? extends Message>> i Example 60.3, "Adding an interceptor to an interceptor chain on-the-fly" shows code for adding a single interceptor to a message's interceptor chain. Example 60.3. Adding an interceptor to an interceptor chain on-the-fly The code in Example 60.3, "Adding an interceptor to an interceptor chain on-the-fly" does the following: Instantiates a copy of the interceptor to be added to the chain. Important The interceptor being added to the chain should be in either the same phase as the current interceptor or a later phase than the current interceptor. Gets the interceptor chain for the current message. Adds the new interceptor to the chain.
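Because the inserted interceptor must sit in the same phase as the current interceptor or in a later one, the usual approach is to have it extend AbstractPhaseInterceptor and declare its phase in its constructor. The following sketch is illustrative only; the class name and the choice of Phase.USER_LOGICAL are assumptions, not part of Example 60.3.

import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.message.Message;
import org.apache.cxf.phase.AbstractPhaseInterceptor;
import org.apache.cxf.phase.Phase;

// Hypothetical interceptor meant to be added to an active chain on the fly.
public class AddedOnTheFlyInterceptor extends AbstractPhaseInterceptor<Message> {
    public AddedOnTheFlyInterceptor() {
        // The declared phase must not be earlier than the phase of the
        // interceptor that inserts this one into the active chain.
        super(Phase.USER_LOGICAL);
    }

    @Override
    public void handleMessage(Message message) throws Fault {
        // Per-invocation logic only; any state set on fields here does not
        // persist across message exchanges.
    }
}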
Removing interceptors The InterceptorChain object has one method, shown in Example 60.4, "Method for removing interceptors from an interceptor chain" , for removing an interceptor from an interceptor chain. Example 60.4. Method for removing interceptors from an interceptor chain remove Interceptor<? extends Message> i Example 60.5, "Removing an interceptor from an interceptor chain on-the-fly" shows code for removing an interceptor from a message's interceptor chain. Example 60.5. Removing an interceptor from an interceptor chain on-the-fly Where InterceptorClassName is the class name of the interceptor you want to remove from the chain.
[ "void handleMessage(Message message) { AddledInterceptor addled = new AddledInterceptor(); InterceptorChain chain = message.getInterceptorChain(); chain.add(addled); }", "void handleMessage(Message message) { Iterator<Interceptor<? extends Message>> iterator = message.getInterceptorChain().iterator(); Interceptor<?> removeInterceptor = null; for (; iterator.hasNext(); ) { Interceptor<?> interceptor = iterator.next(); if (interceptor.getClass().getName().equals(\" InterceptorClassName \")) { removeInterceptor = interceptor; break; } } if (removeInterceptor != null) { log.debug(\"Removing interceptor {}\",removeInterceptor.getClass().getName()); message.getInterceptorChain().remove(removeInterceptor); } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFInterceptorChainManipulation
Chapter 48. MariaDB Sink
Chapter 48. MariaDB Sink Send data to a MariaDB Database. This Kamelet expects a JSON as body. The mapping between the JSON fields and parameters is done by key, so if you have the following query: 'INSERT INTO accounts (username,city) VALUES (:#username,:#city)' The Kamelet needs to receive as input something like: '{ "username":"oscerd", "city":"Rome"}' 48.1. Configuration Options The following table summarizes the configuration options available for the mariadb-sink Kamelet: Property Name Description Type Default Example databaseName * Database Name The Database Name we are pointing string password * Password The password to use for accessing a secured MariaDB Database string query * Query The Query to execute against the MariaDB Database string "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName * Server Name Server Name for the data source string "localhost" username * Username The username to use for accessing a secured MariaDB Database string serverPort Server Port Server Port for the data source string 3306 Note Fields marked with an asterisk (*) are mandatory. 48.2. Dependencies At runtime, the mariadb-sink Kamelet relies upon the presence of the following dependencies: camel:jackson camel:kamelet camel:sql mvn:org.apache.commons:commons-dbcp2:2.7.0.redhat-00001 mvn:org.mariadb.jdbc:mariadb-java-client 48.3. Usage This section describes how you can use the mariadb-sink . 48.3.1. Knative Sink You can use the mariadb-sink Kamelet as a Knative sink by binding it to a Knative object. mariadb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mariadb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mariadb-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 48.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 48.3.1.2. Procedure for using the cluster CLI Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mariadb-sink-binding.yaml 48.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 48.3.2. Kafka Sink You can use the mariadb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. mariadb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mariadb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mariadb-sink properties: databaseName: "The Database Name" password: "The Password" query: "INSERT INTO accounts (username,city) VALUES (:#username,:#city)" serverName: "localhost" username: "The Username" 48.3.2.1. 
Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Also make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 48.3.2.2. Procedure for using the cluster CLI Save the mariadb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f mariadb-sink-binding.yaml 48.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mariadb-sink -p "sink.databaseName=The Database Name" -p "sink.password=The Password" -p "sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)" -p "sink.serverName=localhost" -p "sink.username=The Username" This command creates the KameletBinding in the current namespace on the cluster. 48.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/mariadb-sink.kamelet.yaml
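If you are wiring this Kamelet from a Camel route in Java instead of a KameletBinding, the following is a minimal, hypothetical sketch. It assumes the Kamelet is available to the runtime (for example through the camel-kamelet component and the installed catalog), and the connection values are placeholders; depending on the runtime you may need to URI-encode the query value or supply it through a property placeholder rather than inline.

import org.apache.camel.builder.RouteBuilder;

public class MariaDbSinkRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Build a JSON body whose keys match the :#username and :#city
        // parameters of the Kamelet's INSERT query.
        from("timer:insert?repeatCount=1")
            .setBody(constant("{ \"username\": \"oscerd\", \"city\": \"Rome\" }"))
            // Placeholder connection values; replace them with real credentials.
            .to("kamelet:mariadb-sink"
                + "?serverName=localhost"
                + "&serverPort=3306"
                + "&databaseName=my-db"
                + "&username=my-user"
                + "&password=my-password"
                + "&query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)");
    }
}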
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mariadb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mariadb-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f mariadb-sink-binding.yaml", "kamel bind channel:mychannel mariadb-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: mariadb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: mariadb-sink properties: databaseName: \"The Database Name\" password: \"The Password\" query: \"INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" serverName: \"localhost\" username: \"The Username\"", "apply -f mariadb-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic mariadb-sink -p \"sink.databaseName=The Database Name\" -p \"sink.password=The Password\" -p \"sink.query=INSERT INTO accounts (username,city) VALUES (:#username,:#city)\" -p \"sink.serverName=localhost\" -p \"sink.username=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/mariadb-sink
Chapter 22. Creating and restoring container checkpoints
Chapter 22. Creating and restoring container checkpoints Checkpoint/Restore In Userspace (CRIU) is software that enables you to set a checkpoint on a running container or an individual application and store its state to disk. You can use the saved data to restore the container after a reboot at the same point in time it was checkpointed. Warning The kernel does not support pre-copy checkpointing on AArch64. 22.1. Creating and restoring a container checkpoint locally This example is based on a Python-based web server that returns a single integer, which is incremented after each request. Prerequisites The container-tools module is installed. Procedure Create a Python-based server: Create a container with the following definition: The container is based on the Universal Base Image (UBI 8) and uses a Python-based server. Build the container: Files counter.py and Containerfile are the input for the container build process ( podman build ). The built image is stored locally and tagged with the tag counter . Start the container as root: To list all running containers, enter: Display IP address of the container: Send requests to the container: Create a checkpoint for the container: Reboot the system. Restore the container: Send requests to the container: The result now does not start at 0 again, but continues from the previous value. This way you can easily save the complete container state through a reboot. Additional resources Podman checkpoint 22.2. Reducing startup time using container restore You can use container migration to reduce the startup time of containers which require a certain time to initialize. Using a checkpoint, you can restore the container multiple times on the same host or on different hosts. This example is based on the container from the Creating and restoring a container checkpoint locally . Prerequisites The container-tools module is installed. Procedure Create a checkpoint of the container, and export the checkpoint image to a tar.gz file: Restore the container from the tar.gz file: The --name ( -n ) option specifies a new name for containers restored from the exported checkpoint. Display ID and name of each container: Display IP address of each container: Send requests to each container: Note that the result is 4 in all cases, because you are working with different containers restored from the same checkpoint. Using this approach, you can quickly start up stateful replicas of the initially checkpointed container. Additional resources Container migration with Podman on RHEL 22.3. Migrating containers among systems You can migrate running containers from one system to another without losing the state of the applications running in the container. This example is based on the container from the Creating and restoring a container checkpoint locally section tagged with counter . Important Migrating containers among systems with the podman container checkpoint and podman container restore commands is supported only when the configurations of the systems match completely, as shown below: Podman version OCI runtime (runc/crun) Network stack (CNI/Netavark) Cgroups version Kernel version CPU features You can migrate to a CPU with more features, but not to a CPU which does not have a certain feature that you are using. The low-level tool doing the checkpointing (CRIU) can check for CPU feature compatibility: https://criu.org/Cpuinfo . Prerequisites The container-tools module is installed.
The following steps are not necessary if the container is pushed to a registry, because Podman automatically downloads the container from a registry if it is not available locally. Because this example does not use a registry, you have to export the previously built and tagged container (see Creating and restoring a container checkpoint locally ). Export previously built container: Copy exported container image to the destination system ( other_host ): Import exported container on the destination system: Now the destination system of this container migration has the same container image stored in its local container storage. Procedure Start the container as root: Display IP address of the container: Send requests to the container: Create a checkpoint of the container, and export the checkpoint image to a tar.gz file: Copy the checkpoint archive to the destination host: Restore the checkpoint on the destination host ( other_host ): Send a request to the container on the destination host ( other_host ): As a result, the stateful container has been migrated from one system to another without losing its state. Additional resources Container migration with Podman on RHEL
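If you prefer to send the verification requests from a program rather than curl, the following is a minimal Java sketch using the JDK HTTP client. The address and port are taken from this chapter's example output (the server in counter.py listens on 8088) and are assumptions about your environment; this client is not part of the documented procedure.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CounterCheck {
    public static void main(String[] args) throws Exception {
        // Container address and port from the example; adjust to match the
        // output of "podman inspect ... IPAddress" on your system.
        String url = args.length > 0 ? args[0] : "http://10.88.0.247:8088/";
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        for (int i = 0; i < 3; i++) {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            // Each call should print a value one higher than the previous one,
            // even across a checkpoint/restore or a migration to another host.
            System.out.println("Counter returned: " + response.body().trim());
        }
    }
}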
[ "cat counter.py #!/usr/bin/python3 import http.server counter = 0 class handler(http.server.BaseHTTPRequestHandler): def do_GET(s): global counter s.send_response(200) s.send_header('Content-type', 'text/html') s.end_headers() s.wfile.write(b'%d\\n' % counter) counter += 1 server = http.server.HTTPServer(('', 8088), handler) server.serve_forever()", "cat Containerfile FROM registry.access.redhat.com/ubi8/ubi COPY counter.py /home/counter.py RUN useradd -ms /bin/bash counter RUN yum -y install python3 && chmod 755 /home/counter.py USER counter ENTRYPOINT /home/counter.py", "podman build . --tag counter", "podman run --name criu-test --detach counter", "podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4f82fd84d48 localhost/counter:latest 5 seconds ago Up 4 seconds ago criu-test", "podman inspect criu-test --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.247", "curl 10.88.0.247:8088 0 curl 10.88.0.247:8088 1", "podman container checkpoint criu-test", "podman container restore --keep criu-test", "curl 10.88.0.247:8080 2 curl 10.88.0.247:8080 3 curl 10.88.0.247:8080 4", "podman container checkpoint criu-test --export /tmp/chkpt.tar.gz", "podman container restore --import /tmp/chkpt.tar.gz --name counter1 podman container restore --import /tmp/chkpt.tar.gz --name counter2 podman container restore --import /tmp/chkpt.tar.gz --name counter3", "podman ps -a --format \"{{.ID}} {{.Names}}\" a8b2e50d463c counter3 faabc5c27362 counter2 2ce648af11e5 counter1", "#\\ufe0f podman inspect counter1 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.248 #\\ufe0f podman inspect counter2 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.249 #\\ufe0f podman inspect counter3 --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.250", "#\\ufe0f curl 10.88.0.248:8080 4 #\\ufe0f curl 10.88.0.249:8080 4 #\\ufe0f curl 10.88.0.250:8080 4", "podman save --output counter.tar counter", "scp counter.tar other_host :", "ssh other_host podman load --input counter.tar", "podman run --name criu-test --detach counter", "podman inspect criu-test --format \"{{.NetworkSettings.IPAddress}}\" 10.88.0.247", "curl 10.88.0.247:8080 0 curl 10.88.0.247:8080 1", "podman container checkpoint criu-test --export /tmp/chkpt.tar.gz", "scp /tmp/chkpt.tar.gz other_host :/tmp/", "podman container restore --import /tmp/chkpt.tar.gz", "*curl 10.88.0.247:8080* 2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/building_running_and_managing_containers/assembly_creating-and-restoring-container-checkpoints
Architecture
Architecture OpenShift Container Platform 4.18 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "openshift-install create ignition-configs --dir USDHOME/testconfig", "cat USDHOME/testconfig/bootstrap.ign | jq { \"ignition\": { \"version\": \"3.2.0\" }, \"passwd\": { \"users\": [ { \"name\": \"core\", \"sshAuthorizedKeys\": [ \"ssh-rsa AAAAB3NzaC1yc....\" ] } ] }, \"storage\": { \"files\": [ { \"overwrite\": false, \"path\": \"/etc/motd\", \"user\": { \"name\": \"root\" }, \"append\": [ { \"source\": \"data:text/plain;charset=utf-8;base64,VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg==\" } ], \"mode\": 420 },", "echo VGhpcyBpcyB0aGUgYm9vdHN0cmFwIG5vZGU7IGl0IHdpbGwgYmUgZGVzdHJveWVkIHdoZW4gdGhlIG1hc3RlciBpcyBmdWxseSB1cC4KClRoZSBwcmltYXJ5IHNlcnZpY2VzIGFyZSByZWxlYXNlLWltYWdlLnNlcnZpY2UgZm9sbG93ZWQgYnkgYm9vdGt1YmUuc2VydmljZS4gVG8gd2F0Y2ggdGhlaXIgc3RhdHVzLCBydW4gZS5nLgoKICBqb3VybmFsY3RsIC1iIC1mIC11IHJlbGVhc2UtaW1hZ2Uuc2VydmljZSAtdSBib290a3ViZS5zZXJ2aWNlCg== | base64 --decode", "This is the bootstrap node; it will be destroyed when the master is fully up. The primary services are release-image.service followed by bootkube.service. To watch their status, run e.g. journalctl -b -f -u release-image.service -u bootkube.service", "\"source\": \"https://api.myign.develcluster.example.com:22623/config/worker\",", "USD oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-1638c1aea398413bb918e76632f20799 False False False worker worker-2feef4f8288936489a5a832ca8efe953 False False False", "oc get machineconfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL 00-master 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-master-ssh 4.0.0-0.150.0.0-dirty 16m 00-worker 4.0.0-0.150.0.0-dirty 3.2.0 16m 00-worker-ssh 4.0.0-0.150.0.0-dirty 16m 01-master-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m 01-worker-kubelet 4.0.0-0.150.0.0-dirty 3.2.0 16m master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 3.2.0 16m worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 3.2.0 16m", "oc describe machineconfigs 01-worker-container-runtime | grep Path:", "Path: /etc/containers/registries.conf Path: /etc/containers/storage.conf Path: /etc/crio/crio.conf", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown", "oc new-project my-webhook-namespace 1", "apiVersion: v1 kind: List items: - apiVersion: rbac.authorization.k8s.io/v1 1 kind: ClusterRoleBinding metadata: name: 
auth-delegator-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:auth-delegator subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRole metadata: annotations: name: system:openshift:online:my-webhook-server rules: - apiGroups: - online.openshift.io resources: - namespacereservations 3 verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRole metadata: name: system:openshift:online:my-webhook-requester rules: - apiGroups: - admission.online.openshift.io resources: - namespacereservations 5 verbs: - create - apiVersion: rbac.authorization.k8s.io/v1 6 kind: ClusterRoleBinding metadata: name: my-webhook-server-my-webhook-namespace roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: system:openshift:online:my-webhook-server subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 7 kind: RoleBinding metadata: namespace: kube-system name: extension-server-authentication-reader-my-webhook-namespace roleRef: kind: Role apiGroup: rbac.authorization.k8s.io name: extension-apiserver-authentication-reader subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server - apiVersion: rbac.authorization.k8s.io/v1 8 kind: ClusterRole metadata: name: my-cluster-role rules: - apiGroups: - admissionregistration.k8s.io resources: - validatingwebhookconfigurations - mutatingwebhookconfigurations verbs: - get - list - watch - apiGroups: - \"\" resources: - namespaces verbs: - get - list - watch - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: my-cluster-role roleRef: kind: ClusterRole apiGroup: rbac.authorization.k8s.io name: my-cluster-role subjects: - kind: ServiceAccount namespace: my-webhook-namespace name: server", "oc auth reconcile -f rbac.yaml", "apiVersion: apps/v1 kind: DaemonSet metadata: namespace: my-webhook-namespace name: server labels: server: \"true\" spec: selector: matchLabels: server: \"true\" template: metadata: name: server labels: server: \"true\" spec: serviceAccountName: server containers: - name: my-webhook-container 1 image: <image_registry_username>/<image_path>:<tag> 2 imagePullPolicy: IfNotPresent command: - <container_commands> 3 ports: - containerPort: 8443 4 volumeMounts: - mountPath: /var/serving-cert name: serving-cert readinessProbe: httpGet: path: /healthz port: 8443 5 scheme: HTTPS volumes: - name: serving-cert secret: defaultMode: 420 secretName: server-serving-cert", "oc apply -f webhook-daemonset.yaml", "apiVersion: v1 kind: Secret metadata: namespace: my-webhook-namespace name: server-serving-cert type: kubernetes.io/tls data: tls.crt: <server_certificate> 1 tls.key: <server_key> 2", "oc apply -f webhook-secret.yaml", "apiVersion: v1 kind: List items: - apiVersion: v1 kind: ServiceAccount metadata: namespace: my-webhook-namespace name: server - apiVersion: v1 kind: Service metadata: namespace: my-webhook-namespace name: server annotations: service.beta.openshift.io/serving-cert-secret-name: server-serving-cert spec: selector: server: \"true\" ports: - port: 443 1 targetPort: 8443 2", "oc apply -f webhook-service.yaml", "apiVersion: apiextensions.k8s.io/v1beta1 kind: CustomResourceDefinition metadata: name: namespacereservations.online.openshift.io 1 spec: group: online.openshift.io 2 version: v1alpha1 3 scope: Cluster 4 names: plural: namespacereservations 5 singular: 
namespacereservation 6 kind: NamespaceReservation 7", "oc apply -f webhook-crd.yaml", "apiVersion: apiregistration.k8s.io/v1beta1 kind: APIService metadata: name: v1beta1.admission.online.openshift.io spec: caBundle: <ca_signing_certificate> 1 group: admission.online.openshift.io groupPriorityMinimum: 1000 versionPriority: 15 service: name: server namespace: my-webhook-namespace version: v1beta1", "oc apply -f webhook-api-service.yaml", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration metadata: name: namespacereservations.admission.online.openshift.io 1 webhooks: - name: namespacereservations.admission.online.openshift.io 2 clientConfig: service: 3 namespace: default name: kubernetes path: /apis/admission.online.openshift.io/v1beta1/namespacereservations 4 caBundle: <ca_signing_certificate> 5 rules: - operations: - CREATE apiGroups: - project.openshift.io apiVersions: - \"*\" resources: - projectrequests - operations: - CREATE apiGroups: - \"\" apiVersions: - \"*\" resources: - namespaces failurePolicy: Fail", "oc apply -f webhook-config.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/architecture/architecture-installation
Developing Applications with Red Hat build of Apache Camel for Quarkus
Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.8 Developing Applications with Red Hat build of Apache Camel for Quarkus
[ "<project> <properties> <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>com.redhat.quarkus.platform</quarkus.platform.group-id> <quarkus.platform.version> <!-- The latest 3.15.x version from https://maven.repository.redhat.com/ga/com/redhat/quarkus/platform/quarkus-bom --> </quarkus.platform.version> </properties> <dependencyManagement> <dependencies> <!-- The BOMs managing the dependency versions --> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> <dependency> <groupId>USD{quarkus.platform.group-id}</groupId> <artifactId>quarkus-camel-bom</artifactId> <version>USD{quarkus.platform.version}</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <!-- The extensions you chose in the project generator tool --> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-sql</artifactId> <!-- No explicit version required here and below --> </dependency> </dependencies> </project>", "import org.apache.camel.builder.RouteBuilder; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(\"timer:foo?period=1000\") .log(\"Hello World\"); } }", "import org.apache.camel.builder.RouteBuilder; import static org.apache.camel.builder.endpoint.StaticEndpointBuilders.timer; public class TimerRoute extends RouteBuilder { @Override public void configure() throws Exception { from(timer(\"foo\").period(1000)) .log(\"Hello World\"); } }", "camel.component.log.exchange-formatter = #class:org.apache.camel.support.processor.DefaultExchangeFormatter camel.component.log.exchange-formatter.show-exchange-pattern = false camel.component.log.exchange-formatter.show-body-type = false", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import org.apache.camel.quarkus.core.events.ComponentAddEvent; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public static class EventHandler { public void onComponentAdd(@Observes ComponentAddEvent event) { if (event.getComponent() instanceof LogComponent) { /* Perform some custom configuration of the component */ LogComponent logComponent = ((LogComponent) event.getComponent()); DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); logComponent.setExchangeFormatter(formatter); } } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.camel.component.log.LogComponent; import org.apache.camel.support.processor.DefaultExchangeFormatter; @ApplicationScoped public class Configurations { /** * Produces a {@link LogComponent} instance with a custom exchange formatter set-up. 
*/ @Named(\"log\") 1 LogComponent log() { DefaultExchangeFormatter formatter = new DefaultExchangeFormatter(); formatter.setShowExchangePattern(false); formatter.setShowBodyType(false); LogComponent component = new LogComponent(); component.setExchangeFormatter(formatter); return component; } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Inject; import org.apache.camel.builder.RouteBuilder; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped 1 public class TimerRoute extends RouteBuilder { @ConfigProperty(name = \"timer.period\", defaultValue = \"1000\") 2 String period; @Inject Counter counter; @Override public void configure() throws Exception { fromF(\"timer:foo?period=%s\", period) .setBody(exchange -> \"Incremented the counter: \" + counter.increment()) .to(\"log:cdi-example?showExchangePattern=false&showBodyType=false\"); } }", "import jakarta.inject.Inject; import jakarta.enterprise.context.ApplicationScoped; import java.util.stream.Collectors; import java.util.List; import org.apache.camel.CamelContext; @ApplicationScoped public class MyBean { @Inject CamelContext context; public List<String> listRouteIds() { return context.getRoutes().stream().map(Route::getId).sorted().collect(Collectors.toList()); } }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.EndpointInject; import org.apache.camel.FluentProducerTemplate; import org.apache.camel.Produce; import org.apache.camel.ProducerTemplate; @ApplicationScoped class MyBean { @EndpointInject(\"direct:myDirect1\") ProducerTemplate producerTemplate; @EndpointInject(\"direct:myDirect2\") FluentProducerTemplate fluentProducerTemplate; @EndpointInject(\"direct:myDirect3\") DirectEndpoint directEndpoint; @Produce(\"direct:myDirect4\") ProducerTemplate produceProducer; @Produce(\"direct:myDirect5\") FluentProducerTemplate produceProducerFluent; }", "import jakarta.enterprise.context.ApplicationScoped; import org.apache.camel.Produce; @ApplicationScoped class MyProduceBean { public interface ProduceInterface { String sayHello(String name); } @Produce(\"direct:myDirect6\") ProduceInterface produceInterface; void doSomething() { produceInterface.sayHello(\"Kermit\") } }", "import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import io.quarkus.runtime.annotations.RegisterForReflection; @ApplicationScoped @Named(\"myNamedBean\") @RegisterForReflection public class NamedBean { public String hello(String name) { return \"Hello \" + name + \" from the NamedBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:named\") .bean(\"myNamedBean\", \"hello\"); /* ... 
which is an equivalent of the following: */ from(\"direct:named\") .to(\"bean:myNamedBean?method=hello\"); } }", "import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.runtime.annotations.RegisterForReflection; import io.smallrye.common.annotation.Identifier; @ApplicationScoped @Identifier(\"myBeanIdentifier\") @RegisterForReflection public class MyBean { public String hello(String name) { return \"Hello \" + name + \" from MyBean\"; } }", "import org.apache.camel.builder.RouteBuilder; public class CamelRoute extends RouteBuilder { @Override public void configure() { from(\"direct:start\") .bean(\"myBeanIdentifier\", \"Camel\"); } }", "import org.apache.camel.Consume; public class Foo { @Consume(\"activemq:cheese\") public void onCheese(String name) { } }", "from(\"activemq:cheese\").bean(\"foo1234\", \"onCheese\")", "curl -s localhost:9000/q/health/live", "curl -s localhost:9000/q/health/ready", "mvn clean compile quarkus:dev", "<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>", ".to(\"micrometer:counter:org.acme.observability.greeting-provider?tags=type=events,purpose=example\")", "@Inject MeterRegistry registry;", "void countGreeting(Exchange exchange) { registry.counter(\"org.acme.observability.greeting\", \"type\", \"events\", \"purpose\", \"example\").increment(); }", "from(\"platform-http:/greeting\") .removeHeaders(\"*\") .process(this::countGreeting)", "@ApplicationScoped @Named(\"timerCounter\") public class TimerCounter { @Counted(value = \"org.acme.observability.timer-counter\", extraTags = { \"purpose\", \"example\" }) public void count() { } }", ".bean(\"timerCounter\", \"count\")", "curl -s localhost:9000/q/metrics", "curl -s localhost:9000/q/metrics | grep -i 'purpose=\"example\"'", "<dependencies> <dependency> <groupId>org.apache.camel.quarkus</groupId> <artifactId>camel-quarkus-opentelemetry</artifactId> </dependency> <dependency> <groupId>io.quarkiverse.micrometer.registry</groupId> <artifactId>quarkus-micrometer-registry-prometheus</artifactId> </dependency> </dependencies>", "We are using a property placeholder to be able to test this example in convenient way in a cloud environment quarkus.otel.exporter.otlp.traces.endpoint = http://USD{TELEMETRY_COLLECTOR_COLLECTOR_SERVICE_HOST:localhost}:4317", "docker-compose up -d", "mvn clean package java -jar target/quarkus-app/quarkus-run.jar [io.quarkus] (main) camel-quarkus-examples-... started in 1.163s. Listening on: http://0.0.0.0:8080", "mvn clean package -Pnative ./target/*-runner [io.quarkus] (main) camel-quarkus-examples-... started in 0.013s. 
Listening on: http://0.0.0.0:8080", "Charset.defaultCharset(), US-ASCII, ISO-8859-1, UTF-8, UTF-16BE, UTF-16LE, UTF-16", "quarkus.native.add-all-charsets = true", "quarkus.native.user-country=US quarkus.native.user-language=en", "quarkus.native.resources.includes = docs/*,images/* quarkus.native.resources.excludes = docs/ignored.adoc,images/ignored.png", "onException(MyException.class).handled(true); from(\"direct:route-that-could-produce-my-exception\").throw(MyException.class);", "import io.quarkus.runtime.annotations.RegisterForReflection; @RegisterForReflection class MyClassAccessedReflectively { } @RegisterForReflection( targets = { org.third-party.Class1.class, org.third-party.Class2.class } ) class ReflectionRegistrations { }", "quarkus.camel.native.reflection.include-patterns = org.apache.commons.lang3.tuple.* quarkus.camel.native.reflection.exclude-patterns = org.apache.commons.lang3.tuple.*Triple", "quarkus.index-dependency.commons-lang3.group-id = org.apache.commons quarkus.index-dependency.commons-lang3.artifact-id = commons-lang3", "Client side SSL quarkus.cxf.client.hello.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/hello quarkus.cxf.client.hello.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService 1 quarkus.cxf.client.hello.trust-store-type = pkcs12 2 quarkus.cxf.client.hello.trust-store = client-truststore.pkcs12 quarkus.cxf.client.hello.trust-store-password = client-truststore-password", "Server side SSL quarkus.tls.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.key-store.p12.password = localhost-keystore-password quarkus.tls.key-store.p12.alias = localhost quarkus.tls.key-store.p12.alias-password = localhost-keystore-password", "Server keystore for Simple TLS quarkus.tls.localhost-pkcs12.key-store.p12.path = localhost-keystore.pkcs12 quarkus.tls.localhost-pkcs12.key-store.p12.password = localhost-keystore-password quarkus.tls.localhost-pkcs12.key-store.p12.alias = localhost quarkus.tls.localhost-pkcs12.key-store.p12.alias-password = localhost-keystore-password Server truststore for Mutual TLS quarkus.tls.localhost-pkcs12.trust-store.p12.path = localhost-truststore.pkcs12 quarkus.tls.localhost-pkcs12.trust-store.p12.password = localhost-truststore-password Select localhost-pkcs12 as the TLS configuration for the HTTP server quarkus.http.tls-configuration-name = localhost-pkcs12 Do not allow any clients which do not prove their indentity through an SSL certificate quarkus.http.ssl.client-auth = required CXF service quarkus.cxf.endpoint.\"/mTls\".implementor = io.quarkiverse.cxf.it.auth.mtls.MTlsHelloServiceImpl CXF client with a properly set certificate for mTLS quarkus.cxf.client.mTls.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/mTls quarkus.cxf.client.mTls.service-interface = io.quarkiverse.cxf.it.security.policy.HelloService quarkus.cxf.client.mTls.key-store = target/classes/client-keystore.pkcs12 quarkus.cxf.client.mTls.key-store-type = pkcs12 quarkus.cxf.client.mTls.key-store-password = client-keystore-password quarkus.cxf.client.mTls.key-password = client-keystore-password quarkus.cxf.client.mTls.trust-store = target/classes/client-truststore.pkcs12 quarkus.cxf.client.mTls.trust-store-type = pkcs12 quarkus.cxf.client.mTls.trust-store-password = client-truststore-password Include the keystores in the native executable quarkus.native.resources.includes = *.pkcs12,*.jks", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"HttpsSecurityServicePolicy\" 
xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:TransportBinding> <wsp:Policy> <sp:TransportToken> <wsp:Policy> <sp:HttpsToken RequireClientCertificate=\"false\" /> </wsp:Policy> </sp:TransportToken> <sp:IncludeTimestamp /> <sp:AlgorithmSuite> <wsp:Policy> <sp:Basic128 /> </wsp:Policy> </sp:AlgorithmSuite> </wsp:Policy> </sp:TransportBinding> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", "package io.quarkiverse.cxf.it.security.policy; import jakarta.jws.WebMethod; import jakarta.jws.WebService; import org.apache.cxf.annotations.Policy; /** * A service implementation with a transport policy set */ @WebService(serviceName = \"HttpsPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"https-policy.xml\") public interface HttpsPolicyHelloService extends AbstractHelloService { @WebMethod @Override public String hello(String text); }", "ERROR [org.apa.cxf.ws.pol.PolicyVerificationInInterceptor] Inbound policy verification failed: These policy alternatives can not be satisfied: {http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702}TransportBinding: TLS is not enabled", "quarkus.cxf.client.basicAuth.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuth.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth quarkus.cxf.client.basicAuth.username = bob quarkus.cxf.client.basicAuth.password = bob234", "quarkus.cxf.client.basicAuthSecureWsdl.wsdl = http://localhost:USD{quarkus.http.test-port}/soap/basicAuth?wsdl quarkus.cxf.client.basicAuthSecureWsdl.client-endpoint-url = http://localhost:USD{quarkus.http.test-port}/soap/basicAuthSecureWsdl quarkus.cxf.client.basicAuthSecureWsdl.username = bob quarkus.cxf.client.basicAuthSecureWsdl.password = USD{client-server.bob.password} quarkus.cxf.client.basicAuthSecureWsdl.secure-wsdl-access = true", "quarkus.http.auth.basic = true quarkus.security.users.embedded.enabled = true quarkus.security.users.embedded.plain-text = true quarkus.security.users.embedded.users.alice = alice123 quarkus.security.users.embedded.roles.alice = admin quarkus.security.users.embedded.users.bob = bob234 quarkus.security.users.embedded.roles.bob = app-user", "package io.quarkiverse.cxf.it.auth.basic; import jakarta.annotation.security.RolesAllowed; import jakarta.jws.WebService; import io.quarkiverse.cxf.it.HelloService; @WebService(serviceName = \"HelloService\", targetNamespace = HelloService.NS) @RolesAllowed(\"app-user\") public class BasicAuthHelloServiceImpl implements HelloService { @Override public String hello(String person) { return \"Hello \" + person + \"!\"; } }", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <wsp:Policy wsp:Id=\"UsernameTokenSecurityServicePolicy\" xmlns:wsp=\"http://schemas.xmlsoap.org/ws/2004/09/policy\" xmlns:sp=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702\" xmlns:sp13=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200802\" xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <wsp:ExactlyOne> <wsp:All> <sp:SupportingTokens> <wsp:Policy> <sp:UsernameToken sp:IncludeToken=\"http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient\"> <wsp:Policy> <sp:WssUsernameToken11 /> <sp13:Created /> <sp13:Nonce /> </wsp:Policy> </sp:UsernameToken> </wsp:Policy> </sp:SupportingTokens> </wsp:All> </wsp:ExactlyOne> </wsp:Policy>", 
"@WebService(serviceName = \"UsernameTokenPolicyHelloService\") @Policy(placement = Policy.Placement.BINDING, uri = \"username-token-policy.xml\") public interface UsernameTokenPolicyHelloService extends AbstractHelloService { }", "A service with a UsernameToken policy assertion quarkus.cxf.endpoint.\"/helloUsernameToken\".implementor = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloServiceImpl quarkus.cxf.endpoint.\"/helloUsernameToken\".security.callback-handler = #usernameTokenPasswordCallback These properties are used in UsernameTokenPasswordCallback and in the configuration of the helloUsernameToken below wss.user = cxf-user wss.password = secret A client with a UsernameToken policy assertion quarkus.cxf.client.helloUsernameToken.client-endpoint-url = https://localhost:USD{quarkus.http.test-ssl-port}/services/helloUsernameToken quarkus.cxf.client.helloUsernameToken.service-interface = io.quarkiverse.cxf.it.security.policy.UsernameTokenPolicyHelloService quarkus.cxf.client.helloUsernameToken.security.username = USD{wss.user} quarkus.cxf.client.helloUsernameToken.security.password = USD{wss.password}", "package io.quarkiverse.cxf.it.security.policy; import java.io.IOException; import javax.security.auth.callback.Callback; import javax.security.auth.callback.CallbackHandler; import javax.security.auth.callback.UnsupportedCallbackException; import jakarta.enterprise.context.ApplicationScoped; import jakarta.inject.Named; import org.apache.wss4j.common.ext.WSPasswordCallback; import org.eclipse.microprofile.config.inject.ConfigProperty; @ApplicationScoped @Named(\"usernameTokenPasswordCallback\") /* We refer to this bean by this name from application.properties */ public class UsernameTokenPasswordCallback implements CallbackHandler { /* These two configuration properties are set in application.properties */ @ConfigProperty(name = \"wss.password\") String password; @ConfigProperty(name = \"wss.user\") String user; @Override public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException { if (callbacks.length < 1) { throw new IllegalStateException(\"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. Got array of length \" + callbacks.length); } if (!(callbacks[0] instanceof WSPasswordCallback)) { throw new IllegalStateException( \"Expected a \" + WSPasswordCallback.class.getName() + \" at possition 0 of callbacks. 
Got an instance of \" + callbacks[0].getClass().getName() + \" at possition 0\"); } final WSPasswordCallback pc = (WSPasswordCallback) callbacks[0]; if (user.equals(pc.getIdentifier())) { pc.setPassword(password); } else { throw new IllegalStateException(\"Unexpected user \" + user); } } }", "package io.quarkiverse.cxf.it.security.policy; import org.assertj.core.api.Assertions; import org.junit.jupiter.api.Test; import io.quarkiverse.cxf.annotation.CXFClient; import io.quarkus.test.junit.QuarkusTest; @QuarkusTest public class UsernameTokenTest { @CXFClient(\"helloUsernameToken\") UsernameTokenPolicyHelloService helloUsernameToken; @Test void helloUsernameToken() { Assertions.assertThat(helloUsernameToken.hello(\"CXF\")).isEqualTo(\"Hello CXF from UsernameToken!\"); } }", "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\"> <soap:Header> <wsse:Security xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\" soap:mustUnderstand=\"1\"> <wsse:UsernameToken xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id=\"UsernameToken-bac4f255-147e-42a4-aeec-e0a3f5cd3587\"> <wsse:Username>cxf-user</wsse:Username> <wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">secret</wsse:Password> <wsse:Nonce EncodingType=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-soap-message-security-1.0#Base64Binary\">3uX15dZT08jRWFWxyWmfhg==</wsse:Nonce> <wsu:Created>2024-10-02T17:32:10.497Z</wsu:Created> </wsse:UsernameToken> </wsse:Security> </soap:Header> <soap:Body> <ns2:hello xmlns:ns2=\"http://policy.security.it.cxf.quarkiverse.io/\"> <arg0>CXF</arg0> </ns2:hello> </soap:Body> </soap:Envelope>", "export USDCAMEL_VAULT_AWS_ACCESS_KEY=accessKey export USDCAMEL_VAULT_AWS_SECRET_KEY=secretKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.accessKey = accessKey camel.vault.aws.secretKey = secretKey camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.defaultCredentialsProvider = true camel.vault.aws.region = region", "export USDCAMEL_VAULT_AWS_USE_PROFILE_CREDENTIALS_PROVIDER=true export USDCAMEL_VAULT_AWS_PROFILE_NAME=test-account export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.profileCredentialsProvider = true camel.vault.aws.profileName = test-account camel.vault.aws.region = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{aws:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{aws:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_GCP_SERVICE_ACCOUNT_KEY=file:////path/to/service.accountkey export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.serviceAccountKey = accessKey camel.vault.gcp.projectId = secretKey", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true 
camel.vault.aws.projectId = region", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{gcp:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{gcp:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{azure:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{azure:database/username:admin}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_HASHICORP_TOKEN=token export USDCAMEL_VAULT_HASHICORP_HOST=host export USDCAMEL_VAULT_HASHICORP_PORT=port export USDCAMEL_VAULT_HASHICORP_SCHEME=http/https", "camel.vault.hashicorp.token = token camel.vault.hashicorp.host = host camel.vault.hashicorp.port = port camel.vault.hashicorp.scheme = scheme", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route:default}}\"/> </route> </camelContext>", "{ \"username\": \"admin\", \"password\": \"password123\", \"engine\": \"postgres\", \"host\": \"127.0.0.1\", \"port\": \"3128\", \"dbname\": \"db\" }", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:secret:route@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <to uri=\"{{hashicorp:route:default@2}}\"/> </route> </camelContext>", "<camelContext> <route> <from uri=\"direct:start\"/> <log message=\"Username is {{hashicorp:secret:database/username:admin@2}}\"/> </route> </camelContext>", "export USDCAMEL_VAULT_AWS_USE_DEFAULT_CREDENTIALS_PROVIDER=accessKey export USDCAMEL_VAULT_AWS_REGION=region", "camel.vault.aws.useDefaultCredentialProvider = true camel.vault.aws.region = 
region", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true", "{ \"source\": [\"aws.secretsmanager\"], \"detail-type\": [\"AWS API Call via CloudTrail\"], \"detail\": { \"eventSource\": [\"secretsmanager.amazonaws.com\"] } }", "{ \"Policy\": \"{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Id\\\":\\\"<queue_arn>/SQSDefaultPolicy\\\",\\\"Statement\\\":[{\\\"Sid\\\": \\\"EventsToMyQueue\\\", \\\"Effect\\\": \\\"Allow\\\", \\\"Principal\\\": {\\\"Service\\\": \\\"events.amazonaws.com\\\"}, \\\"Action\\\": \\\"sqs:SendMessage\\\", \\\"Resource\\\": \\\"<queue_arn>\\\", \\\"Condition\\\": {\\\"ArnEquals\\\": {\\\"aws:SourceArn\\\": \\\"<eventbridge_rule_arn>\\\"}}}]}\" }", "aws sqs set-queue-attributes --queue-url <queue_url> --attributes file://policy.json", "camel.vault.aws.refreshEnabled=true camel.vault.aws.refreshPeriod=60000 camel.vault.aws.secrets=Secret camel.main.context-reload-enabled = true camel.vault.aws.useSqsNotification=true camel.vault.aws.sqsQueueUrl=<queue_url>", "export USDCAMEL_VAULT_GCP_USE_DEFAULT_INSTANCE=true export USDCAMEL_VAULT_GCP_PROJECT_ID=projectId", "camel.vault.gcp.useDefaultInstance = true camel.vault.aws.projectId = projectId", "camel.vault.gcp.projectId= projectId camel.vault.gcp.refreshEnabled=true camel.vault.gcp.refreshPeriod=60000 camel.vault.gcp.secrets=hello* camel.vault.gcp.subscriptionName=subscriptionName camel.main.context-reload-enabled = true", "export USDCAMEL_VAULT_AZURE_TENANT_ID=tenantId export USDCAMEL_VAULT_AZURE_CLIENT_ID=clientId export USDCAMEL_VAULT_AZURE_CLIENT_SECRET=clientSecret export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.tenantId = accessKey camel.vault.azure.clientId = clientId camel.vault.azure.clientSecret = clientSecret camel.vault.azure.vaultName = vaultName", "export USDCAMEL_VAULT_AZURE_IDENTITY_ENABLED=true export USDCAMEL_VAULT_AZURE_VAULT_NAME=vaultName", "camel.vault.azure.azureIdentityEnabled = true camel.vault.azure.vaultName = vaultName", "camel.vault.azure.refreshEnabled=true camel.vault.azure.refreshPeriod=60000 camel.vault.azure.secrets=Secret camel.vault.azure.eventhubConnectionString=eventhub_conn_string camel.vault.azure.blobAccountName=blob_account_name camel.vault.azure.blobContainerName=blob_container_name camel.vault.azure.blobAccessKey=blob_access_key camel.main.context-reload-enabled = true" ]
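The vault examples above use the XML DSL. A rough Java DSL equivalent is sketched below; it assumes the AWS Secrets Manager vault settings shown earlier (the camel.vault.aws.* properties) and the corresponding Camel Quarkus extension are configured, and that a secret named database with a username field exists in your account.

import org.apache.camel.builder.RouteBuilder;

public class SecretLookupRoute extends RouteBuilder {
    @Override
    public void configure() {
        // {{aws:database/username:admin}} is resolved when the route is built:
        // the "username" field of the "database" secret is looked up, and
        // "admin" after the second colon acts as the default value.
        from("direct:start")
            .log("Username is {{aws:database/username:admin}}");
    }
}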
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html-single/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/%7BLinkCEQReference%7Dextensions-telegram
Chapter 7. Securing connections by using OCSP
Chapter 7. Securing connections by using OCSP Online Certificate Status Protocol (OCSP) is a protocol that web browsers and web servers use to check the revocation status of a certificate when they communicate over a secured connection. Rather than downloading a full list of revoked certificates, the client queries an OCSP responder for the status of a single certificate and uses the response to decide whether to trust the secured connection. 7.1. Online Certificate Status Protocol When a web browser and a web server communicate over a secured connection, the server presents a set of credentials in the form of a certificate. The browser then validates the certificate and sends a request for certificate status information. The OCSP responder replies with a certificate status of good, revoked, or unknown. The certificate contains the following types of information: Syntax for communication Control information such as start time, end time, and address information to access an Online Certificate Status Protocol (OCSP) responder. The web server uses an OCSP responder to check the certificate status. You can configure the web server to use the OCSP responder that is listed in the certificate or another OCSP responder. OCSP allows a grace period for expired certificates, which allows access to a server for a limited time before renewing the certificate. OCSP overcomes limitations of the older Certificate Revocation List (CRL) method. Additional resources Red Hat Certificate System Planning, Installation, and Deployment Guide . 7.2. Configuring the Apache HTTP Server for SSL connections You can configure the Apache HTTP Server to support SSL connections by installing the mod_ssl package and specifying configuration settings in the ssl.conf file. Prerequisites You have generated an SSL certificate and private key. You know the location of the SSL certificate and private key file. You have obtained the Common Name (CN) that is associated with the SSL certificate. Procedure To install mod_ssl , enter the following command: To specify SSL configuration settings: Open the JBCS_HOME /httpd/conf.d/ssl.conf file. Enter details for the ServerName , SSLCertificateFile , and SSLCertificateKeyFile . For example: Note The ServerName must match the Common Name (CN) that is associated with the SSL certificate. If the ServerName does not match the CN, client browsers display domain name mismatch errors. The SSLCertificateFile specifies the path to the SSL certificate file. The SSLCertificateKeyFile specifies the path to the private key file that is associated with the SSL certificate. Verify that the Listen directive matches the hostname or IP address for the httpd service for your deployment. To restart the Apache HTTP Server, enter the following command: 7.3. Using OCSP with the Apache HTTP Server You can use the Online Certificate Status Protocol (OCSP) for secure connections with the Apache HTTP Server. Prerequisites You have configured Apache HTTP Server for SSL connections . Procedure Configure a certificate authority. Note Ensure that your CA can issue OCSP certificates. The CA must be able to append the following attributes to the certificate: In the preceding example, replace HOST and PORT with the details of the OCSP responder that you will configure. Configure an OCSP responder. Additional resources Managing certificates and certificate authorities . Configuring OCSP responders . 7.4. Configuring the Apache HTTP Server to validate OCSP certificates You can configure the Apache HTTP Server to validate OCSP certificates by defining OCSP settings in the ssl.conf file.
Prerequisites You have configured a Certificate Authority (CA) . You have configured an OCSP Responder . Procedure Open the JBCS_HOME /httpd/conf.d/ssl.conf file. Specify the appropriate OCSP configuration details for your deployment. For example: Note The preceding example shows how to enable OCSP validation of client certificates. In the preceding example, replace <HOST> and <PORT> with the IP address and port of the default OCSP Responder. 7.5. Verifying the OCSP configuration for the Apache HTTP Server You can use the OpenSSL command-line tool to verify the OCSP configuration for the Apache HTTP Server. Procedure On the command line, enter the openssl command in the following format: In the preceding command, ensure that you specify the following details: Use the -issuer option to specify the CA certificate. Use the -cert option to specify the client certificate that you want to verify. Use the -url option to specify the URL of the OCSP responder that validates the certificate. Use the -CA option to specify the CA certificate for verifying the certificate of the Apache HTTP Server. Use the -VAfile option to specify the OCSP responder certificate.
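The openssl command above verifies the OCSP configuration from the shell. If you also want to exercise OCSP validation from a Java client, the following is a minimal sketch using the JDK's PKIXRevocationChecker, which by default queries the OCSP responder named in the certificate's Authority Information Access extension. The file names, truststore password, and single-certificate path are assumptions for illustration; this is not part of the documented procedure.

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.CertPath;
import java.security.cert.CertPathValidator;
import java.security.cert.CertificateFactory;
import java.security.cert.PKIXParameters;
import java.security.cert.PKIXRevocationChecker;
import java.security.cert.X509Certificate;
import java.util.List;

public class OcspCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");

        // Placeholder paths: the server certificate to check and a truststore
        // containing the issuing CA certificate.
        X509Certificate serverCert;
        try (FileInputStream in = new FileInputStream("server.crt")) {
            serverCert = (X509Certificate) cf.generateCertificate(in);
        }
        KeyStore trustStore = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream("truststore.p12")) {
            trustStore.load(in, "changeit".toCharArray());
        }

        CertPath path = cf.generateCertPath(List.of(serverCert));
        CertPathValidator validator = CertPathValidator.getInstance("PKIX");

        // The PKIX revocation checker contacts the OCSP responder listed in
        // the certificate's Authority Information Access extension.
        PKIXRevocationChecker revocationChecker =
                (PKIXRevocationChecker) validator.getRevocationChecker();

        PKIXParameters params = new PKIXParameters(trustStore);
        // The explicitly added checker is used for revocation checking
        // instead of the default mechanism.
        params.addCertPathChecker(revocationChecker);
        params.setRevocationEnabled(false);

        // Throws CertPathValidatorException if validation or the OCSP check fails.
        validator.validate(path, params);
        System.out.println("Certificate status verified via OCSP");
    }
}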
[ "yum install jbcs-httpd24-mod_ssl", "<VirtualHost _default_:443> ServerName www.example.com:443 SSLCertificateFile /opt/rh/jbcs-httpd24/root/etc/pki/tls/certs/localhost.crt SSLCertificateKeyFile /opt/rh/jbcs-httpd24/root/etc/pki/tls/private/localhost.key", "service jbcs-httpd24-httpd restart", "[ usr_cert ] authorityInfoAccess=OCSP;URI:http:// <HOST> : <PORT> [ v3_OCSP ] basicConstraints = CA:FALSE keyUsage = nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = OCSP Signing", "Require valid client certificates (mutual auth) SSLVerifyClient require SSLVerifyDepth 3 Enable OCSP SSLOCSPEnable on SSLOCSPDefaultResponder http:// <HOST> : <PORT> SSLOCSPOverrideResponder on", "openssl ocsp -issuer cacert.crt -cert client.cert -url http:// HOST : PORT -CA ocsp_ca.cert -VAfile ocsp.cert" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/apache_http_server_installation_guide/ocsp
29.5. Analyzing the Data
29.5. Analyzing the Data Periodically, the OProfile daemon, oprofiled , collects the samples and writes them to the /var/lib/oprofile/samples/ directory. Before reading the data, make sure all data has been written to this directory by executing the following command as root: Each sample file name is based on the name of the executable. For example, the samples for the default event on a Pentium III processor for /bin/bash becomes: The following tools are available to profile the sample data once it has been collected: opreport opannotate Use these tools, along with the binaries profiled, to generate reports that can be further analyzed. Warning The executable being profiled must be used with these tools to analyze the data. If it must change after the data is collected, back up the executable used to create the samples as well as the sample files. Please note that the sample file and the binary have to agree. Making a backup is not going to work if they do not match. oparchive can be used to address this problem. Samples for each executable are written to a single sample file. Samples from each dynamically linked library are also written to a single sample file. While OProfile is running, if the executable being monitored changes and a sample file for the executable exists, the existing sample file is automatically deleted. Thus, if the existing sample file is needed, it must be backed up, along with the executable used to create it before replacing the executable with a new version. The OProfile analysis tools use the executable file that created the samples during analysis. If the executable changes the analysis tools will be unable to analyze the associated samples. See Section 29.4, "Saving Data" for details on how to back up the sample file. 29.5.1. Using opreport The opreport tool provides an overview of all the executables being profiled. The following is part of a sample output: Each executable is listed on its own line. The first column is the number of samples recorded for the executable. The second column is the percentage of samples relative to the total number of samples. The third column is the name of the executable. See the opreport man page for a list of available command-line options, such as the -r option used to sort the output from the executable with the smallest number of samples to the one with the largest number of samples.
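As a concrete illustration of the backup advice above, oparchive can copy the profiled binaries and their sample files into a single directory for later analysis, and opreport -r produces the reverse-sorted report mentioned at the end of this section. The output directory below is only an example path.
# Archive the sample files together with the executables that produced them
oparchive -o /var/tmp/oprofile-archive
# List profiled executables sorted from the fewest samples to the most
opreport -r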
[ "~]# opcontrol --dump", "\\{root\\}/bin/bash/\\{dep\\}/\\{root\\}/bin/bash/CPU_CLK_UNHALTED.100000", "Profiling through timer interrupt TIMER:0| samples| %| ------------------ 25926 97.5212 no-vmlinux 359 1.3504 pi 65 0.2445 Xorg 62 0.2332 libvte.so.4.4.0 56 0.2106 libc-2.3.4.so 34 0.1279 libglib-2.0.so.0.400.7 19 0.0715 libXft.so.2.1.2 17 0.0639 bash 8 0.0301 ld-2.3.4.so 8 0.0301 libgdk-x11-2.0.so.0.400.13 6 0.0226 libgobject-2.0.so.0.400.7 5 0.0188 oprofiled 4 0.0150 libpthread-2.3.4.so 4 0.0150 libgtk-x11-2.0.so.0.400.13 3 0.0113 libXrender.so.1.2.2 3 0.0113 du 1 0.0038 libcrypto.so.0.9.7a 1 0.0038 libpam.so.0.77 1 0.0038 libtermcap.so.2.0.8 1 0.0038 libX11.so.6.2 1 0.0038 libgthread-2.0.so.0.400.7 1 0.0038 libwnck-1.so.4.9.0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-oprofile-analyzing-data
Chapter 2. Getting Started with NetworkManager
Chapter 2. Getting Started with NetworkManager 2.1. Overview of NetworkManager In Red Hat Enterprise Linux 7, the default networking service is provided by NetworkManager , which is a dynamic network control and configuration daemon that keeps network devices and connections up and active when they are available. The traditional ifcfg type configuration files are still supported. See Section 2.6, "Using NetworkManager with Network Scripts" for more information. 2.1.1. Benefits of Using NetworkManager The main benefits of using NetworkManager are: Making network management easier: NetworkManager ensures that network connectivity works. When it detects that there is no network configuration in a system but there are network devices, NetworkManager creates temporary connections to provide connectivity. Providing easy connection setup for the user: NetworkManager offers management through different tools: a GUI, nmtui, and nmcli. See Section 2.5, "NetworkManager Tools" . Supporting configuration flexibility. For example, when you configure a Wi-Fi interface, NetworkManager scans and shows the available Wi-Fi networks. You can select an interface, and NetworkManager displays the required credentials, providing automatic connection after the reboot process. NetworkManager can configure network aliases, IP addresses, static routes, DNS information, and VPN connections, as well as many connection-specific parameters. You can modify the configuration options to reflect your needs. Offering an API through D-Bus, which allows applications to query and control network configuration and state. In this way, applications can check or configure networking through D-Bus. For example, the web console interface, which monitors and configures servers through a web browser, uses the NetworkManager D-Bus interface to configure networking. Maintaining the state of devices after the reboot process and taking over interfaces which are set into managed mode during restart. Handling devices which are not explicitly set unmanaged but controlled manually by the user or another network service.
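As an example of the tools mentioned above, the following nmcli commands show the kind of day-to-day management NetworkManager provides. This is only a sketch; the connection profile name my-office is a placeholder.
# Show the status of all network devices known to NetworkManager
nmcli device status
# List the saved connection profiles
nmcli connection show
# Activate a saved connection profile
nmcli connection up my-office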
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/getting_started_with_networkmanager
Chapter 2. Authentication and Security
Chapter 2. Authentication and Security 2.1. TLS/SSL Certification The Red Hat Virtualization API requires Hypertext Transfer Protocol Secure (HTTPS; see RFC 2818: HTTP Over TLS) for secure interaction with client software, such as the SDK and CLI components. This involves obtaining the CA certificate used by the server and importing it into the certificate store of your client. 2.1.1. Obtaining the CA Certificate You can obtain the CA certificate from the Red Hat Virtualization Manager and transfer it to the client machine using one of these methods: Method 1 The preferred method for obtaining the CA certificate is to use the openssl s_client command line tool to perform a real TLS handshake with the server, and then extract the certificates that it presents. Run the openssl s_client command as in the following example: USD openssl s_client \ -connect myengine.example.com:443 \ -showcerts \ < /dev/null Example output CONNECTED(00000003) depth=1 C = US, O = Example Inc., CN = myengine.example.com.23416 verify error:num=19:self signed certificate in certificate chain --- Certificate chain 0 s:/C=US/O=Example Inc./CN=myengine.example.com i:/C=US/O=Example Inc./CN=myengine.example.com.23416 -----BEGIN CERTIFICATE----- MIIEaTCCA1GgAwIBAgICEAQwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMx FTATBgNVBAoTDEV4YW1wbGUgSW5jLjEjMCEGA1UEAxMaZW5naW5lNDEuZXhhbXBs SVlJe7e5FTEtHJGTAeWWM6dGbsFhip5VXM0gfqg= -----END CERTIFICATE----- 1 s:/C=US/O=Example Inc./CN=myengine.example.com.23416 i:/C=US/O=Example Inc./CN=myengine.example.com.23416 -----BEGIN CERTIFICATE----- MIIDxjCCAq6gAwIBAgICEAAwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMx FTATBgNVBAoTDEV4YW1wbGUgSW5jLjEjMCEGA1UEAxMaZW5naW5lNDEuZXhhbXBs Pkyg1rQHR6ebGQ== -----END CERTIFICATE----- The text between the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines shows the certificates presented by the server. The first certificate is the certificate of the server itself. The second certificate is the certificate of the CA. Copy the CA certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- lines, to the ca.crt file as in the following example: Important This is the most reliable method to obtain the CA certificate used by the server. The rest of the methods described here will work in most cases, but they will not obtain the correct CA certificate if the certificate has been manually replaced by the server administrator. Method 2 If you cannot use openssl s_client to obtain the certificate, you can use a command line tool, for example curl or wget , to download the CA certificate from the Red Hat Virtualization Manager. curl and wget are available on multiple platforms. If using curl : USD curl \ --output ca.crt \ 'http://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' If using wget : USD wget \ --output-document ca.crt \ 'http://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' Method 3 Use a web browser to navigate to the certificate located at `https://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA`. Depending on the chosen browser, the certificate is downloaded or imported into the browser's keystore: If the browser downloads the certificate, save the file as ca.crt . If the browser imports the certificate, export it using the browser's certificate management options and save it as ca.crt .
Method 4 Log in to the Red Hat Virtualization Manager, export the certificate from the truststore, and copy it to your client machine. Log in to the Red Hat Virtualization Manager machine as root . Export the certificate from the truststore using the Java keytool management utility: # keytool \ -keystore /etc/pki/ovirt-engine/.truststore \ -storepass mypass \ -exportcert \ -alias cacert \ -rfc \ -file ca.crt This creates a certificate file called ca.crt . Copy the certificate to the client machine using the scp command: USD scp ca.crt [email protected]:/home/myuser/. Each of these methods results in a certificate file named ca.crt on your client machine. You must then import this file into the certificate store of the client. 2.1.2. Importing a Certificate to a Client Importing a certificate to a client relies on how the client stores and interprets certificates. See your client documentation for more information on importing a certificate. 2.2. Authentication Any user with a Red Hat Virtualization Manager account has access to the API. All requests must be authenticated using either OAuth or basic authentication, as described below. 2.2.1. OAuth Authentication Since version 4.0 of Red Hat Virtualization the preferred authentication mechanism is OAuth 2.0 , as described in RFC 6749 . OAuth is a sophisticated protocol, with several mechanisms for obtaining authorization and access tokens. For use with the Red Hat Virtualization API, the only supported one is the Resource Owner Password Credentials Grant , as described in RFC 6749 . You must first obtain a token , sending the user name and password to the Red Hat Virtualization Manager single sign-on service: The request body must contain the grant_type , scope , username , and password parameters: Table 2.1. OAuth token request parameters Name Value grant_type password scope ovirt-app-api username admin@internal password mypassword These parameters must be URL-encoded . For example, the @ character in the user name needs to be encoded as %40 . The resulting request body will be something like this: Important The scope parameter is described as optional in the OAuth RFC, but when using it with the Red Hat Virtualization API it is mandatory, and its value must be ovirt-app-api . If the user name and password are valid, the Red Hat Virtualization Manager single sign-on service will respond with a JSON document similar to this one: { "access_token": "fqbR1ftzh8wBCviLxJcYuV5oSDI=", "token_type": "bearer", "scope": "...", ... } For API authentication purposes, the only relevant name/value pair is the access_token . Do not manipulate this in any way; use it exactly as provided by the SSO service. Once the token has been obtained, it can be used to perform requests to the API by including it in the HTTP Authorization header, and using the Bearer scheme. For example, to get the list of virtual machines, send a request like this: The token can be used multiple times, for multiple requests, but it will eventually expire. When it expires, the server will reject the request with the 401 HTTP response code: When this happens, a new token is needed, as the Red Hat Virtualization Manager single sign-on service does not currently support refreshing tokens. A new token can be requested using the same method described above. 2.2.2. Basic Authentication Important Basic authentication is supported only for backwards compatibility; it is deprecated since version 4.0 of Red Hat Virtualization, and will be removed in the future. 
Each request uses HTTP Basic Authentication [1] to encode the credentials. If a request does not include an appropriate Authorization header, the server sends a 401 Authorization Required response: Requests are issued with an Authorization header for the specified realm. Encode an appropriate Red Hat Virtualization Manager domain and user in the supplied credentials with the username@domain:password convention. The following table shows the process for encoding credentials in Base64 . Table 2.2. Encoding credentials for API access Item Value User name admin Domain internal Password mypassword Unencoded credentials admin@internal:mypassword Base64 encoded credentials YWRtaW5AaW50ZXJuYWw6bXlwYXNzd29yZA== Provide the Base64-encoded credentials as shown: Important Basic authentication involves potentially sensitive information, such as passwords, sent as plain text. The API requires Hypertext Transfer Protocol Secure (HTTPS) for transport-level encryption of plain-text requests. Important Some Base64 libraries break the result into multiple lines and terminate each line with a newline character. This breaks the header and causes a faulty request. The Authorization header requires the encoded credentials on a single line within the header. 2.2.3. Authentication Sessions The API also provides authentication session support. Send an initial request with authentication details, then send all subsequent requests using a session cookie to authenticate. 2.2.3.1. Requesting an Authenticated Session Send a request with the Authorization and Prefer: persistent-auth headers: This returns a response with the following header: Take note of the JSESSIONID= value. In this example the value is 5dQja5ubr4yvI2MM2z+LZxrK . Send all subsequent requests with the Prefer: persistent-auth and Cookie headers with the JSESSIONID= value. The Authorization header is no longer needed when using an authenticated session. When the session is no longer required, perform a request to the server without the Prefer: persistent-auth header. [1] Basic Authentication is described in RFC 2617: HTTP Authentication: Basic and Digest Access Authentication .
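The OAuth flow described above can be exercised end to end with curl. The following sketch reuses the illustrative host name, credentials, and token value from this chapter and assumes the ca.crt file obtained earlier; replace them with the values for your environment.
# Request an OAuth token from the single sign-on service
curl --cacert ca.crt \
     --data 'grant_type=password&scope=ovirt-app-api&username=admin%40internal&password=mypassword' \
     https://myengine.example.com/ovirt-engine/sso/oauth/token
# Use the returned access_token to list virtual machines
curl --cacert ca.crt \
     --header 'Accept: application/xml' \
     --header 'Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI=' \
     https://myengine.example.com/ovirt-engine/api/vms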
[ "openssl s_client -connect myengine.example.com:443 -showcerts < /dev/null", "CONNECTED(00000003) depth=1 C = US, O = Example Inc., CN = myengine.example.com.23416 verify error:num=19:self signed certificate in certificate chain --- Certificate chain 0 s:/C=US/O=Example Inc./CN=myengine.example.com i:/C=US/O=Example Inc./CN=myengine.example.com.23416 -----BEGIN CERTIFICATE----- MIIEaTCCA1GgAwIBAgICEAQwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMx FTATBgNVBAoTDEV4YW1wbGUgSW5jLjEjMCEGA1UEAxMaZW5naW5lNDEuZXhhbXBs SVlJe7e5FTEtHJGTAeWWM6dGbsFhip5VXM0gfqg= -----END CERTIFICATE----- 1 s:/C=US/O=Example Inc./CN=myengine.example.com.23416 i:/C=US/O=Example Inc./CN=myengine.example.com.23416 -----BEGIN CERTIFICATE----- MIIDxjCCAq6gAwIBAgICEAAwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMx FTATBgNVBAoTDEV4YW1wbGUgSW5jLjEjMCEGA1UEAxMaZW5naW5lNDEuZXhhbXBs Pkyg1rQHR6ebGQ== -----END CERTIFICATE-----", "-----BEGIN CERTIFICATE----- MIIDxjCCAq6gAwIBAgICEAAwDQYJKoZIhvcNAQEFBQAwSTELMAkGA1UEBhMCVVMx FTATBgNVBAoTDEV4YW1wbGUgSW5jLjEjMCEGA1UEAxMaZW5naW5lNDEuZXhhbXBs Pkyg1rQHR6ebGQ== -----END CERTIFICATE-----", "curl --output ca.crt 'http://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'", "wget --output-document ca.crt 'http://myengine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'", "keytool -keystore /etc/pki/ovirt-engine/.truststore -storepass mypass -exportcert -alias cacert -rfc -file ca.crt", "scp ca.crt [email protected]:/home/myuser/.", "POST /ovirt-engine/sso/oauth/token HTTP/1.1 Host: myengine.example.com Content-Type: application/x-www-form-urlencoded Accept: application/json", "grant_type=password&scope=ovirt-app-api&username=admin%40internal&password=mypassword", "{ \"access_token\": \"fqbR1ftzh8wBCviLxJcYuV5oSDI=\", \"token_type\": \"bearer\", \"scope\": \"...\", }", "GET /ovirt-engine/api/vms HTTP/1.1 Host: myengine.example.com Accept: application/xml Authorization: Bearer fqbR1ftzh8wBCviLxJcYuV5oSDI=", "HTTP/1.1 401 Unauthorized", "HEAD /ovirt-engine/api HTTP/1.1 Host: myengine.example.com HTTP/1.1 401 Authorization Required", "HEAD /ovirt-engine/api HTTP/1.1 Host: myengine.example.com Authorization: Basic YWRtaW5AaW50ZXJuYWw6bXlwYXNzd29yZA== HTTP/1.1 200 OK", "HEAD /ovirt-engine/api HTTP/1.1 Host: myengine.example.com Authorization: Basic YWRtaW5AaW50ZXJuYWw6bXlwYXNzd29yZA== Prefer: persistent-auth HTTP/1.1 200 OK", "Set-Cookie: JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK; Path=/ovirt-engine/api; Secure", "HEAD /ovirt-engine/api HTTP/1.1 Host: myengine.example.com Prefer: persistent-auth Cookie: JSESSIONID=5dQja5ubr4yvI2MM2z+LZxrK HTTP/1.1 200 OK", "HEAD /ovirt-engine/api HTTP/1.1 Host: myengine.example.com Authorization: Basic YWRtaW5AaW50ZXJuYWw6bXlwYXNzd29yZA== HTTP/1.1 200 OK" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/rest_api_guide/authentication-and-security
5.4.16.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume
5.4.16.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume You can convert an existing RAID1 LVM logical volume to an LVM linear logical volume with the lvconvert command by specifying the -m0 argument. This removes all the RAID data subvolumes and all the RAID metadata subvolumes that make up the RAID array, leaving the top-level RAID1 image as the linear logical volume. The following example displays an existing LVM RAID1 logical volume. The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device. When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of two images: /dev/sda1 and /dev/sdb1 . In this example, the lvconvert command specifies that you want to remove /dev/sda1 , leaving /dev/sdb1 as the physical volume that makes up the linear device.
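One simple way to confirm the result of such a conversion is to ask lvs for the segment type of the logical volume. The volume group and logical volume names below match the example above; after the conversion the segment type should read linear rather than raid1.
# Check the segment type and backing devices of the converted volume
lvs -o name,segtype,devices my_vg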
[ "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m0 my_vg/my_lv /dev/sda1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdb1(1)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/convert-raid1-to-linear
Spine Leaf Networking
Spine Leaf Networking Red Hat OpenStack Platform 17.0 Configuring routed spine-leaf networks using Red Hat OpenStack Platform director OpenStack Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/spine_leaf_networking/index
Chapter 4. Search
Chapter 4. Search Use automation controller's search tool for search and filter capabilities across many functions. An expandable list of search conditions is available from the Name menu in the search field. 4.1. Rules for searching These searching tips assume that you are not searching hosts. The typical syntax of a search consists of a field, followed by a value. A colon is used to separate the field that you want to search from the value. If the search has no colon (see example 3) it is treated as a simple string search where ?search=foobar is sent. Note Search functionality for Job templates is limited to alphanumeric characters only. The following are examples of syntax used for searching: name:localhost In this example, the user is searching for the string localhost in the name attribute. If that string does not match something from Fields or Related Fields , the entire search is treated as a string. organization.name:Default This example shows a Related Field Search. The period in organization.name separates the model from the field. Depending on how deep or complex the search is, you can have multiple periods in that part of the query. foobar This is a simple string (key term) search that finds all instances of the search term using an icontains search against the name and description fields. If you use a space between terms, for example foo bar , then results that contain both terms are returned. If the terms are wrapped in quotes, for example, "foo bar" , automation controller searches for the string with the terms appearing together. Specific name searches search against the API name. For example, Management job in the user interface is system_job in the API. organization:Default This example shows a Related Field search but without specifying a field to go along with the organization. This is supported by the API and is analogous to a simple string search but carried out against the organization (does an icontains search against both the name and description). 4.1.1. Values for search fields To find values for certain fields, refer to the API endpoint for extensive options and their valid values. For example, if you want to search against /api/v2/jobs > type field, you can find the values by performing an OPTIONS request to /api/v2/jobs and look for entries in the API for "type" . Additionally, you can view the related searches by scrolling to the bottom of each screen. In the example for /api/v2/jobs , the related search shows: "related_search_fields": [ "modified_by__search", "project__search", "project_update__search", "credentials__search", "unified_job_template__search", "created_by__search", "inventory__search", "labels__search", "schedule__search", "webhook_credential__search", "job_template__search", "job_events__search", "dependent_jobs__search", "launch_config__search", "unifiedjob_ptr__search", "notifications__search", "unified_job_node__search", "instance_group__search", "hosts__search", "job_host_summaries__search" The values for Fields come from the keys in a GET request. url , related , and summary_fields are not used. The values for Related Fields also come from the OPTIONS response, but from a different attribute. Related Fields is populated by taking all the values from related_search_fields and stripping off the __search from the end. Any search that does not start with a value from Fields or a value from the Related Fields, is treated as a generic string search. 
Searching for localhost , for example, results in the UI sending ?search=localhost as a query parameter to the API endpoint. This is a shortcut for an icontains search on the name and description fields. 4.1.2. Searching using values from related fields Searching a Related Field requires you to start the search string with the Related Field. The following example describes how to search using values from the Related Field, organization . The left-hand side of the search string must start with organization , for example, organization:Default . Depending on the related field, you can provide more specific direction for the search by providing secondary and tertiary fields. An example of this is to specify that you want to search for all job templates that use a project matching a certain name. The syntax on this would look like: job_template.project.name:"A Project" . Note This query executes against the unified_job_templates endpoint which is why it starts with job_template . If you were searching against the job_templates endpoint, then you would not need the job_template portion of the query. 4.1.3. Other search considerations Be aware of the following issues when searching in automation controller: There is currently no supported syntax for OR queries. All search terms are AND ed in the query parameters. The left part of a search parameter can be wrapped in quotes to support searching for strings with spaces. For more information, see Rules for searching . Currently, the values in the Fields are direct attributes expected to be returned in a GET request. Whenever you search against one of the values, automation controller carries out an __icontains search. So, for example, name:localhost sends back ?name__icontains=localhost . Automation controller currently performs this search for every Field value, even id . 4.2. Sort Where applicable, use the arrows in each column to sort by ascending order. The following is an example from the schedules list: The direction of the arrow indicates the sort order of the column.
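Because the UI translates searches into API query parameters, the same filters can be exercised directly against the API, for example with curl. The host name and credentials below are placeholders, and basic authentication is used only for illustration.
# Equivalent of typing localhost into the search field (icontains on name and description)
curl -u admin:password 'https://controller.example.com/api/v2/jobs/?search=localhost'
# Equivalent of the field search name:localhost
curl -u admin:password 'https://controller.example.com/api/v2/jobs/?name__icontains=localhost'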
[ "\"related_search_fields\": [ \"modified_by__search\", \"project__search\", \"project_update__search\", \"credentials__search\", \"unified_job_template__search\", \"created_by__search\", \"inventory__search\", \"labels__search\", \"schedule__search\", \"webhook_credential__search\", \"job_template__search\", \"job_events__search\", \"dependent_jobs__search\", \"launch_config__search\", \"unifiedjob_ptr__search\", \"notifications__search\", \"unified_job_node__search\", \"instance_group__search\", \"hosts__search\", \"job_host_summaries__search\"" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/using_automation_execution/assembly-controller-search
2.2.3. Comments and Documentation
2.2.3. Comments and Documentation All probes and functions should include comment blocks that describe their purpose, the data they provide, and the context in which they run (e.g. interrupt, process, etc). Use comments in areas where your intent may not be clear from reading the code. Note that specially-formatted comments are automatically extracted from most tapsets and included in this guide. This helps ensure that tapset contributors can write their tapset and document it in the same place. The specified format for documenting tapsets is as follows: For example: To override the automatically-generated Synopsis content, use: For example: It is recommended that you use the <programlisting> tag in this instance, since overriding the Synopsis content of an entry does not automatically form the necessary tags. For the purposes of improving the DocBook XML output of your comments, you can also use the following XML tags in your comments: command emphasis programlisting remark (tagged strings will appear in Publican beta builds of the document)
[ "/** * probe tapset.name - Short summary of what the tapset does. * @argument: Explanation of argument. * @argument2: Explanation of argument2. Probes can have multiple arguments. * * Context: * A brief explanation of the tapset context. * Note that the context should only be 1 paragraph short. * * Text that will appear under \"Description.\" * * A new paragraph that will also appear under the heading \"Description\". * * Header: * A paragraph that will appear under the heading \"Header\". **/", "/** * probe vm.write_shared_copy- Page copy for shared page write. * @address: The address of the shared write. * @zero: Boolean indicating whether it is a zero page * (can do a clear instead of a copy). * * Context: * The process attempting the write. * * Fires when a write to a shared page requires a page copy. This is * always preceded by a vm.shared_write . **/", "* Synopsis: * New Synopsis string *", "/** * probe signal.handle - Fires when the signal handler is invoked * @sig: The signal number that invoked the signal handler * * Synopsis: * <programlisting>static int handle_signal(unsigned long sig, siginfo_t *info, struct k_sigaction *ka, * sigset_t *oldset, struct pt_regs * regs)</programlisting> */" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/tapsetelements-docscomments
Chapter 3. User access for RBAC in systems inventory
Chapter 3. User access for RBAC in systems inventory 3.1. User Access for inventory Red Hat uses role-based access control (RBAC) to manage User Access on the Red Hat Hybrid Cloud Console. You can use User Access to configure access and permissions in systems inventory. Insights for Red Hat Enterprise Linux provides a set of predefined roles. Depending on the application, the predefined roles for each supported application can have different permissions that are tailored to that application. 3.1.1. How User Access works The User Access feature is based on managing roles, rather than on individually assigning permissions to specific users. In User Access, each role has a specific set of permissions. For example, a role might allow read permission for an application. Another role might allow write permission for an application. You create groups that contain roles and, by extension, the permissions assigned to each role. You also assign users to those groups. This means that each user in a group is assigned the permissions of the roles in that group. By creating different groups and adding or removing roles for that group, you control the permissions allowed for that group. When you add one or more users to a group, those users can perform all actions that are allowed for that group. Insights for Red Hat Enterprise Linux provides two default access groups for User Access: Default admin access group. The Default admin access group is limited to Organization Administrator users in your organization. You cannot change or modify the roles in the Default admin access group. Default access group. The Default access group contains all authenticated users in your organization. These users automatically inherit a selection of predefined roles. Note You can make changes to the Default access group. However, when you do so, the group name automatically changes to Custom default access . 3.1.2. Inventory predefined roles and permissions Role Name Description Permissions Inventory administrator You can perform any available operation against any Inventory resource. inventory:*:* (* denotes all permissions on all resources) Workspace administrator You can read and edit Workspace data. inventory: groups: write and inventory: groups: read Workspace viewer You can read Workspace data. inventory: groups: read Inventory Hosts administrator You can read and edit Inventory Hosts data. inventory: hosts: write and inventory: hosts: read Inventory Hosts viewer You can read Inventory Hosts data. inventory: hosts: read Additional Resources Role Based Access Control 3.2. User access to workspaces Workspaces allow you to group systems in your inventory together into logical units, such as location, department, or purpose. Each system can belong to only one workspace. Workspaces also support role-based access control (RBAC). Use RBAC to set custom permissions on workspaces according to user role. The Workspace administrator User Access role allows you to create workspaces. This role is automatically included in the Default Access group and cannot be removed from it. However, users with this role can modify any workspace. Provide this role only to those users who are entitled to access the entire system inventory. For a user to be able to use workspaces and RBAC to restrict access to specific systems, that user must either be a member of the Default Access group, or have both the Workspace administrator and the User Access Administrator roles. Workspace users have group-level RBAC permissions. 
Custom permissions include the following: inventory:groups:read View workspace details page inventory:groups:write Rename the workspace Add systems to the workspace Remove systems from the workspace Note A user cannot view the systems inside the workspace without inventory:hosts:read permissions. Systems users have system-level RBAC permissions. They can perform the following workspace operations: inventory:hosts:read View all the systems in the workspace and their details, or view ungrouped systems View information about the systems for other Insights services inventory:hosts:write Rename the system Delete the system 3.2.1. Managing user access to Workspaces Note If you do not have access to Workspaces, navigating to Inventory > Workspaces shows the message Workspace access permissions needed . Be aware that you can still view the workspace name assigned to the system for which you have read access, even if you do not have access to the workspace itself. To view the workspace that contains the system, you need to have the Workspaces Viewer role, or have Workspace view permissions assigned. Important Before making changes in the RBAC configuration, review the list of known limitations in the User Scenarios section. For more information about managing user access, assigning roles, and adding members to user access groups, see User Access Configuration Guide for Role-based Access Control (RBAC) . 3.2.1.1. Creating a custom User Access role Use the User Access application to configure user access for your workspace. To create a custom role: Click the Settings icon (⚙) in the top right corner, and then select User Access to navigate to the User Access application. The Identity & Access Management main page displays. In the left navigation menu, click Roles . Click Create role . The Create Role wizard displays. Select whether you want to create a new role, or copy an existing role. To create a new role, select create a role from scratch . To copy an existing role, select Copy an existing role . A list of roles appears. Select the role you want to copy, and then click . Name the new role. If desired, add a description. Click . The Add permissions page displays. The Applications filter displays by default. Click the Filter by application drop-down and select inventory to display all the available inventory permissions. The four inventory permissions include: inventory:hosts:read - Allows users to view systems (needed to view systems both inside and outside the workspace). inventory:hosts:write - Allows users to Rename or Delete systems. inventory:groups:read - Allows users to view Workspaces, and general info (not including systems in it). inventory:groups:write - Allows users to edit Workspace membership (add and remove systems from workspaces). Select the inventory permissions that you need. Here are some examples: To give a user full access to the workspace and all systems in that workspace, select all four permissions. To give a user full access to the systems inside a workspace without granting workspace editing access, select inventory:hosts:read, inventory:hosts:write, and inventory:groups:read, but do not select inventory:groups:write. To give a user full access to ungrouped systems, select all four permissions (ungrouped systems are considered a workspace). Click . The Define workspace access page displays. Click the drop-down arrow to each permission in the list, and then select the workspaces you want to apply to those permissions. 
You must select at least one workspace for each permission. Click . The Review details page displays. Review the permissions for the custom role and click Submit . Repeat this process for each workspace or for each group of users that requires specific workspace access. Example scenarios These examples describe the permissions you assign to users in specific custom roles. To allow users to only see systems in specific Workspaces, but to not see systems that do not belong to any Workspaces, select only those workspaces. To allow users to see systems in specific workspaces as well as any systems that do not belong to any workspaces, select those workspaces for all permissions and select Ungrouped systems for inventory:hosts permissions. To allow users to see everything in the inventory, you do not need to create a custom role. To give a group of system administrators the same access to workspaces A, B, and C, create a single custom role and assign permissions to those three workspaces. However, if you want to give different users access to different workspaces, create a separate custom role for each workspace. 3.2.1.2. Assigning custom roles To assign custom roles to a user or group of users, create a User Access group. The users inside a group receive the roles assigned to that group. At the top right of the screen, click the Settings icon (the Settings icon (⚙)), and then click User Access . In the left navigation menu, click User Access > Groups . Click Create group . The Create group wizard displays the Name and description page. Add a group name. If desired, add a description for the group. Click . The Add roles page displays. Select the custom role you created, and then click . The Add members page displays. Select the users to whom you want to assign the custom role. Click . The Add service accounts page appears. Optional. If you want to assign a service account or accounts to the selected users, select one or more service accounts from the list. Click . Review the details of your selections and click Submit . Repeat this procedure for each custom role that you want to assign to one or more users. 3.2.1.3. Configuring user access After you create and assign a custom role, all users in your organization still have full access to inventory because they still have the Inventory Hosts administrator role assigned. This allows any user to view and edit all hosts. The Default Access workspace assigns this role to all users in your organization by default. To limit organization users' access to only the workspaces/systems defined in your custom roles, edit the Default Access workspace to remove the Inventory Hosts administrator role. At the top right of the screen, click the Settings icon (the Settings icon (⚙)), and then click User Access . In the left navigation menu, click User Access > Groups . The list of User access groups displays. Click the Default access group. The list of roles displays. Select the checkbox for the Inventory Hosts administrator role. Click the options icon (...) at the far right of the row. The Remove role option appears. Click Remove role . The Remove role dialog box appears. Click the Remove role button. If you have never edited the Default Access workspace before, a warning message displays. Select the I understand, and I want to continue checkbox, and then click Continue . 3.2.1.4. 
Configuring Inventory Hosts administrator access After you edit the Default Access workspace, you might want to create a new User Access group of users who should have Inventory Hosts administrator permissions. At the top right of the screen, click the Settings icon (the Settings icon (⚙)), and then click User Access . In the left navigation menu, click User Access > Groups . The list of workspaces displays. Click Create group . The Create Group wizard appears. Add a name for the group. If desired, add a description. Click . The Add roles page displays. Select the Inventory Hosts administrator role from the list of roles. Click . The Add members page displays. Select the users to whom you want to assign the role. Click . The Add service accounts page appears. Optional. If you want to assign a service account or accounts to the selected users, select one or more service accounts from the list. Click . The Review details page displays. Review the details of your selections, and click Submit . After you have finished configuring access, specific users within your organization have full inventory access, and others have limited inventory access. 3.3. User scenarios This section contains two example scenarios that illustrate the features of workspaces. These scenarios follow a procedure format, so that you can follow the required steps and test them, if desired. 3.3.1. Scenario 1: Two different IT teams must manage their systems with Insights In this scenario, two different IT teams working for the same company share the same Insights organization within their Red Hat account. Each IT team must have complete control of their systems in the Red Hat Hybrid Cloud Console, but should not be able to see or modify the systems belonging to the other team. All users within the same team have the same level of access on both their workspaces and their systems. Access levels can be adjusted as needed. Regular users of both IT teams will not be able to see or modify systems that are not part of any Workspaces. Organization administrators, or anyone with Workspace administrator and Inventory Hosts administrator roles, have access to the entire inventory. Any other users without those roles cannot access the entire inventory. 3.3.1.1. Initial phase By default, organization administrators (who are members of the Default administrator access group) on the Red Hat Hybrid Cloud Console always have read/write access to all workspaces and read/write access to all systems, regardless of how permissions are defined for the workspace objects and systems assigned to them. These users are the only ones who may configure user access for workspaces. If any regular users need to manage user access, the administrators can grant them Workspace admin and Inventory Hosts admin roles separately. By default, users who are not Organization administrators are assigned the Inventory Hosts administrator role from the Default access group. The Default access group gives these users inventory:hosts:read and inventory:hosts:write access across the entire inventory. Those permissions grant read and write permissions on all systems and all Workspaces. Note For more information about the Default access group, see The Default access group . 3.3.1.2. Restricting access Prerequisites You are a member of the Default administrator access group. Step 1: Create the workspaces First, create two separate workspaces. (This example shows two workspaces, but you may create as many as you need). 
Workspace 1: IT team A - Systems Workspace 2: IT team B - Systems Step 2: Add systems to Workspaces Now that the workspaces have been created, add systems to them. Click in each workspace and select Add systems . At this stage, all the users still have access to all systems, regardless of the workspaces they are in. This is because they still have the Inventory hosts administrator role, which allows them to see all systems, whether or not they are grouped into workspaces. Step 3: Create custom roles To customize access for different workspaces, create custom roles for those workspaces. To create a custom role, navigate to User Access > Roles , and click Create role . A wizard opens. Name your role (For example, IT Team - A Role), and click . Step 3a: Select permissions to add to the custom role The wizard displays the Add permissions step. This step contains four inventory permissions options. Select them depending on the level of access you want to grant. For full access to the workspace and its systems, select: inventory:groups:read inventory:groups:write inventory:hosts:read inventory:hosts:write After selecting permissions, click . You can adjust the permissions as needed. Step 3b: Assign permissions to selected Workspaces In this step, choose the workspace(s) to which you want to grant permission. This example shows how to select the workspace that corresponds to the current role. For example, create the role IT team A - Role , and specify the workspace IT team A - Systems for each permission. Review the details and click Submit . Repeat the steps in this section to create a second custom role called IT team B - Role and select the IT team B - Systems workspace. Note You can grant access to systems that are not part of any workspace to one or both IT teams. To add those systems, add the Ungrouped systems that appear in the Group definition of the host permissions to your custom role. Step 4: Create User Access groups to assign custom roles to users Now that the custom roles are created, create User Access groups to assign the custom roles to users. To create a new group, navigate to User Access > Groups and click Create group . Name the group, select the newly created role, and select the users to whom you want to give the role. For example, two IT groups have the following permissions: IT team A - user group IT team A - role IT team B - user group IT team B - role The groups appear as follows: Step 5: Remove Inventory Hosts administrator role from the Default Access group At this stage, despite all the steps taken above, all users still have access to all systems, regardless of the workspaces they are in. This is because they still have the Inventory Hosts administrator role, which allows them to see all systems, whether or not they are grouped into workspaces. To limit access to systems, navigate to User Access > Groups and select the Default Access group. Remove the Inventory Hosts administrator role from this group. If the users are also members of additional User Access Groups, make sure to review and remove the Inventory Hosts administrator role from those groups as needed. Once the role has been removed, the User Access controls behave as expected: Users given custom roles to limit their views to certain workspaces and systems only see those workspaces and systems. 3.3.1.3. Adjustment considerations If you have more than two IT groups, you can create as many custom roles and user groups as you need. 
If you are trying to grant the same people the same access to multiple workspaces, you can select more than one workspace to grant permissions within the same custom role. You can grant access to systems that are not part of any workspace. Add the Ungrouped systems in the Group definition of the host permissions to the custom role. Remember that as long the Inventory hosts administrator role is still in the Default Access group, all users who have that role still have access to everything. If you do not select Ungrouped systems in your custom roles, users with those roles will not be able to see any ungrouped systems once you remove the Inventory Hosts administrator permission from the Default access group. 3.3.2. Scenario 2: Access to ungrouped systems In this example, an admin wants to give a group of users access to ungrouped systems, but not to grouped systems. Step 1: Create a custom role Navigate to User Access > Roles and click Create role . The Create Role wizard displays. Set the role name and description and click . Add the inventory:hosts permissions and click . Configure both of the permissions to apply to the Group definition named Ungrouped systems . Click . Review the details of the role and click Submit . Step 2: Add the custom role to an RBAC group Once you create the custom role, navigate to User Access > Groups and click Create Group to create a User Access (RBAC) group. Name the group, select the new custom role, and select the users to whom you want to assign this role. Note These steps only work when the users do not have the Inventory Hosts administrator role assigned from the Default Access group. To check this, navigate to User Access > Groups and click on the Default Access group at the top. If that role is in the group, remove it, because that role gives users access to the whole inventory - including both ungrouped and grouped systems. After you remove the role, the selected set of users only has access to ungrouped systems in your inventory. 3.3.3. Known limitations Users who are Organization Administrators (members of the Default admin access group) will always have full access to systems and Workspaces. A user without permission on the system will not be able to add it to a Remediation. However, if an existing Remediation with active systems was created in the past, the user will still be able to run it, even if the permissions have been removed on that system for the current user. Note Before enabling workspaces in your organization, review your Notifications configuration to ensure that only appropriate groups of users are configured to receive Email notifications. If you do not review your Notifications configuration, users might receive alerts triggered by systems outside of their workspace permission scope. Additional Resources For more information about user access, refer to User Access Guide for Role-based Access Control (RBAC) .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/viewing_and_managing_system_inventory_with_fedramp/assembly-user-access
12.7. Additional Resources
12.7. Additional Resources The following sources of information provide additional resources regarding BIND. 12.7.1. Installed Documentation BIND features a full range of installed documentation covering many different topics, each placed in its own subject directory: /usr/share/doc/bind- <version-number> / - This directory lists the most recent features. Replace <version-number> with the version of bind installed on the system. /usr/share/doc/bind- <version-number> /arm/ - This directory contains HTML and SGML of the BIND 9 Administrator Reference Manual , which details BIND resource requirements, how to configure different types of nameservers, perform load balancing, and other advanced topics. For most new users of BIND, this is the best place to start. Replace <version-number> with the version of bind installed on the system. /usr/share/doc/bind- <version-number> /draft/ - This directory contains assorted technical documents that review issues related to DNS service and some methods proposed to address them. Replace <version-number> with the version of bind installed on the system. /usr/share/doc/bind- <version-number> /misc/ - This directory contains documents designed to address specific advanced issues. Users of BIND version 8 should consult the migration document for specific changes they must make when moving to BIND 9. The options file lists all of the options implemented in BIND 9 that are used in /etc/named.conf . Replace <version-number> with the version of bind installed on the system. /usr/share/doc/bind- <version-number> /rfc/ - This directory provides every RFC document related to BIND. Replace <version-number> with the version of bind installed on the system. BIND related man pages - There are a number of man pages for the various applications and configuration files involved with BIND. The following lists some of the more important man pages. Administrative Applications man rndc - Explains the different options available when using the rndc command to control a BIND nameserver. Server Applications man named - Explores assorted arguments that can be used to control the BIND nameserver daemon. man lwresd - Describes the purpose of and options available for the lightweight resolver daemon. Configuration Files man named.conf - A comprehensive list of options available within the named configuration file. man rndc.conf - A comprehensive list of options available within the rndc configuration file.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-bind-additional-resources
Chapter 17. Debugging low latency node tuning status
Chapter 17. Debugging low latency node tuning status Use the PerformanceProfile custom resource (CR) status fields for reporting tuning status and debugging latency issues in the cluster node. 17.1. Debugging low latency CNF tuning status The PerformanceProfile custom resource (CR) contains status fields for reporting tuning status and debugging latency degradation issues. These fields report on conditions that describe the state of the operator's reconciliation functionality. A typical issue can arise when the machine config pools that are attached to the performance profile are in a degraded state, causing the PerformanceProfile status to degrade. In this case, the machine config pool issues a failure message. The Node Tuning Operator contains the performanceProfile.spec.status.Conditions status field: Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded The Status field contains Conditions that specify Type values that indicate the status of the performance profile: Available All machine configs and Tuned profiles have been created successfully and are available for the cluster components that are responsible for processing them (NTO, MCO, Kubelet). Upgradeable Indicates whether the resources maintained by the Operator are in a state that is safe to upgrade. Progressing Indicates that the deployment process from the performance profile has started. Degraded Indicates an error if: Validation of the performance profile has failed. Creation of all relevant components did not complete successfully. Each of these types contains the following fields: Status The state for the specific type ( true or false ). Timestamp The transaction timestamp. Reason string The machine readable reason. Message string The human readable reason describing the state and error details, if any. 17.1.1. Machine config pools A performance profile and its created products are applied to a node according to an associated machine config pool (MCP). The MCP holds valuable information about the progress of applying the machine configurations created by performance profiles that encompass kernel args, kube config, huge pages allocation, and deployment of rt-kernel. The Performance Profile controller monitors changes in the MCP and updates the performance profile status accordingly. The only condition returned by the MCP to the performance profile status is when the MCP is Degraded , which leads to performanceProfile.status.condition.Degraded = true .
Example The following example is for a performance profile with an associated machine config pool ( worker-cnf ) that was created for it: The associated machine config pool is in a degraded state: # oc get mcp Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h The describe section of the MCP shows the reason: # oc describe mcp worker-cnf Example output Message: Node node-worker-cnf is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found" Reason: 1 nodes are reporting degraded status on sync The degraded state should also appear under the performance profile status field marked as degraded = true : # oc describe performanceprofiles performance Example output Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: "prepping update: machineconfig.machineconfiguration.openshift.io \"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\" not found". Reason: MCPDegraded Status: True Type: Degraded 17.2. Collecting low latency tuning debugging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including node tuning, NUMA topology, and other information needed to debug issues with low latency setup. For prompt support, supply diagnostic information for both OpenShift Container Platform and low latency tuning. 17.2.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, such as: Resource definitions Audit logs Service logs You can specify one or more images when you run the command by including the --image argument. When you specify an image, the tool collects data related to that feature or product. When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in your current working directory. 17.2.2. Gathering low latency tuning data Use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with low latency tuning, including: The Node Tuning Operator namespaces and child objects. MachineConfigPool and associated MachineConfig objects. The Node Tuning Operator and associated Tuned objects. Linux kernel command line options. CPU and NUMA topology Basic PCI device information and NUMA locality. Prerequisites Access to the cluster as a user with the cluster-admin role. The OpenShift Container Platform CLI (oc) installed. Procedure Navigate to the directory where you want to store the must-gather data. 
Collect debugging information by running the following command: USD oc adm must-gather Example output [must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version... [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default... [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift... [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system... [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd... ... Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at "<cluster_version>" ClusterOperators: All healthy and stable Create a compressed file from the must-gather directory that was created in your working directory. For example, on a computer that uses a Linux operating system, run the following command: USD tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1 1 Replace must-gather-local.5421342344627712289// with the directory name created by the must-gather tool. Note Create a compressed file to attach the data to a support case or to use with the Performance Profile Creator wrapper script when you create a performance profile. Attach the compressed file to your support case on the Red Hat Customer Portal . Additional resources Gathering data about your cluster with the must-gather tool Managing nodes with MachineConfig and KubeletConfig CRs Using the Node Tuning Operator Configuring huge pages at boot time How huge pages are consumed by apps
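Putting the collection and packaging steps above together, a minimal end-to-end sketch (the destination directory name is illustrative):

# Collect the data into a known directory, then archive it for the support case
$ oc adm must-gather --dest-dir=./must-gather-data
$ tar cvaf must-gather.tar.gz ./must-gather-data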
[ "Status: Conditions: Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Available Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: True Type: Upgradeable Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Progressing Last Heartbeat Time: 2020-06-02T10:01:24Z Last Transition Time: 2020-06-02T10:01:24Z Status: False Type: Degraded", "oc get mcp", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-2ee57a93fa6c9181b546ca46e1571d2d True False False 3 3 3 0 2d21h worker rendered-worker-d6b2bdc07d9f5a59a6b68950acf25e5f True False False 2 2 2 0 2d21h worker-cnf rendered-worker-cnf-6c838641b8a08fff08dbd8b02fb63f7c False True True 2 1 1 1 2d20h", "oc describe mcp worker-cnf", "Message: Node node-worker-cnf is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\" Reason: 1 nodes are reporting degraded status on sync", "oc describe performanceprofiles performance", "Message: Machine config pool worker-cnf Degraded Reason: 1 nodes are reporting degraded status on sync. Machine config pool worker-cnf Degraded Message: Node yquinn-q8s5v-w-b-z5lqn.c.openshift-gce-devel.internal is reporting: \"prepping update: machineconfig.machineconfiguration.openshift.io \\\"rendered-worker-cnf-40b9996919c08e335f3ff230ce1d170\\\" not found\". Reason: MCPDegraded Status: True Type: Degraded", "oc adm must-gather", "[must-gather ] OUT Using must-gather plug-in image: quay.io/openshift-release When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable [must-gather ] OUT namespace/openshift-must-gather-8fh4x created [must-gather ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-rhlgc created [must-gather-5564g] POD 2023-07-17T10:17:37.610340849Z Gathering data for ns/openshift-cluster-version [must-gather-5564g] POD 2023-07-17T10:17:38.786591298Z Gathering data for ns/default [must-gather-5564g] POD 2023-07-17T10:17:39.117418660Z Gathering data for ns/openshift [must-gather-5564g] POD 2023-07-17T10:17:39.447592859Z Gathering data for ns/kube-system [must-gather-5564g] POD 2023-07-17T10:17:39.803381143Z Gathering data for ns/openshift-etcd Reprinting Cluster State: When opening a support case, bugzilla, or issue please include the following summary data along with any other requested information: ClusterID: 829er0fa-1ad8-4e59-a46e-2644921b7eb6 ClusterVersion: Stable at \"<cluster_version>\" ClusterOperators: All healthy and stable", "tar cvaf must-gather.tar.gz must-gather-local.5421342344627712289 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/scalability_and_performance/cnf-debugging-low-latency-tuning-status
2.3. Creating and Maintaining Database Links
2.3. Creating and Maintaining Database Links Chaining means that a server contacts other servers on behalf of a client application and then returns the combined results. Chaining is implemented through a database link , which points to data stored remotely. When a client application requests data from a database link, the database link retrieves the data from the remote database and returns it to the client. For more general information about chaining, see the chapter "Designing Directory Topology," in the Red Hat Directory Server Deployment Guide . Section 21.8, "Monitoring Database Link Activity" covers how to monitor database link activity. 2.3.1. Creating a New Database Link The basic database link configuration requires the following information: Suffix information. A suffix is created in the directory tree that is managed by the database link, not a regular database. This suffix corresponds to the suffix on the remote server that contains the data. Bind credentials. When the database link binds to a remote server, it impersonates a user, and this specifies the DN and the credentials for each database link to use to bind with remote servers. LDAP URL. This supplies the LDAP URL of the remote server to which the database link connects. The URL consists of the protocol (ldap or ldaps), the host name or IP address (IPv4 or IPv6) for the server, and the port. List of failover servers. This supplies a list of alternative servers for the database link to contact in the event of a failure. This configuration item is optional. Note If secure binds are required for simple password authentication ( Section 20.12.1, "Requiring Secure Binds" ), then any chaining operations will fail unless they occur over a secure connection. Using a secure connection (TLS and STARTTLS connections or SASL authentication) is recommended, anyway. 2.3.1.1. Creating a New Database Link Using the Command Line To create a new database link, use the dsconf chaining link-create command. For example: This creates a database link named example_chain_name for the ou=Customers,dc=example,dc=com . The link refers to the server ldap://remote_server.example.com:389 and uses the specified bind DN and password to authenticate. Because the --bind-mech is set empty, the link uses simple authentication. Note To grant the proxy_user the rights to access data, you must create the proxy ACI entry in the dc=example,dc=com suffix on remote server. How to do so, refer to the section Section 2.3.1.4, "Additional Information on Required Settings When Creating a Database Link" To display additional settings you can set when you create the database link, see: For further details, see Section 2.3.1.4, "Additional Information on Required Settings When Creating a Database Link" . 2.3.1.2. Creating a New Database Link Using the Web Console To create a new database link: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Create a new suffix as described in Section 2.1.1, "Creating Suffixes" . Select the suffix, click Suffix Tasks , and select Create Database Link . Fill the fields with the details about the connection to the remote server. For example: For further details, see Section 2.3.1.4, "Additional Information on Required Settings When Creating a Database Link" . Click Create Database Link . 2.3.1.3. 
Managing the Default Configuration for New Database Links With the dsconf chaining command you can manage the default configuration of database links. To display the current default values, see: To change new database links configuration, use dsconf chaining config-set-def command. For example, to set response-delay parameter to 30 , run: The example command sets the default response timeout for all chaining connections. You can overwrite the response timeout for a specific chaining link if you use dsconf instance chaining link-set command. To see the list of all parameters you can set, run: 2.3.1.4. Additional Information on Required Settings When Creating a Database Link Suffix Information The suffix defines the suffix that is managed by the database link. Bind Credentials For a request from a client application to be chained to a remote server, special bind credentials can be supplied for the client application. This gives the remote server the proxied authorization rights needed to chain operations. Without bind credentials, the database link binds to the remote server as anonymous . For example, a client application sends a request to Server A. Server A contains a database link that chains the request to a database on Server B. The database link on Server A binds to Server B using a special user and password: Server B must contain a user entry and set the proxy authentication rights for this user. To set the proxy authorization correctly, set the proxy ACI as any other ACI. Warning Carefully examine access controls when enabling chaining to avoid giving access to restricted areas of the directory. For example, if a default proxy ACI is created on a branch, the users that connect using the database link will be able to see all entries below the branch. There may be cases when not all of the subtrees should be viewed by a user. To avoid a security hole, create an additional ACI to restrict access to the subtree. For more information on ACIs, see Chapter 18, Managing Access Control . Note When a database link is used by a client application to create or modify entries, the attributes creatorsName and modifiersName do not reflect the real creator or modifier of the entries. These attributes contain the name of the administrative user granted proxied authorization rights on the remote data server. Providing bind credentials involves the following steps on the remote server: Create an administrative user, such as cn=proxy_user,cn=config , for the database link. For information on adding entries, see Chapter 3, Managing Directory Entries . Provide proxy access rights for the administrative user created in the step on the subtree chained to by the database link. For more information on configuring ACIs, see Chapter 18, Managing Access Control For example, the following ACI grants read-only access to the cn=proxy_admin,cn=config user to access data contained on the remote server only within the subtree where the ACI is set. Note When a user binds to a database link, the user's identity is sent to the remote server. Access controls are always evaluated on the remote server. For the user to modify or write data successfully to the remote server, set up the correct access controls on the remote server. For more information about how access controls are evaluated in the context of chained operations, see Section 2.3.3, "Database Links and Access Control Evaluation" . 
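As a hedged sketch of the second step, the proxy ACI described above can be added with ldapmodify to the chained suffix on the remote server (the suffix dc=example,dc=com and the bind DN cn=proxy_user,cn=config follow the earlier examples; adjust both for your deployment):

# Run against the remote server that holds the chained data
ldapmodify -D "cn=Directory Manager" -W -p 389 -h remote_server.example.com -x
dn: dc=example,dc=com
changetype: modify
add: aci
aci: (targetattr = "*")(version 3.0; acl "Proxied authorization for database links"; allow (proxy) userdn = "ldap:///cn=proxy_user,cn=config";)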
LDAP URL On the server containing the database link, identify the remote server that the database link connects with using an LDAP URL . Unlike the standard LDAP URL format, the URL of the remote server does not specify a suffix. It takes the form ldap:// host_name : port . For the database link to connect to the remote server using LDAP over TLS, the LDAP URL of the remote server uses the protocol LDAPS instead of LDAP in the URL and points to the secure port of the server. For example Note TLS has to be enabled on the local Directory Server and the remote Directory Server to be chained over TLS. For more information on enabling TLS, see Section 9.4, "Enabling TLS" . When the database link and remote server are configured to communicate using TLS, this does not mean that the client application making the operation request must also communicate using TLS. The client can bind using a normal port. Bind Mechanisms The local server can connect to the remote server using several different connection types and authentication mechanisms. There are three ways that the local server can connect to the remote server: Over the standard LDAP port Over a dedicated LDAPS port Using STARTTLS, which is a secure connection over a standard port Note If secure binds are required for simple password authentication ( Section 20.12.1, "Requiring Secure Binds" ), then any chaining operations will fail unless they occur over a secure connection. Using a secure connection (TLS and STARTTLS connections or SASL authentication) is recommended, anyway. There are four different methods which the local server can use to authenticate to the farm server. empty : If there is no bind mechanism set, then the server performs simple authentication and requires a bind DN and password. EXTERNAL : This uses an TLS certificate to authenticate the farm server to the remote server. Either the farm server URL must be set to the secure URL ( ldaps ) or the nsUseStartTLS attribute must be set to on . Additionally, the remote server must be configured to map the farm server's certificate to its bind identity, as described in the certmap.conf section in the Red Hat Directory Server Configuration, Command, and File Reference . DIGEST-MD5 : This uses SASL authentication with DIGEST-MD5 encryption. As with simple authentication, this requires the nsMultiplexorBindDN and nsMultiplexorCredentials attributes to give the bind information. GSSAPI : This uses Kerberos-based authentication over SASL. The farm server must be configured with a Kerberos keytab, and the remote server must have a defined SASL mapping for the farm server's bind identity. Setting up Kerberos keytabs and SASL mappings is described in Section 9.10, "Setting up SASL Identity Mapping" . Note SASL connections can be established over standard connections or TLS connections. Note If SASL is used, then the local server must also be configured to chain the SASL and password policy components. Add the components for the database link configuration, as described in Section 2.3.2, "Configuring the Chaining Policy" . 2.3.2. Configuring the Chaining Policy These procedures describe configuring how Directory Server chains requests made by client applications to Directory Servers that contain database links. This chaining policy applies to all database links created on Directory Server. 2.3.2.1. Chaining Component Operations A component is any functional unit in the server that uses internal operations. 
For example, plug-ins are considered to be components, as are functions in the front-end. However, a plug-in may actually be comprised of multiple components (for example, the ACI plug-in). Some components send internal LDAP requests to the server, expecting to access local data only. For such components, control the chaining policy so that the components can complete their operations successfully. One example is the certificate verification function. Chaining the LDAP request made by the function to check certificates implies that the remote server is trusted. If the remote server is not trusted, then there is a security problem. By default, all internal operations are not chained and no components are allowed to chain, although this can be overridden. Additionally, an ACI must be created on the remote server to allow the specified plug-in to perform its operations on the remote server. The ACI must exist in the suffix assigned to the database link. The following lists the component names, the potential side-effects of allowing them to chain internal operations, and the permissions they need in the ACI on the remote server: ACI plug-in This plug-in implements access control. Operations used to retrieve and update ACI attributes are not chained because it is not safe to mix local and remote ACI attributes. However, requests used to retrieve user entries may be chained by setting the chaining components attribute: Permissions: Read, search, and compare Resource limit component This component sets server limits depending on the user bind DN. Resource limits can be applied on remote users if the resource limitation component is allowed to chain. To chain resource limit component operations, add the chaining component attribute: Permissions: Read, search, and compare Certificate-based authentication checking component This component is used when the external bind method is used. It retrieves the user certificate from the database on the remote server. Allowing this component to chain means certificate-based authentication can work with a database link. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare Password policy component This component is used to allow SASL binds to the remote server. Some forms of SASL authentication require authenticating with a user name and password. Enabling the password policy allows the server to verify and implement the specific authentication method requested and to apply the appropriate password policies. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare SASL component This component is used to allow SASL binds to the remote server. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare Referential Integrity plug-in This plug-in ensures that updates made to attributes containing DNs are propagated to all entries that contain pointers to the attribute. For example, when an entry that is a member of a group is deleted, the entry is automatically removed from the group. Using this plug-in with chaining helps simplify the management of static groups when the group members are remote to the static group definition. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare Attribute Uniqueness plug-in This plug-in checks that all the values for a specified attribute are unique (no duplicates). 
If this plug-in is chained, it confirms that attribute values are unique even on attributes changed through a database link. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare Roles component This component chains the roles and roles assignments for the entries in a database. Chaining this component maintains the roles even on chained databases. To chain this component's operations, add the chaining component attribute: Permissions: Read, search, and compare Note The following components cannot be chained: Roles plug-in Password policy component Replication plug-ins Referential Integrity plug-in When enabling the Referential Integrity plug-in on servers issuing chaining requests, be sure to analyze performance, resource, and time needs as well as integrity needs. Integrity checks can be time-consuming and draining on memory and CPU. For further information on the limitations surrounding ACIs and chaining, see Section 18.5, "Limitations of ACIs" . 2.3.2.1.1. Chaining Component Operations Using the Command Line To add a component allowed to chain: Specify the components to include in chaining. For example, to configure that the referential integrity component can chain operations: See Section 2.3.2.1, "Chaining Component Operations" for a list of the components which can be chained. Restart the instance: Create an ACI in the suffix on the remote server to which the operation will be chained. For example, to create an ACI for the Referential Integrity plug-in: 2.3.2.1.2. Chaining Component Operations Using the Web Console To add a component allowed to chain: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database tab. In the navigation on the left, select the Chaining Configuration entry. Click the Add button below the Components to Chain field. Select the component, and click Add & Save New Components . Create an ACI in the suffix on the remote server to which the operation will be chained. For example, to create an ACI for the Referential Integrity plug-in: 2.3.2.2. Chaining LDAP Controls It is possible to not chain operation requests made by LDAP controls. By default, requests made by the following controls are forwarded to the remote server by the database link: Virtual List View (VLV). This control provides lists of parts of entries rather than returning all entry information. Server-side sorting. This control sorts entries according to their attribute values, usually using a specific matching rule. Dereferencing. This control pulls specified attribute information from the referenced entry and returns this information with the rest of the search results. Managed DSA. This controls returns smart referrals as entries, rather than following the referral, so the smart referral itself can be changed or deleted. Loop detection. This control keeps track of the number of times the server chains with another server. When the count reaches the configured number, a loop is detected, and the client application is notified. For more information about using this control, see Section 2.4.3, "Detecting Loops" . Note Server-side sorting and VLV controls are supported only when a client application request is made to a single database. Database links cannot support these controls when a client application makes a request to multiple databases. 
The LDAP controls which can be chained and their OIDs are listed in the following table: Table 2.1. LDAP Controls and Their OIDs Control Name OID Virtual list view (VLV) 2.16.840.1.113730.3.4.9 Server-side sorting 1.2.840.113556.1.4.473 Managed DSA 2.16.840.1.113730.3.4.2 Loop detection 1.3.6.1.4.1.1466.29539.12 Dereferencing searches 1.3.6.1.4.1.4203.666.5.16 2.3.2.2.1. Chaining LDAP Controls Using the Command Line To chain LDAP controls, use the dsconf chaining config-set --add-control command. For example, to forward the virtual list view control: If clients of Directory Server create their own controls and their operations should be chained to remote servers, add the object identifier (OID) of the custom control. For a list of LDAP controls that can be chained and their OIDs, see Table 2.1, "LDAP Controls and Their OIDs" . 2.3.2.2.2. Chaining LDAP Controls Using the Web Console To chain LDAP controls using the web console: Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Select the instance. Open the Database menu. Select the Chaining Configuration entry. Click the Add button below the Forwarded LDAP Controls field. Select the LDAP control and click Add & Save New Controls . If clients of Directory Server create their own controls and their operations should be chained to remote servers, add the object identifier (OID) of the custom control. For a list of LDAP controls that can be chained and their OIDs, see Table 2.1, "LDAP Controls and Their OIDs" . Click Save . 2.3.3. Database Links and Access Control Evaluation When a user binds to a server containing a database link, the database link sends the user's identity to the remote server. Access controls are always evaluated on the remote server. Every LDAP operation evaluated on the remote server uses the original identity of the client application passed using the proxied authorization control. Operations succeed on the remote server only if the user has the correct access controls on the subtree contained on the remote server. This requires adding the usual access controls to the remote server with a few restrictions: Not all types of access control can be used. For example, role-based or filter-based ACIs need access to the user entry. Because the data are accessed through database links, only the data in the proxy control can be verified. Consider designing the directory in a way that ensures the user entry is located in the same database as the user's data. All access controls based on the IP address or DNS domain of the client may not work since the original domain of the client is lost during chaining. The remote server views the client application as being at the same IP address and in the same DNS domain as the database link. Note Directory Server supports both IPv4 and IPv6 IP addresses. The following restrictions apply to the ACIs used with database links: ACIs must be located with any groups they use. If the groups are dynamic, all users in the group must be located with the ACI and the group. If the group is static, it links to remote users. ACIs must be located with any role definitions they use and with any users intended to have those roles. ACIs that link to values of a user's entry (for example, userattr subject rules) will work if the user is remote. Though access controls are always evaluated on the remote server, they can also be evaluated on both the server containing the database link and the remote server. 
This poses several limitations: During access control evaluation, contents of user entries are not necessarily available (for example, if the access control is evaluated on the server containing the database link and the entry is located on a remote server). For performance reasons, clients cannot do remote inquiries and evaluate access controls. The database link does not necessarily have access to the entries being modified by the client application. When performing a modify operation, the database link does not have access to the full entry stored on the remote server. If performing a delete operation, the database link is only aware of the entry's DN. If an access control specifies a particular attribute, then a delete operation will fail when being conducted through a database link. Note By default, access controls set on the server containing the database link are not evaluated. To override this default, use the nsCheckLocalACI attribute in the cn= database_link, cn=chaining database,cn=plugins,cn=config entry. However, evaluating access controls on the server containing the database link is not recommended except with cascading chaining.
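If local ACI evaluation is required, for example with cascading chaining, the following is a hedged sketch of enabling it on the link created earlier (the link name example_chain_name comes from the earlier example):

ldapmodify -D "cn=Directory Manager" -W -p 389 -h server.example.com -x
dn: cn=example_chain_name,cn=chaining database,cn=plugins,cn=config
changetype: modify
replace: nsCheckLocalACI
nsCheckLocalACI: on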
[ "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-create --suffix=\" ou=Customers,dc=example,dc=com \" --server-url=\" ldap://remote_server.example.com:389 \" --bind-mech=\"\" --bind-dn=\" cn=proxy_user,cn=config \" --bind-pw=\" password \" \" example_chain_name \"", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining link-create --help", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-get-def", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set-def --response-delay 30", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set-def --help", "aci: (targetattr = \"*\")(version 3.0; acl \"Proxied authorization for database links\"; allow (proxy) userdn = \"ldap:///cn=proxy_admin ,cn=config\";)", "ldaps://africa.example.com:636/", "nsActiveChainingComponents: cn=ACI Plugin,cn=plugins,cn=config", "nsActiveChainingComponents: cn=resource limits,cn=components,cn=config", "nsActiveChainingComponents: cn=certificate-based authentication,cn=components,cn=config", "nsActiveChainingComponents: cn=password policy,cn=components,cn=config", "nsActiveChainingComponents: cn=password policy,cn=components,cn=config", "nsActiveChainingComponents: cn=referential integrity postoperation,cn=plugins,cn=config", "nsActiveChainingComponents: cn=attribute uniqueness,cn=plugins,cn=config", "nsActiveChainingComponents: cn=roles,cn=components,cn=config", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set --add-comp=\"cn=referential integrity postoperation,cn=components,cn=config\"", "dsctl instance_name restart", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h remoteserver.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(target=\"ldap:///ou=customers,l=us,dc=example,dc=com\") (version 3.0; acl \"RefInt Access for chaining\"; allow (read,write,search,compare) userdn = \"ldap:///cn=referential integrity postoperation,cn=plugins,cn=config\";)", "ldapmodify -D \"cn=Directory Manager\" -W -p 389 -h remoteserver.example.com -x dn: ou=People,dc=example,dc=com changetype: modify add: aci aci: (targetattr = \"*\")(target=\"ldap:///ou=customers,l=us,dc=example,dc=com\") (version 3.0; acl \"RefInt Access for chaining\"; allow (read,write,search,compare) userdn = \"ldap:///cn=referential integrity postoperation,cn=plugins,cn=config\";)", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com chaining config-set --add-control=\"2.16.840.1.113730.3.4.9\"" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Configuring_Directory_Databases-Creating_and_Maintaining_Database_Links
Chapter 6. Managing application lifecycles
Chapter 6. Managing application lifecycles This chapter outlines the application lifecycle in Satellite and how to create and remove application lifecycles for Satellite and Capsule. 6.1. Introduction to application lifecycle The application lifecycle is a concept central to Satellite's content management functions. The application lifecycle defines how a particular system and its software look at a particular stage. For example, an application lifecycle might be simple; you might only have a development stage and production stage. In this case the application lifecycle might look like this: Development Production However, a more complex application lifecycle might have further stages, such as a phase for testing or a beta release. This adds extra stages to the application lifecycle: Development Testing Beta Release Production Satellite provides methods to customize each application lifecycle stage so that it suits your specifications. Each stage in the application lifecycle is called an environment in Satellite. Each environment uses a specific collection of content. Satellite defines these content collections as a content view. Each content view acts as a filter where you can define what repositories and packages to include in a particular environment. This provides a method for you to define specific sets of content to designate to each environment. For example, an email server might only require a simple application lifecycle where you have a production-level server for real-world use and a test server for trying out the latest mail server packages. When the test server passes the initial phase, you can set the production-level server to use the new packages. Another example is a development lifecycle for a software product: develop a new piece of software in a development environment, test it in a quality assurance environment, pre-release it as a beta, then release the software as a production-level application. Figure 6.1. Satellite application lifecycle 6.2. Content promotion across the application lifecycle In the application lifecycle chain, when content moves from one environment to the next, this is called promotion . Example: Content promotion across Satellite lifecycle environments Each environment contains a set of systems registered to Red Hat Satellite. These systems only have access to repositories relevant to their environment. When you promote packages from one environment to the next, the target environment's repositories receive new package versions. As a result, each system in the target environment can update to the new package versions. For example, suppose the Development team patches example_software to version 1.1 in the Development environment: Development Testing Production example_software -1.1-0.noarch.rpm example_software -1.0-0.noarch.rpm example_software -1.0-0.noarch.rpm After completing development on the patch, you promote the package to the Testing environment so the Quality Engineering team can review the patch. The application lifecycle then contains the following package versions in each environment: Development Testing Production example_software -1.1-0.noarch.rpm example_software -1.1-0.noarch.rpm example_software -1.0-0.noarch.rpm While the Quality Engineering team reviews the patch, the Development team starts work on example_software 2.0. This results in the following application lifecycle: Development Testing Production example_software -2.0-0.noarch.rpm example_software -1.1-0.noarch.rpm example_software -1.0-0.noarch.rpm The Quality Engineering team completes their review of the patch. Now example_software 1.1 is ready to release. 
You promote 1.1 to the Production environment: Development Testing Production example_software -2.0-0.noarch.rpm example_software -1.1-0.noarch.rpm example_software -1.1-0.noarch.rpm The Development team completes their work on example_software 2.0 and promotes it to the Testing environment: Development Testing Production example_software -2.0-0.noarch.rpm example_software -2.0-0.noarch.rpm example_software -1.1-0.noarch.rpm Finally, the Quality Engineering team reviews the package. After a successful review, promote the package to the Production environment: Development Testing Production example_software -2.0-0.noarch.rpm example_software -2.0-0.noarch.rpm example_software -2.0-0.noarch.rpm For more information, see Section 7.8, "Promoting a content view" . 6.3. Best practices for lifecycle environments Use multiple lifecycle environment paths to implement multiple sequential stages of content consumption. Each stage contains a defined set of content, for example in the Production lifecycle environment. Automate the creation of lifecycle environments by using a Hammer script or an Ansible Playbook . Default use case: Fixed stages in each lifecycle environment paths, for example Development , Test , and Production . Promote content views to lifecycle environments, for example, from Test to Production . All content hosts consuming this content view or composite content view are able to install packages from the Production lifecycle environment. Note that these packages are not installed or updated automatically. If you encounter errors during patching content hosts, attach the host to a version of the content view. This only affects the availability of packages but does not downgrade installed packages. Alternative use case: Using stages in lifecycle environments for fixed content, for example, quarterly updates, and only publishing new minor versions with incremental updates from errata. When patching content hosts, change the lifecycle environment from 2023-Q4 to 2024-Q1 using the Satellite web UI, Satellite API, Hammer CLI, or an activation key. Advantage: You can directly see which software packages a hosts receives by looking at its lifecycle environment. Disadvantage: Promoting content is less dynamic without clearly defined stages such as Development , Test , and Production . Use multiple lifecycle environment paths to define multiple stages for different environments, for example to decouple web server and database hosts. Capsule Servers use lifecycle environments to synchronize content. They synchronize content more efficiently if you split content into multiple lifecycle environment paths. If a specific Capsule Server only serves content for one operating system in a single lifecycle environment path, it only synchronizes required content. 6.4. Creating a lifecycle environment path To create an application lifecycle for developing and releasing software, use the Library environment as the initial environment to create environment paths. Then optionally add additional environments to the environment paths. Procedure In the Satellite web UI, navigate to Content > Lifecycle > Lifecycle Environments . Click New Environment Path to start a new application lifecycle. In the Name field, enter a name for your environment. In the Description field, enter a description for your environment. Click Save . 
Optional: To add an environment to the environment path, click Add New Environment , complete the Name and Description fields, and select the prior environment from the Prior Environment list. CLI procedure To create an environment path, enter the hammer lifecycle-environment create command and specify the Library environment with the --prior option: Optional: To add an environment to the environment path, enter the hammer lifecycle-environment create command and specify the parent environment with the --prior option: To view the chain of the lifecycle environment, enter the following command: 6.5. Adding lifecycle environments to Capsule Servers If your Capsule Server has the content functionality enabled, you must add an environment so that Capsule can synchronize content from Satellite Server and provide content to host systems. Do not assign the Library lifecycle environment to your Capsule Server because it triggers an automated Capsule sync every time the CDN updates a repository. This might consume multiple system resources on Capsules, network bandwidth between Satellite and Capsules, and available disk space on Capsules. You can use Hammer CLI on Satellite Server or the Satellite web UI. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule that you want to add a lifecycle to. Click Edit and click the Lifecycle Environments tab. From the left menu, select the lifecycle environments that you want to add to Capsule and click Submit . To synchronize the content on the Capsule, click the Overview tab and click Synchronize . Select either Optimized Sync or Complete Sync . For definitions of each synchronization type, see Recovering a Repository . CLI procedure To display a list of all Capsule Servers, on Satellite Server, enter the following command: Note the Capsule ID of the Capsule to which you want to add a lifecycle. Using the ID, verify the details of your Capsule: To view the lifecycle environments available for your Capsule Server, enter the following command and note the ID and the organization name: Add the lifecycle environment to your Capsule Server: Repeat for each lifecycle environment you want to add to Capsule Server. Synchronize the content from Satellite to Capsule. To synchronize all content from your Satellite Server environment to Capsule Server, enter the following command: To synchronize a specific lifecycle environment from your Satellite Server to Capsule Server, enter the following command: To synchronize all content from your Satellite Server to your Capsule Server without checking metadata: This equals selecting Complete Sync in the Satellite web UI. 6.6. Removing lifecycle environments from Satellite Server Use this procedure to remove a lifecycle environment. Procedure In the Satellite web UI, navigate to Content > Lifecycle Environments . Click the name of the lifecycle environment that you want to remove, and then click Remove Environment . Click Remove to remove the environment. CLI procedure List the lifecycle environments for your organization and note the name of the lifecycle environment you want to remove: Use the hammer lifecycle-environment delete command to remove an environment: 6.7. Removing lifecycle environments from Capsule Server When lifecycle environments are no longer relevant to the host system or environments are added incorrectly to Capsule Server, you can remove the lifecycle environments from Capsule Server. 
You can use both the Satellite web UI and the Hammer CLI to remove lifecycle environments from Capsule. Procedure In the Satellite web UI, navigate to Infrastructure > Capsules , and select the Capsule that you want to remove a lifecycle from. Click Edit and click the Lifecycle Environments tab. From the right menu, select the lifecycle environments that you want to remove from Capsule, and then click Submit . To synchronize Capsule's content, click the Overview tab, and then click Synchronize . Select either Optimized Sync or Complete Sync . CLI procedure Select Capsule Server that you want from the list and take note of its id : To verify Capsule Server's details, enter the following command: Verify the list of lifecycle environments currently attached to Capsule Server and take note of the Environment ID : Remove the lifecycle environment from Capsule Server: Repeat this step for every lifecycle environment that you want to remove from Capsule Server. Synchronize the content from Satellite Server's environment to Capsule Server:
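For example, a consolidated sketch of the CLI steps above, using placeholder IDs (Capsule ID 1 and lifecycle environment ID 3 are illustrative):

# Verify the attached environments, remove the unwanted one, then resynchronize
hammer capsule content lifecycle-environments --id 1
hammer capsule content remove-lifecycle-environment --id 1 --lifecycle-environment-id 3
hammer capsule content synchronize --id 1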
[ "hammer lifecycle-environment create --name \" Environment Path Name \" --description \" Environment Path Description \" --prior \"Library\" --organization \" My_Organization \"", "hammer lifecycle-environment create --name \" Environment Name \" --description \" Environment Description \" --prior \" Prior Environment Name \" --organization \" My_Organization \"", "hammer lifecycle-environment paths --organization \" My_Organization \"", "hammer capsule list", "hammer capsule info --id My_capsule_ID", "hammer capsule content available-lifecycle-environments --id My_capsule_ID", "hammer capsule content add-lifecycle-environment --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID --organization \" My_Organization \"", "hammer capsule content synchronize --id My_capsule_ID", "hammer capsule content synchronize --id My_capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID", "hammer capsule content synchronize --id My_capsule_ID --skip-metadata-check true", "hammer lifecycle-environment list --organization \" My_Organization \"", "hammer lifecycle-environment delete --name \" My_Environment \" --organization \" My_Organization \"", "hammer capsule list", "hammer capsule info --id My_Capsule_ID", "hammer capsule content lifecycle-environments --id My_Capsule_ID", "hammer capsule content remove-lifecycle-environment --id My_Capsule_ID --lifecycle-environment-id My_Lifecycle_Environment_ID", "hammer capsule content synchronize --id My_Capsule_ID" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_content/managing_application_lifecycles_content-management
7.180. resource-agents
7.180. resource-agents 7.180.1. RHBA-2015:1280 - resource-agents bug fix and enhancement update Updated resource-agents packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The resource-agents packages provide the Pacemaker and RGManager service managers with a set of scripts that interface with several services in order to allow operating in a High Availability (HA) environment. Bug Fixes BZ# 1085109 The lvm.sh agent was unable to accurately detect a tag represented by a cluster node. Consequently, the active logical volume on a cluster node failed when another node rejoined the cluster. Now, lvm.sh properly detects whether tags represent a cluster node. When nodes rejoin the cluster, the volume group no longer fails on other nodes. BZ# 1150702 If the file system used by a MySQL resource became unavailable, the MySQL agent's validation checks prevented the resource from stopping. This bug has been fixed, and MySQL resources are now properly restarted in the described case. BZ# 1151379 The RGManager resource agent failed to recognize that Oracle Database started successfully when notifications about non-critical errors were printed on startup. This update modifies the behavior of RGManager to ignore the non-critical errors, so that the Oracle Database service does not fail in this situation. BZ# 1159805 Floating IPv6 addresses managed by the RGManager ip.sh agent did not send unsolicited advertisement packets when starting. Consequently, when an IP resource failed over, it took about five minutes for the tables to be updated. The packets are now sent, which optimizes the time required before an IP address is recognized as being available. BZ# 1161727 When a node experiences a loss of quorum, the RGManager utility performs an emergency stop of all resources, not just those that are in a started state. Previously, when a separate node split from the cluster and lost quorum, the vg_stop_single() function stripped the Logical Volume Manager (LVM) tags from the Volume Group (VG) if the vg_owner was set. With this update, the LVM agent strips the tags only when the local node performing the stop operation is the owner, and the service now runs as part of the quorate partition even if the service owner's LVM tags have been removed. BZ# 1179412 Due to a regression, some NFS options went missing in the nfsserver after updating, and it was impossible to modify the number of the NFS thread. A patch has been applied, and the number is now modifiable. BZ# 1181187 When monitoring a cluster network interface, the IPaddr2 agent could display an "ERROR: [findif] failed" message even though the IP address and interface were working properly. This update fixes the underlying code, and the IPaddr2 agent consistently reports accurate results during the monitor operation. BZ# 1183148 The MySQL agent failed to work if configured with a user other than 'mysql'. Consequently, MySQL failed to start due to a permission error manifested as a timeout error. A fix has been applied, and MySQL now starts and runs as the configured user. BZ# 1183735 Under certain circumstances, the write test of the is_alive() function did not properly detect and report when a file system failed and was remounted as read-only. This update fixes the bug and in the described scenario, is_alive() now reports the status of the file system correctly. Enhancements BZ# 1096376 The Pacemaker nfsserver agent now sets the rpc.statd TCPPORT or UDPPORT via configuration options. 
BZ# 1150655 The nginx resource agent now allows an nginx web server to be managed as a Pacemaker cluster resource. This provides the ability to deploy the nginx web server in a high availability environment. BZ# 1168251 The resource-agents-sap-hana package now provides two Pacemaker resource agents, SAPHanaTopology and SAPHana. These resource agents allow configuration of a Pacemaker cluster to manage a SAP HANA Scale-Up System Replication environment on Red Hat Enterprise Linux. Users of resource-agents are advised to upgrade to these updated packages, which fix these bugs and add these enhancements.
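As a hedged illustration of the nginx agent, a Pacemaker resource might be created with pcs; the configfile parameter and resource name below are assumptions based on the standard ocf:heartbeat:nginx agent rather than details from this advisory:

# Sketch only: confirm the available parameters with "pcs resource describe nginx"
pcs resource create webserver nginx configfile=/etc/nginx/nginx.conf op monitor interval=30s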
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-resource-agents
8.42. device-mapper-multipath
8.42. device-mapper-multipath 8.42.1. RHBA-2014:1555 - device-mapper-multipath bug fix and enhancement update Updated device-mapper-multipath packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The device-mapper-multipath packages provide tools for managing multipath devices using the device-mapper multipath kernel module. Bug Fixes BZ# 1009061 When running a command on a specific device, the multipath utility did not accept device specification using the major:minor format. This bug has been fixed, and a multipath device can now be associated with the multipath command using a path device major:minor specification. BZ# 1027061 Previously, the readsector0 checker did not consider the blocksize when calculating the number of blocks to read, which led to readsector0 reading too much on 512 byte devices causing device errors. With this update, readsector0 considers the device blocksize when calculating the amount to read, and 512 byte devices now work with the readsector0 checker as intended. BZ# 1049637 Prior to this update, the code that handles multipathd sysfs devices could free a device structure while other data still pointed to it. As a consequence, the multipathd daemon could occasionally experience use-after-free memory corruption leading to unexpected termination. The sysfs device handling code for multipathd has been rewritten, and multipathd no longer frees sysfs device memory while it is still in use. BZ# 1078485 Some of multipathds prioritizers run scsi commands with long timeouts. These prioritizers do not run asynchronously and multipathd becomes busy waiting for one to timeout. Consequently, the multipathd daemon can become unresponsive for as long as 5 minutes when a path fails. With this update, the prioritizers use the "checker_timeout" option to configure their timeout. Now, prioritizer timeouts can be adjusted using the checker_timeout option to prevent multipathd hangs. BZ# 1080052 When a multipath device was reloaded outside the multipathd daemon and existing paths were removed from the device, mutipathd still treated them as belonging to a multipath device. Consequently, multipathd tried to access a non-existent path_group and terminated unexpectedly. With this update, multipathd correctly disassociates removed paths and no longer crashes when existing paths are removed by external programs. BZ# 1086417 If the multipathd daemon failed to add a path to the multipath device table, the path was incorrectly orphaned. As a consequence, the multipath utility treated the path as belonging to a multipath device, and multipathd could keep attempting to switch to a non-existent path_group. The underlying source code has been fixed, and multipathd now correctly orphans paths that cannot be added to the multipath device table. In addition, this update adds the following Enhancements BZ# 1054747 This update adds the force_sync multipath.conf option. Setting force_sync to "yes" keeps the multipathd daemon from calling the path checkers in asynchronous mode, which forces multipathd to run only one checker at a time. In addition, with this option set, multipathd no longer takes up a significant amount of CPU when a large number of paths is present. BZ# 1088013 Previously, the default path ordering often led to the Round Robin path selector picking multiple paths to the same controller, reducing the performance benefit from multiple paths. 
With this update, multipath reorders device paths in order to alternate between device controllers, leading to a performance improvement. BZ# 1099932 This update adds iscsi support for the "fast_io_fail_tmo" option to allow the user to modify the speed of multipath responding to failed iscsi devices. BZ# 1101101 With this update, "-w" and "-W" options have been added to multipath; the "-w" option removes the named WWID from the wwids file, the "-W" option removes all WWIDs from the wwids file except for the WWIDs of the current multipath devices.
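A hedged sketch of how some of the options discussed above might appear in the defaults section of /etc/multipath.conf (the values are illustrative, not recommendations):

defaults {
    force_sync       yes   # run path checkers one at a time, synchronously
    checker_timeout  30    # also bounds the prioritizers' SCSI command timeouts
    fast_io_fail_tmo 5     # respond faster to failed iSCSI devices
}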
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/device-mapper-multipath
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Container Platform (RHOCP) IBM Z clusters in connected or disconnected environments along with out-of-the-box support for proxy environments. Note See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, follow the appropriate deployment process for your environment: Internal Attached Devices mode Deploy using local storage devices External mode
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_ibm_z/preface-ibm-z
Chapter 1. OpenShift Container Platform storage overview
Chapter 1. OpenShift Container Platform storage overview OpenShift Container Platform supports multiple types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. 1.1. Glossary of common terms for OpenShift Container Platform storage This glossary defines common terms that are used in the storage content. Access modes Volume access modes describe volume capabilities. You can use access modes to match persistent volume claim (PVC) and persistent volume (PV). The following are the examples of access modes: ReadWriteOnce (RWO) ReadOnlyMany (ROX) ReadWriteMany (RWX) ReadWriteOncePod (RWOP) Cinder The Block Storage service for Red Hat OpenStack Platform (RHOSP) which manages the administration, security, and scheduling of all volumes. Config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container Storage Interface (CSI) An API specification for the management of container storage across different container orchestration (CO) systems. Dynamic Provisioning The framework allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision persistent storage. Ephemeral storage Pods and containers can require temporary or transient local storage for their operation. The lifetime of this ephemeral storage does not extend beyond the life of the individual pod, and this ephemeral storage cannot be shared across pods. Fiber channel A networking technology that is used to transfer data among data centers, computer servers, switches and storage. FlexVolume FlexVolume is an out-of-tree plugin interface that uses an exec-based model to interface with storage drivers. You must install the FlexVolume driver binaries in a pre-defined volume plugin path on each node and in some cases the control plane nodes. fsGroup The fsGroup defines a file system group ID of a pod. iSCSI Internet Small Computer Systems Interface (iSCSI) is an Internet Protocol-based storage networking standard for linking data storage facilities. An iSCSI volume allows an existing iSCSI (SCSI over IP) volume to be mounted into your Pod. hostPath A hostPath volume in an OpenShift Container Platform cluster mounts a file or directory from the host node's filesystem into your pod. KMS key The Key Management Service (KMS) helps you achieve the required level of encryption of your data across different services. you can use the KMS key to encrypt, decrypt, and re-encrypt data. Local volumes A local volume represents a mounted local storage device such as a disk, partition or directory. NFS A Network File System (NFS) that allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network. OpenShift Data Foundation A provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds Persistent storage Pods and containers can require permanent storage for their operation. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. 
Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volumes (PV) OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use PVC to request PV resources without having specific knowledge of the underlying storage infrastructure. Persistent volume claims (PVCs) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. Pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. Reclaim policy A policy that tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Role-based access control (RBAC) Role-based access control (RBAC) is a method of regulating access to computer or network resources based on the roles of individual users within your organization. Stateless applications A stateless application is an application program that does not save client data generated in one session for use in the session with that client. Stateful applications A stateful application is an application program that saves data to persistent disk storage. A server, client, and applications can use a persistent disk storage. You can use the Statefulset object in OpenShift Container Platform to manage the deployment and scaling of a set of Pods, and provides guarantee about the ordering and uniqueness of these Pods. Static provisioning A cluster administrator creates a number of PVs. PVs contain the details of storage. PVs exist in the Kubernetes API and are available for consumption. Storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Storage class A storage class provides a way for administrators to describe the classes of storage they offer. Different classes might map to quality of service levels, backup policies, arbitrary policies determined by the cluster administrators. VMware vSphere's Virtual Machine Disk (VMDK) volumes Virtual Machine Disk (VMDK) is a file format that describes containers for virtual hard disk drives that is used in virtual machines. 1.2. Storage types OpenShift Container Platform storage is broadly classified into two categories, namely ephemeral storage and persistent storage. 1.2.1. Ephemeral storage Pods and containers are ephemeral or transient in nature and designed for stateless applications. Ephemeral storage allows administrators and developers to better manage the local storage for some of their operations. For more information about ephemeral storage overview, types, and management, see Understanding ephemeral storage . 1.2.2. Persistent storage Stateful applications deployed in containers require persistent storage. OpenShift Container Platform uses a pre-provisioned storage framework called persistent volumes (PV) to allow cluster administrators to provision persistent storage. The data inside these volumes can exist beyond the lifecycle of an individual pod. Developers can use persistent volume claims (PVCs) to request storage requirements. 
For more information about persistent storage overview, configuration, and lifecycle, see Understanding persistent storage . 1.3. Container Storage Interface (CSI) CSI is an API specification for the management of container storage across different container orchestration (CO) systems. You can manage the storage volumes within the container native environments, without having specific knowledge of the underlying storage infrastructure. With the CSI, storage works uniformly across different container orchestration systems, regardless of the storage vendors you are using. For more information about CSI, see Using Container Storage Interface (CSI) . 1.4. Dynamic Provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. For more information about dynamic provisioning, see Dynamic provisioning .
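As a minimal, hedged sketch of how access modes, storage classes, and dynamic provisioning fit together, the following commands create a persistent volume claim and let a provisioner create the backing persistent volume on demand. The storage class name gp3-csi and the namespace my-app are illustrative placeholders rather than values from this guide; list the classes available in your cluster with oc get storageclass and substitute accordingly.

# Create a claim; the CSI provisioner behind the named storage class
# dynamically provisions a persistent volume that satisfies it.
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-claim
  namespace: my-app
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: gp3-csi
EOF

# Watch the claim move from Pending to Bound once the volume is provisioned.
oc get pvc example-claim -n my-app -w

A pod then consumes the claim by name through a persistentVolumeClaim volume entry, which is what lets the data outlive any individual pod.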
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/storage-overview
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.11/making-open-source-more-inclusive
Part VI. Troubleshooting and common questions
Part VI. Troubleshooting and common questions When you review the subscriptions service data for your account, you might have additional questions about how those calculations are made or whether the calculations are accurate. Answers for some of the most commonly asked questions might help you understand more about the data that appears in the subscriptions service. Other information can help you troubleshoot some common problems that are experienced by subscriptions service users. In some cases, completing the suggested steps in the troubleshooting information can help you improve the accuracy of the reported data in the subscriptions service. Troubleshoot Troubleshooting: Correcting over-reporting of virtualized RHEL Troubleshooting: Correcting problems with filtering Learn more How is the subscription threshold calculated? How is core hour usage data calculated? How do vCPUs and hyperthreading affect the subscriptions service usage data?
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/assembly-troubleshooting-common-questions
Managing configurations using Puppet integration
Managing configurations using Puppet integration Red Hat Satellite 6.15 Configure Puppet integration in Satellite and use Puppet classes to configure your hosts Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_configurations_using_puppet_integration/index
Preface
Preface Kamelets are reusable route components that hide the complexity of creating data pipelines that connect to external systems. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.9/html/integrating_applications_with_kamelets/pr01
2.4. Threats to Workstation and Home PC Security
2.4. Threats to Workstation and Home PC Security Workstations and home PCs may not be as prone to attack as networks or servers, but since they often contain sensitive data, such as credit card information, they are targeted by system crackers. Workstations can also be co-opted without the user's knowledge and used by attackers as "slave" machines in coordinated attacks. For these reasons, knowing the vulnerabilities of a workstation can save users the headache of reinstalling the operating system, or worse, recovering from data theft. 2.4.1. Bad Passwords Bad passwords are one of the easiest ways for an attacker to gain access to a system. For more on how to avoid common pitfalls when creating a password, refer to Section 4.3, "Password Security" .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-risk-wspc
probe::tcpmib.PassiveOpens
probe::tcpmib.PassiveOpens Name probe::tcpmib.PassiveOpens - Count the passive creation of a socket Synopsis tcpmib.PassiveOpens Values sk pointer to the struct sock being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function tcpmib_filter_key . If the packet passes the filter, it is counted in the global PassiveOpens counter (equivalent to SNMP's MIB TCP_MIB_PASSIVEOPENS).
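As a usage sketch that is not part of the reference entry above, the following one-liner attaches to this probe and prints a running count of passive TCP opens every 5 seconds. It assumes that SystemTap and the matching kernel debuginfo packages are installed on the host.

# Count passive TCP opens (new inbound connections) in 5-second intervals.
stap -e 'global opens
probe tcpmib.PassiveOpens { opens += op }
probe timer.s(5) { printf("passive opens in last 5s: %d\n", opens); opens = 0 }'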
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-tcpmib-passiveopens
Chapter 14. Cruise Control for cluster rebalancing
Chapter 14. Cruise Control for cluster rebalancing Important Cruise Control for cluster rebalancing is a Technology Preview only. Technology Preview features are not supported with Red Hat production service-level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend implementing any Technology Preview features in production environments. This Technology Preview feature provides early access to upcoming product innovations, enabling you to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can deploy Cruise Control to your AMQ Streams cluster and use it to rebalance the load across the Kafka brokers. Cruise Control is an open source system for automating Kafka operations, such as monitoring cluster workload, rebalancing a cluster based on predefined constraints, and detecting and fixing anomalies. It consists of four components (Load Monitor, Analyzer, Anomaly Detector, and Executor) and a REST API. When AMQ Streams and Cruise Control are both deployed to Red Hat Enterprise Linux, you can access Cruise Control features through the Cruise Control REST API. The following features are supported: Configuring optimization goals and capacity limits Using the /rebalance endpoint to: Generate optimization proposals , as dry runs, based on the configured optimization goals or user-provided goals supplied as request parameters Initiate an optimization proposal to rebalance the Kafka cluster Checking the progress of an active rebalance operation using the /user_tasks endpoint Stopping an active rebalance operation using the /stop_proposal_execution endpoint All other Cruise Control features are not currently supported, including anomaly detection, notifications, write-your-own goals, and changing the topic replication factor. The web UI component (Cruise Control Frontend) is not supported. Cruise Control for AMQ Streams on Red Hat Enterprise Linux is provided as a separate zipped distribution. For more information, see Section 14.2, "Downloading a Cruise Control archive" . 14.1. Why use Cruise Control? Cruise Control reduces the time and effort involved in running an efficient Kafka cluster, with a more evenly balanced workload across the brokers. A typical cluster can become unevenly loaded over time. Partitions that handle large amounts of message traffic might be unevenly distributed across the available brokers. To rebalance the cluster, administrators must monitor the load on brokers and manually reassign busy partitions to brokers with spare capacity. Cruise Control automates this cluster rebalancing process. It constructs a workload model of resource utilization, based on CPU, disk, and network load. Using a set of configurable optimization goals, you can instruct Cruise Control to generate dry run optimization proposals for more balanced partition assignments. After you have reviewed a dry run optimization proposal, you can instruct Cruise Control to initiate a cluster rebalance based on that proposal, or generate a new proposal. When a cluster rebalancing operation is complete, the brokers are used more effectively and the load on the Kafka cluster is more evenly balanced. Additional resources Cruise Control Wiki Section 14.5, "Optimization goals overview" Section 14.6, "Optimization proposals overview" Capacity configuration 14.2. 
Downloading a Cruise Control archive A zipped distribution of Cruise Control for AMQ Streams on Red Hat Enterprise Linux is available for download from the Red Hat Customer Portal . Procedure Download the latest version of the Red Hat AMQ Streams Cruise Control archive from the Red Hat Customer Portal . Create the /opt/cruise-control directory: sudo mkdir /opt/cruise-control Extract the contents of the Cruise Control ZIP file to the new directory: unzip amq-streams-y.y.y-cruise-control-bin.zip -d /opt/cruise-control Change the ownership of the /opt/cruise-control directory to the kafka user: sudo chown -R kafka:kafka /opt/cruise-control 14.3. Deploying the Cruise Control Metrics Reporter Before starting Cruise Control, you must configure the Kafka brokers to use the provided Cruise Control Metrics Reporter. When loaded at runtime, the Metrics Reporter sends metrics to the __CruiseControlMetrics topic, one of three auto-created topics . Cruise Control uses these metrics to create and update the workload model and to calculate optimization proposals. Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. Kafka and ZooKeeper are running. Section 14.2, "Downloading a Cruise Control archive" . Procedure For each broker in the Kafka cluster and one at a time: Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh Copy the Cruise Control Metrics Reporter .jar file to the Kafka libraries directory: cp /opt/cruise-control/libs/ cruise-control-metrics-reporter-y.y.yyy.redhat-0000x.jar /opt/kafka/libs In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure the Cruise Control Metrics Reporter: Add the CruiseControlMetricsReporter class to the metric.reporters configuration option. Do not remove any existing Metrics Reporters. Add the following configuration options and values to the Kafka configuration file: These options enable the Cruise Control Metrics Reporter to create the __CruiseControlMetrics topic with a log cleanup policy of DELETE . For more information, see Auto-created topics and Log cleanup policy for Cruise Control Metrics topic . Configure SSL, if required. In the Kafka configuration file ( /opt/kafka/config/server.properties ) configure SSL between the Cruise Control Metrics Reporter and the Kafka broker by setting the relevant client configuration properties. The Metrics Reporter accepts all standard producer-specific configuration properties with the cruise.control.metrics.reporter prefix. For example: cruise.control.metrics.reporter.ssl.truststore.password . In the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ) configure SSL between the Kafka broker and the Cruise Control server by setting the relevant client configuration properties. Cruise Control inherits SSL client property options from Kafka and uses those properties for all Cruise Control server clients. Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh Repeat steps 1-5 for the remaining brokers. 14.4. Configuring and starting Cruise Control Configure the properties used by Cruise Control and then start the Cruise Control server using the cruise-control-start.sh script. The server is hosted on a single machine for the whole Kafka cluster. Three topics are auto-created when Cruise Control starts. For more information, see Auto-created topics . Prerequisites You are logged in to Red Hat Enterprise Linux as the kafka user. 
Section 14.2, "Downloading a Cruise Control archive" Section 14.3, "Deploying the Cruise Control Metrics Reporter" Procedure Edit the Cruise Control properties file ( /opt/cruise-control/config/cruisecontrol.properties ). Configure the properties shown in the following example configuration: # The Kafka cluster to control. bootstrap.servers=localhost:9092 1 # The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 # The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 # The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 # The list of supported goals goals={list of master optimization goals} 5 # The list of supported hard goals hard.goals={List of hard goals} 6 # How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 # The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8 1 Host and port numbers of the Kafka broker (always port 9092). 2 Replication factor of the Kafka metric sample store topic. If you are evaluating Cruise Control in a single-node Kafka and ZooKeeper cluster, set this property to 1. For production use, set this property to 2 or more. 3 The configuration file that sets the maximum capacity limits for broker resources. Use the file that applies to your Kafka deployment configuration. For more information, see Capacity configuration . 4 Comma-separated list of default optimization goals, using fully-qualified domain names (FQDNs). A number of master optimization goals (see 5) are already set as default optimization goals; you can add or remove goals if desired. For more information, see Section 14.5, "Optimization goals overview" . 5 Comma-separated list of master optimization goals, using FQDNs. To completely exclude goals from being used to generate optimization proposals, remove them from the list. For more information, see Section 14.5, "Optimization goals overview" . 6 Comma-separated list of hard goals, using FQDNs. Seven of the master optimization goals are already set as hard goals; you can add or remove goals if desired. For more information, see Section 14.5, "Optimization goals overview" . 7 The interval, in milliseconds, for refreshing the cached optimization proposal that is generated from the default optimization goals. For more information, see Section 14.6, "Optimization proposals overview" . 8 Host and port numbers of the ZooKeeper connection (always port 2181). Start the Cruise Control server. The server starts on port 9090 by default; optionally, specify a different port. cd /opt/cruise-control/ ./bin/cruise-control-start.sh config/cruisecontrol.properties PORT To verify that Cruise Control is running, send a GET request to the /state endpoint of the Cruise Control server: curl 'http://HOST:PORT/kafkacruisecontrol/state' Auto-created topics The following table shows the three topics that are automatically created when Cruise Control starts. These topics are required for Cruise Control to work properly and must not be deleted or changed. Table 14.1. Auto-created topics Auto-created topic Created by Function __CruiseControlMetrics Cruise Control Metrics Reporter Stores the raw metrics from the Metrics Reporter in each Kafka broker.
__KafkaCruiseControlPartitionMetricSamples Cruise Control Stores the derived metrics for each partition. These are created by the Metric Sample Aggregator . __KafkaCruiseControlModelTrainingSamples Cruise Control Stores the metrics samples used to create the Cluster Workload Model . To ensure that log compaction is disabled in the auto-created topics, make sure that you configure the Cruise Control Metrics Reporter as described in Section 14.3, "Deploying the Cruise Control Metrics Reporter" . Log compaction can remove records that are needed by Cruise Control and prevent it from working properly. Additional resources Log cleanup policy for Cruise Control Metrics topic 14.5. Optimization goals overview To rebalance a Kafka cluster, Cruise Control uses optimization goals to generate optimization proposals . Optimization goals are constraints on workload redistribution and resource utilization across a Kafka cluster. AMQ Streams on Red Hat Enterprise Linux supports all the optimization goals developed in the Cruise Control project. The supported goals, in the default descending order of priority, are as follows: Rack-awareness Minimum number of leader replicas per broker for a set of topics Replica capacity Capacity: Disk capacity, Network inbound capacity, Network outbound capacity CPU capacity Replica distribution Potential network output Resource distribution: Disk utilization distribution, Network inbound utilization distribution, Network outbound utilization distribution Leader bytes-in rate distribution Topic replica distribution CPU usage distribution Leader replica distribution Preferred leader election Kafka Assigner disk usage distribution Intra-broker disk capacity Intra-broker disk usage For more information on each optimization goal, see Goals in the Cruise Control Wiki . Goals configuration in the Cruise Control properties file You configure optimization goals in the cruisecontrol.properties file in the cruise-control/config/ directory. There are configurations for hard optimization goals that must be satisfied, as well as master and default optimization goals. Optional, user-provided optimization goals are set at runtime as parameters in requests to the /rebalance endpoint. Optimization goals are subject to any capacity limits on broker resources. The following sections describe each goal configuration in more detail. Master optimization goals The master optimization goals are available to all users. Goals that are not listed in the master optimization goals are not available for use in Cruise Control operations. The following master optimization goals are preset in the cruisecontrol.properties file, in the goals property, in descending priority order: For simplicity, we recommend that you do not change the preset master optimization goals, unless you need to completely exclude one or more goals from being used to generate optimization proposals. The priority order of the master optimization goals can be modified, if desired, in the configuration for default optimization goals. If you need to modify the preset master optimization goals, specify a list of goals, in descending priority order, in the goals property. Use fully-qualified domain names as shown in the cruisecontrol.properties file. You must specify at least one master goal, or Cruise Control will crash. Note If you change the preset master optimization goals, you must ensure that the configured hard.goals are a subset of the master optimization goals that you configured. 
Otherwise, errors will occur when generating optimization proposals. Hard goals and soft goals Hard goals are goals that must be satisfied in optimization proposals. Goals that are not configured as hard goals are known as soft goals . You can think of soft goals as best effort goals: they do not need to be satisfied in optimization proposals, but are included in optimization calculations. Cruise Control will calculate optimization proposals that satisfy all the hard goals and as many soft goals as possible (in their priority order). An optimization proposal that does not satisfy all the hard goals is rejected by the Analyzer and is not sent to the user. Note For example, you might have a soft goal to distribute a topic's replicas evenly across the cluster (the topic replica distribution goal). Cruise Control will ignore this goal if doing so enables all the configured hard goals to be met. The following master optimization goals are preset as hard goals in the cruisecontrol.properties file, in the hard.goals property: To change the hard goals, edit the hard.goals property and specify the desired goals, using their fully-qualified domain names. Increasing the number of hard goals reduces the likelihood that Cruise Control will calculate and generate valid optimization proposals. Default optimization goals Cruise Control uses the default optimization goals list to generate the cached optimization proposal . For more information, see Section 14.6, "Optimization proposals overview" . You can override the default optimization goals at runtime by setting user-provided optimization goals . The following default optimization goals are preset in the cruisecontrol.properties file, in the default.goals property, in descending priority order: You must specify at least one default goal, or Cruise Control will crash. To modify the default optimization goals, specify a list of goals, in descending priority order, in the default.goals property. Default goals must be a subset of the master optimization goals; use fully-qualified domain names. User-provided optimization goals User-provided optimization goals narrow down the configured default goals for a particular optimization proposal. You can set them, as required, as parameters in HTTP requests to the /rebalance endpoint. For more information, see Section 14.9, "Generating optimization proposals" . User-provided optimization goals can generate optimization proposals for different scenarios. For example, you might want to optimize leader replica distribution across the Kafka cluster without considering disk capacity or disk utilization. So, you send a request to the /rebalance endpoint containing a single goal for leader replica distribution. User-provided optimization goals must: Include all configured hard goals , or an error occurs Be a subset of the master optimization goals To ignore the configured hard goals in an optimization proposal, add the skip_hard_goals_check=true parameter to the request. Additional resources Section 14.8, "Cruise Control configuration" Configurations in the Cruise Control Wiki. 14.6. Optimization proposals overview An optimization proposal is a summary of proposed changes that, if applied, will produce a more balanced Kafka cluster, with partition workloads distributed more evenly among the brokers. Each optimization proposal is based on the set of optimization goals that was used to generate it, subject to any configured capacity limits on broker resources. 
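Because every proposal is evaluated against the configured goals, it can help to confirm up front that the Analyzer considers those goals ready. The following request is a hedged sketch: the substates parameter and its ANALYZER value reflect recent Cruise Control releases, and the server address matches the examples later in this chapter.

# Report Analyzer state, including goal readiness, for the configured goals.
curl 'cruise-control-server:9090/kafkacruisecontrol/state?substates=ANALYZER&verbose=true'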
When you make a POST request to the /rebalance endpoint, an optimization proposal is returned in response. Use the information in the proposal to decide whether to initiate a cluster rebalance based on the proposal. Alternatively, you can change the optimization goals and then generate another proposal. By default, optimization proposals are generated as dry runs that must be initiated separately. There is no limit to the number of optimization proposals that can be generated. Cached optimization proposal Cruise Control maintains a cached optimization proposal based on the configured default optimization goals . Generated from the workload model, the cached optimization proposal is updated every 15 minutes to reflect the current state of the Kafka cluster. The most recent cached optimization proposal is returned when the following goal configurations are used: The default optimization goals User-provided optimization goals that can be met by the current cached proposal To change the cached optimization proposal refresh interval, edit the proposal.expiration.ms setting in the cruisecontrol.properties file. Consider a shorter interval for fast changing clusters, although this increases the load on the Cruise Control server. Contents of optimization proposals The following table describes the properties contained in an optimization proposal. Table 14.2. Properties contained in an optimization proposal Property Description n inter-broker replica (y MB) moves n : The number of partition replicas that will be moved between separate brokers. Performance impact during rebalance operation : Relatively high. y MB : The sum of the size of each partition replica that will be moved to a separate broker. Performance impact during rebalance operation : Variable. The larger the number of MBs, the longer the cluster rebalance will take to complete. n intra-broker replica (y MB) moves n : The total number of partition replicas that will be transferred between the disks of the cluster's brokers. Performance impact during rebalance operation : Relatively high, but less than inter-broker replica moves . y MB : The sum of the size of each partition replica that will be moved between disks on the same broker. Performance impact during rebalance operation : Variable. The larger the number, the longer the cluster rebalance will take to complete. Moving a large amount of data between disks on the same broker has less impact than between separate brokers (see inter-broker replica moves ). n excluded topics The number of topics excluded from the calculation of partition replica/leader movements in the optimization proposal. You can exclude topics in one of the following ways: In the cruisecontrol.properties file, specify a regular expression in the topics.excluded.from.partition.movement property. In a POST request to the /rebalance endpoint, specify a regular expression in the excluded_topics parameter. Topics that match the regular expression are listed in the response and will be excluded from the cluster rebalance. n leadership moves n : The number of partitions whose leaders will be switched to different replicas. This involves a change to ZooKeeper configuration. Performance impact during rebalance operation : Relatively low. n recent windows n : The number of metrics windows upon which the optimization proposal is based. n% of the partitions covered n% : The percentage of partitions in the Kafka cluster covered by the optimization proposal. 
On-demand Balancedness Score Before (nn.yyy) After (nn.yyy) Measurements of the overall balance of a Kafka Cluster. Cruise Control assigns a Balancedness Score to every optimization goal based on several factors, including priority (the goal's position in the list of default.goals or user-provided goals). The On-demand Balancedness Score is calculated by subtracting the sum of the Balancedness Score of each violated soft goal from 100. The Before score is based on the current configuration of the Kafka cluster. The After score is based on the generated optimization proposal. Additional resources Section 14.5, "Optimization goals overview" . Section 14.9, "Generating optimization proposals" Section 14.10, "Initiating a cluster rebalance" 14.7. Rebalance performance tuning overview You can adjust several performance tuning options for cluster rebalances. These options control how partition replica and leadership movements in a rebalance are executed, as well as the bandwidth that is allocated to a rebalance operation. Partition reassignment commands Optimization proposals are composed of separate partition reassignment commands. When you initiate a proposal, the Cruise Control server applies these commands to the Kafka cluster. A partition reassignment command consists of either of the following types of operations: Partition movement : Involves transferring the partition replica and its data to a new location. Partition movements can take one of two forms: Inter-broker movement: The partition replica is moved to a log directory on a different broker. Intra-broker movement: The partition replica is moved to a different log directory on the same broker. Leadership movement : Involves switching the leader of the partition's replicas. Cruise Control issues partition reassignment commands to the Kafka cluster in batches. The performance of the cluster during the rebalance is affected by the number of each type of movement contained in each batch. To configure partition reassignment commands, see Rebalance tuning options . Replica movement strategies Cluster rebalance performance is also influenced by the replica movement strategy that is applied to the batches of partition reassignment commands. By default, Cruise Control uses the BaseReplicaMovementStrategy , which applies the commands in the order in which they were generated. However, if there are some very large partition reassignments early in the proposal, this strategy can slow down the application of the other reassignments. Cruise Control provides three alternative replica movement strategies that can be applied to optimization proposals: PrioritizeSmallReplicaMovementStrategy : Order reassignments in ascending size. PrioritizeLargeReplicaMovementStrategy : Order reassignments in descending size. PostponeUrpReplicaMovementStrategy : Prioritize reassignments for replicas of partitions which have no out-of-sync replicas. These strategies can be configured as a sequence. The first strategy attempts to compare two partition reassignments using its internal logic. If the reassignments are equivalent, then it passes them to the next strategy in the sequence to decide the order, and so on. To configure replica movement strategies, see Rebalance tuning options . Rebalance tuning options Cruise Control provides several configuration options for tuning rebalance parameters.
These options are set in the following ways: As properties, in the default Cruise Control configuration, in the cruisecontrol.properties file As parameters in POST requests to the /rebalance endpoint The relevant configurations for both methods are summarized in the following table. Table 14.3. Rebalance performance tuning configuration Property and request parameter configurations Description Default Value num.concurrent.partition.movements.per.broker The maximum number of inter-broker partition movements in each partition reassignment batch 5 concurrent_partition_movements_per_broker num.concurrent.intra.broker.partition.movements The maximum number of intra-broker partition movements in each partition reassignment batch 2 concurrent_intra_broker_partition_movements num.concurrent.leader.movements The maximum number of partition leadership changes in each partition reassignment batch 1000 concurrent_leader_movements default.replication.throttle The bandwidth (in bytes per second) to assign to partition reassignment Null (no limit) replication_throttle default.replica.movement.strategies The list of strategies (in priority order) used to determine the order in which partition reassignment commands are executed for generated proposals. There are three strategies: PrioritizeSmallReplicaMovementStrategy , PrioritizeLargeReplicaMovementStrategy , and PostponeUrpReplicaMovementStrategy . For the property, use a comma-separated list of the fully qualified names of the strategy classes (add com.linkedin.kafka.cruisecontrol.executor.strategy. to the start of each class name). For the parameter, use a comma-separated list of the class names of the replica movement strategies. BaseReplicaMovementStrategy replica_movement_strategies Changing the default settings affects the length of time that the rebalance takes to complete, as well as the load placed on the Kafka cluster during the rebalance. Using lower values reduces the load but increases the amount of time taken, and vice versa. Additional resources Configurations in the Cruise Control Wiki. REST APIs in the Cruise Control Wiki. 14.8. Cruise Control configuration The config/cruisecontrol.properties file contains the configuration for Cruise Control. The file consists of properties in one of the following types: String Number Boolean You can specify and configure all the properties listed in the Configurations section of the Cruise Control Wiki. Capacity configuration Cruise Control uses capacity limits to determine if certain resource-based optimization goals are being broken. An attempted optimization fails if one or more of these resource-based goals is set as a hard goal and then broken. This prevents the optimization from being used to generate an optimization proposal. You specify capacity limits for Kafka broker resources in one of the following three .json files in cruise-control/config : capacityJBOD.json : For use in JBOD Kafka deployments (the default file). capacity.json : For use in non-JBOD Kafka deployments where each broker has the same number of CPU cores. capacityCores.json : For use in non-JBOD Kafka deployments where each broker has varying numbers of CPU cores. Set the file in the capacity.config.file property in cruisecontrol.properties . The selected file will be used for broker capacity resolution. 
For example: Capacity limits can be set for the following broker resources in the described units: DISK : Disk storage in MB CPU : CPU utilization as a percentage (0-100) or as a number of cores NW_IN : Inbound network throughput in KB per second NW_OUT : Outbound network throughput in KB per second To apply the same capacity limits to every broker monitored by Cruise Control, set capacity limits for broker ID -1 . To set different capacity limits for individual brokers, specify each broker ID and its capacity configuration. Example capacity limits configuration { "brokerCapacities":[ { "brokerId": "-1", "capacity": { "DISK": "100000", "CPU": "100", "NW_IN": "10000", "NW_OUT": "10000" }, "doc": "This is the default capacity. Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB." }, { "brokerId": "0", "capacity": { "DISK": "500000", "CPU": "100", "NW_IN": "50000", "NW_OUT": "50000" }, "doc": "This overrides the capacity for broker 0." } ] } For more information, see Populating the Capacity Configuration File in the Cruise Control Wiki. Log cleanup policy for Cruise Control Metrics topic It is important that the auto-created __CruiseControlMetrics topic (see auto-created topics ) has a log cleanup policy of DELETE rather than COMPACT . Otherwise, records that are needed by Cruise Control might be removed. As described in Section 14.3, "Deploying the Cruise Control Metrics Reporter" , setting the following options in the Kafka configuration file ensures that the DELETE log cleanup policy is correctly set: cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1 If topic auto-creation is disabled in the Cruise Control Metrics Reporter ( cruise.control.metrics.topic.auto.create=false ), but enabled in the Kafka cluster, then the __CruiseControlMetrics topic is still automatically created by the broker. In this case, you must change the log cleanup policy of the __CruiseControlMetrics topic to DELETE using the kafka-configs.sh tool. Get the current configuration of the __CruiseControlMetrics topic: bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --describe Change the log cleanup policy in the topic configuration: bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete If topic auto-creation is disabled in both the Cruise Control Metrics Reporter and the Kafka cluster, you must create the __CruiseControlMetrics topic manually and then configure it to use the DELETE log cleanup policy using the kafka-configs.sh tool. For more information, see Section 5.9, "Modifying a topic configuration" . Logging configuration Cruise Control uses log4j1 for all server logging. To change the default configuration, edit the /opt/cruise-control/config/log4j.properties file. You must restart the Cruise Control server before the changes take effect. 14.9. Generating optimization proposals When you make a POST request to the /rebalance endpoint, Cruise Control generates an optimization proposal to rebalance the Kafka cluster, based on the provided optimization goals. The optimization proposal is generated as a dry run , unless the dryrun parameter is supplied and set to false .
In "dry run mode", Cruise Control generates the optimization proposal and the estimated result, but doesn't initiate the proposal by rebalancing the cluster. You can analyze the information returned in the optimization proposal and decide whether to initiate it. The following are the key parameters for requests to the /rebalance endpoint. For information about all the available parameters, see REST APIs in the Cruise Control Wiki. dryrun type: boolean, default: true Informs Cruise Control whether you want to generate an optimization proposal only ( true ), or generate an optimization proposal and perform a cluster rebalance ( false ). When dryrun=true (the default), you can also pass the verbose parameter to return more detailed information about the state of the Kafka cluster. This includes metrics for the load on each Kafka broker before and after the optimization proposal is applied, and the differences between the before and after values. excluded_topics type: regex A regular expression that matches the topics to exclude from the calculation of the optimization proposal. goals type: list of strings, default: the configured default.goals list List of user-provided optimization goals to use to prepare the optimization proposal. If goals are not supplied, the configured default.goals list in the cruisecontrol.properties file is used. skip_hard_goals_check type: boolean, default: false By default, Cruise Control checks that the user-provided optimization goals (in the goals parameter) contain all the configured hard goals (in hard.goals ). A request fails if you supply goals that do not include all of the configured hard.goals . Set skip_hard_goals_check to true if you want to generate an optimization proposal with user-provided optimization goals that do not include all the configured hard.goals . json type: boolean, default: false Controls the type of response returned by the Cruise Control server. If not supplied, or set to false , then Cruise Control returns text formatted for display on the command line. If you want to extract elements of the returned information programmatically, set json=true . This will return JSON formatted text that can be piped to tools such as jq , or parsed in scripts and programs. verbose type: boolean, default: false Controls the level of detail in responses that are returned by the Cruise Control server. Can be used with dryrun=true . Prerequisites Kafka and ZooKeeper are running Cruise Control is running Procedure To generate a "dry run" optimization proposal formatted for the console, send a POST request to the /rebalance endpoint. To use the configured default.goals : curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance' The cached optimization proposal is immediately returned. Note If NotEnoughValidWindows is returned, Cruise Control has not yet recorded enough metrics data to generate an optimization proposal. Wait a few minutes and then resend the request. To specify user-provided optimization goals instead of the configured default.goals , supply one or more goals in the goals parameter: curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal' If it satisfies the supplied goals, the cached optimization proposal is immediately returned. Otherwise, a new optimization proposal is generated using the supplied goals; this takes longer to calculate. You can enforce this behavior by adding the ignore_proposal_cache=true parameter to the request.
To specify user-provided optimization goals that do not include all the configured hard goals, add the skip_hard_goal_check=true parameter to the request: curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true' Review the optimization proposal contained in the response. The properties describe the pending cluster rebalance operation. The proposal contains a high level summary of the proposed optimization, followed by summaries for each default optimization goal, and the expected cluster state after the proposal has executed. Pay particular attention to the following information: The Cluster load after rebalance summary. If it meets your requirements, you should assess the impact of the proposed changes using the high level summary. n inter-broker replica (y MB) moves indicates how much data will be moved across the network between brokers. The higher the value, the greater the potential performance impact on the Kafka cluster during the rebalance. n intra-broker replica (y MB) moves indicates how much data will be moved within the brokers themselves (between disks). The higher the value, the greater the potential performance impact on individual brokers (although less than that of n inter-broker replica (y MB) moves ). The number of leadership moves. This has a negligible impact on the performance of the cluster during the rebalance. Asynchronous responses The Cruise Control REST API endpoints timeout after 10 seconds by default, although proposal generation continues on the server. A timeout might occur if the most recent cached optimization proposal is not ready, or if user-provided optimization goals were specified with ignore_proposal_cache=true . To allow you to retrieve the optimization proposal at a later time, take note of the request's unique identifier, which is given in the header of responses from the /rebalance endpoint. To obtain the response using curl , specify the verbose ( -v ) option: Here is an example header: * Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001) If an optimization proposal is not ready within the timeout, you can re-submit the POST request, this time including the User-Task-ID of the original request in the header: curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance' What to do Section 14.10, "Initiating a cluster rebalance" 14.10. Initiating a cluster rebalance If you are satisfied with an optimization proposal, you can instruct Cruise Control to initiate the cluster rebalance and begin reassigning partitions, as summarized in the proposal. Leave as little time as possible between generating an optimization proposal and initiating the cluster rebalance. If some time has passed since you generated the original optimization proposal, the cluster state might have changed. 
Therefore, the cluster rebalance that is initiated might be different to the one you reviewed. If in doubt, first generate a new optimization proposal. Only one cluster rebalance, with a status of "Active", can be in progress at a time. Prerequisites You have generated an optimization proposal from Cruise Control. Procedure To execute the most recently generated optimization proposal, send a POST request to the /rebalance endpoint, with the dryrun=false parameter: curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false' Cruise Control initiates the cluster rebalance and returns the optimization proposal. Check the changes that are summarized in the optimization proposal. If the changes are not what you expect, you can stop the rebalance . Check the progress of the cluster rebalance using the /user_tasks endpoint. The cluster rebalance in progress has a status of "Active". To view all cluster rebalance tasks executed on the Cruise Control server: curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC To view the status of a particular cluster rebalance task, supply the user-task-ids parameter and the task ID: 14.11. Stopping an active cluster rebalance You can stop the cluster rebalance that is currently in progress. This instructs Cruise Control to finish the current batch of partition reassignments and then stop the rebalance. When the rebalance has stopped, completed partition reassignments have already been applied; therefore, the state of the Kafka cluster is different when compared to before the start of the rebalance operation. If further rebalancing is required, you should generate a new optimization proposal. Note The performance of the Kafka cluster in the intermediate (stopped) state might be worse than in the initial state. Prerequisites A cluster rebalance is in progress (indicated by a status of "Active"). Procedure Send a POST request to the /stop_proposal_execution endpoint: Additional resources Section 14.9, "Generating optimization proposals"
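As noted in the parameter descriptions above, json=true responses can be piped to tools such as jq . The following is a hedged sketch rather than an exact contract: the summary field names (for example, numReplicaMovements and dataToMoveMB ) are taken from recent Cruise Control releases and may vary between versions. It requests a dry-run proposal as JSON and extracts the headline numbers before you decide to rerun the request with dryrun=false .

# Generate a dry-run proposal in JSON and summarize the proposed movements.
curl -s -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?json=true' \
  | jq '.summary | {numReplicaMovements, dataToMoveMB, numLeaderMovements, recentWindows}'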
[ "sudo mkdir /opt/cruise-control", "unzip amq-streams-y.y.y-cruise-control-bin.zip -d /opt/cruise-control", "sudo chown -R kafka:kafka /opt/cruise-control", "/opt/kafka/bin/kafka-server-stop.sh", "cp /opt/cruise-control/libs/ cruise-control-metrics-reporter-y.y.yyy.redhat-0000x.jar /opt/kafka/libs", "metric.reporters=com.linkedin.kafka.cruisecontrol.metricsreporter.CruiseControlMetricsReporter", "cruise.control.metrics.topic.auto.create=true cruise.control.metrics.topic.num.partitions=1 cruise.control.metrics.topic.replication.factor=1", "/opt/kafka/bin/kafka-server-start.sh", "The Kafka cluster to control. bootstrap.servers=localhost:9092 1 The replication factor of Kafka metric sample store topic sample.store.topic.replication.factor=2 2 The configuration for the BrokerCapacityConfigFileResolver (supports JBOD, non-JBOD, and heterogeneous CPU core capacities) #capacity.config.file=config/capacity.json #capacity.config.file=config/capacityCores.json capacity.config.file=config/capacityJBOD.json 3 The list of goals to optimize the Kafka cluster for with pre-computed proposals default.goals={List of default optimization goals} 4 The list of supported goals goals={list of master optimization goals} 5 The list of supported hard goals hard.goals={List of hard goals} 6 How often should the cached proposal be expired and recalculated if necessary proposal.expiration.ms=60000 7 The zookeeper connect of the Kafka cluster zookeeper.connect=localhost:2181 8", "cd /opt/cruise-control/ ./bin/cruise-control-start.sh config/cruisecontrol.properties PORT", "curl 'http://HOST:PORT/kafkacruisecontrol/state'", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal; PreferredLeaderElectionGoal", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal", "RackAwareGoal; MinTopicLeadersPerBrokerGoal; ReplicaCapacityGoal; DiskCapacityGoal; NetworkInboundCapacityGoal; NetworkOutboundCapacityGoal; CpuCapacityGoal; ReplicaDistributionGoal; PotentialNwOutGoal; DiskUsageDistributionGoal; NetworkInboundUsageDistributionGoal; NetworkOutboundUsageDistributionGoal; CpuUsageDistributionGoal; TopicReplicaDistributionGoal; LeaderReplicaDistributionGoal; LeaderBytesInDistributionGoal", "capacity.config.file=config/capacityJBOD.json", "{ \"brokerCapacities\":[ { \"brokerId\": \"-1\", \"capacity\": { \"DISK\": \"100000\", \"CPU\": \"100\", \"NW_IN\": \"10000\", \"NW_OUT\": \"10000\" }, \"doc\": \"This is the default capacity. 
Capacity unit used for disk is in MB, cpu is in percentage, network throughput is in KB.\" }, { \"brokerId\": \"0\", \"capacity\": { \"DISK\": \"500000\", \"CPU\": \"100\", \"NW_IN\": \"50000\", \"NW_OUT\": \"50000\" }, \"doc\": \"This overrides the capacity for broker 0.\" } ] }", "bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --describe", "bin/kafka-configs.sh --bootstrap-server <BrokerAddress> --entity-type topics --entity-name __CruiseControlMetrics --alter --add-config cleanup.policy=delete", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?goals=RackAwareGoal,ReplicaCapacityGoal,ReplicaDistributionGoal&skip_hard_goal_check=true'", "curl -v -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "* Connected to cruise-control-server (::1) port 9090 (#0) > POST /kafkacruisecontrol/rebalance HTTP/1.1 > Host: cc-host:9090 > User-Agent: curl/7.70.0 > Accept: / > * Mark bundle as not supporting multiuse < HTTP/1.1 200 OK < Date: Mon, 01 Jun 2020 15:19:26 GMT < Set-Cookie: JSESSIONID=node01wk6vjzjj12go13m81o7no5p7h9.node0; Path=/ < Expires: Thu, 01 Jan 1970 00:00:00 GMT < User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201 < Content-Type: text/plain;charset=utf-8 < Cruise-Control-Version: 2.0.103.redhat-00002 < Cruise-Control-Commit_Id: 58975c9d5d0a78dd33cd67d4bcb497c9fd42ae7c < Content-Length: 12368 < Server: Jetty(9.4.26.v20200117-redhat-00001)", "curl -v -X POST -H 'User-Task-ID: 274b8095-d739-4840-85b9-f4cfaaf5c201' 'cruise-control-server:9090/kafkacruisecontrol/rebalance'", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/rebalance?dryrun=false'", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' USER TASK ID CLIENT ADDRESS START TIME STATUS REQUEST URL c459316f-9eb5-482f-9d2d-97b5a4cd294d 0:0:0:0:0:0:0:1 2020-06-01_16:10:29 UTC Active POST /kafkacruisecontrol/rebalance?dryrun=false 445e2fc3-6531-4243-b0a6-36ef7c5059b4 0:0:0:0:0:0:0:1 2020-06-01_14:21:26 UTC Completed GET /kafkacruisecontrol/state?json=true 05c37737-16d1-4e33-8e2b-800dee9f1b01 0:0:0:0:0:0:0:1 2020-06-01_14:36:11 UTC Completed GET /kafkacruisecontrol/state?json=true aebae987-985d-4871-8cfb-6134ecd504ab 0:0:0:0:0:0:0:1 2020-06-01_16:10:04 UTC", "curl 'cruise-control-server:9090/kafkacruisecontrol/user_tasks?user_task_ids=c459316f-9eb5-482f-9d2d-97b5a4cd294d'", "curl -X POST 'cruise-control-server:9090/kafkacruisecontrol/stop_proposal_execution'" ]
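Building on the /user_tasks calls captured above, a small helper loop can watch a rebalance until it is no longer listed as Active. This is a hedged sketch that greps the plain-text task listing shown earlier in this chapter; the task ID is the example value from that listing, so substitute your own.

# Poll until the rebalance task is no longer reported as Active.
TASK_ID=c459316f-9eb5-482f-9d2d-97b5a4cd294d
while curl -s 'cruise-control-server:9090/kafkacruisecontrol/user_tasks' | grep "${TASK_ID}" | grep -q 'Active'; do
  echo "rebalance ${TASK_ID} still active; checking again in 30 seconds"
  sleep 30
done
echo "rebalance ${TASK_ID} is no longer active"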
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_amq_streams_on_rhel/assembly-cc-cluster-rebalancing-str
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/high_availability_for_compute_instances/making-open-source-more-inclusive
Installation overview
Installation overview OpenShift Container Platform 4.16 Overview content for installing OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "oc get nodes", "NAME STATUS ROLES AGE VERSION example-compute1.example.com Ready worker 13m v1.21.6+bb8d50a example-compute2.example.com Ready worker 13m v1.21.6+bb8d50a example-compute4.example.com Ready worker 14m v1.21.6+bb8d50a example-control1.example.com Ready master 52m v1.21.6+bb8d50a example-control2.example.com Ready master 55m v1.21.6+bb8d50a example-control3.example.com Ready master 55m v1.21.6+bb8d50a", "oc get machines -A", "NAMESPACE NAME PHASE TYPE REGION ZONE AGE openshift-machine-api example-zbbt6-master-0 Running 95m openshift-machine-api example-zbbt6-master-1 Running 95m openshift-machine-api example-zbbt6-master-2 Running 95m openshift-machine-api example-zbbt6-worker-0-25bhp Running 49m openshift-machine-api example-zbbt6-worker-0-8b4c2 Running 49m openshift-machine-api example-zbbt6-worker-0-jkbqt Running 49m openshift-machine-api example-zbbt6-worker-0-qrl5b Running 49m", "capabilities: baselineCapabilitySet: v4.11 1 additionalEnabledCapabilities: 2 - CSISnapshot - Console - Storage", "oc get clusteringresses.ingress.openshift.io -n openshift-ingress-operator default -o yaml", "oc get deployment -n openshift-ingress", "oc get network/cluster -o jsonpath='{.status.clusterNetwork[*]}'", "map[cidr:10.128.0.0/14 hostPrefix:23]", "oc get clusterversion version -o jsonpath='{.spec.capabilities}{\"\\n\"}{.status.capabilities}{\"\\n\"}'", "{\"additionalEnabledCapabilities\":[\"openshift-samples\"],\"baselineCapabilitySet\":\"None\"} {\"enabledCapabilities\":[\"openshift-samples\"],\"knownCapabilities\":[\"CSISnapshot\",\"Console\",\"Insights\",\"Storage\",\"baremetal\",\"marketplace\",\"openshift-samples\"]}", "oc patch clusterversion version --type merge -p '{\"spec\":{\"capabilities\":{\"baselineCapabilitySet\":\"vCurrent\"}}}' 1", "oc get clusterversion version -o jsonpath='{.spec.capabilities.additionalEnabledCapabilities}{\"\\n\"}'", "[\"openshift-samples\"]", "oc patch clusterversion/version --type merge -p '{\"spec\":{\"capabilities\":{\"additionalEnabledCapabilities\":[\"openshift-samples\", \"marketplace\"]}}}'", "oc get clusterversion version -o jsonpath='{.status.conditions[?(@.type==\"ImplicitlyEnabledCapabilities\")]}{\"\\n\"}'", "{\"lastTransitionTime\":\"2022-07-22T03:14:35Z\",\"message\":\"The following capabilities could not be disabled: openshift-samples\",\"reason\":\"CapabilitiesImplicitlyEnabled\",\"status\":\"True\",\"type\":\"ImplicitlyEnabledCapabilities\"}", "oc adm release extract --registry-config \"USD{pullsecret_file}\" --command=openshift-install-fips --to \"USD{extract_dir}\" USD{RELEASE_IMAGE}", "tar -xvf openshift-install-rhel9-amd64.tar.gz" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/installation_overview/index
Monitoring APIs
Monitoring APIs OpenShift Container Platform 4.16 Reference guide for monitoring APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/index
5.2. scl command does not exist
5.2. scl command does not exist This error message is usually caused by a missing package scl-utils . To install the scl-utils package, run the following command: For more information, see Section 1.3, "Enabling Support for Software Collections" .
[ "yum install scl-utils" ]
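After the package is installed, you can confirm that the scl command works. The collection name below is only an example; substitute a Software Collection that is actually installed on your system.

# List the Software Collections that are installed on the system.
scl --list

# Run a single command inside a collection's environment (the collection name is illustrative).
scl enable rh-python38 'python --version'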
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-scl_command_does_not_exist
Chapter 4. About OpenShift Kubernetes Engine
Chapter 4. About OpenShift Kubernetes Engine As of 27 April 2020, Red Hat has decided to rename Red Hat OpenShift Container Engine to Red Hat OpenShift Kubernetes Engine to better communicate what value the product offering delivers. Red Hat OpenShift Kubernetes Engine is a product offering from Red Hat that lets you use an enterprise class Kubernetes platform as a production platform for launching containers. You download and install OpenShift Kubernetes Engine the same way as OpenShift Container Platform as they are the same binary distribution, but OpenShift Kubernetes Engine offers a subset of the features that OpenShift Container Platform offers. 4.1. Similarities and differences You can see the similarities and differences between OpenShift Kubernetes Engine and OpenShift Container Platform in the following table: Table 4.1. Product comparison for OpenShift Kubernetes Engine and OpenShift Container Platform OpenShift Kubernetes Engine OpenShift Container Platform Fully Automated Installers Yes Yes Over the Air Smart Upgrades Yes Yes Enterprise Secured Kubernetes Yes Yes Kubectl and oc automated command line Yes Yes Operator Lifecycle Manager (OLM) Yes Yes Administrator Web console Yes Yes OpenShift Virtualization Yes Yes User Workload Monitoring Yes Cluster Monitoring Yes Yes Cost Management SaaS Service Yes Yes Platform Logging Yes Developer Web Console Yes Developer Application Catalog Yes Source to Image and Builder Automation (Tekton) Yes OpenShift Service Mesh (Maistra, Kiali, and Jaeger) Yes OpenShift distributed tracing (Jaeger) Yes OpenShift Serverless (Knative) Yes OpenShift Pipelines (Jenkins and Tekton) Yes Embedded Component of IBM Cloud(R) Pak and RHT MW Bundles Yes OpenShift sandboxed containers Yes 4.1.1. Core Kubernetes and container orchestration OpenShift Kubernetes Engine offers full access to an enterprise-ready Kubernetes environment that is easy to install and offers an extensive compatibility test matrix with many of the software elements that you might use in your data center. OpenShift Kubernetes Engine offers the same service level agreements, bug fixes, and common vulnerabilities and errors protection as OpenShift Container Platform. OpenShift Kubernetes Engine includes a Red Hat Enterprise Linux (RHEL) Virtual Datacenter and Red Hat Enterprise Linux CoreOS (RHCOS) entitlement that allows you to use an integrated Linux operating system with container runtime from the same technology provider. The OpenShift Kubernetes Engine subscription is compatible with the Red Hat OpenShift support for Windows Containers subscription. 4.1.2. Enterprise-ready configurations OpenShift Kubernetes Engine uses the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other standard protections that OpenShift Container Platform offers are available in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers full access to the integrated monitoring solution that OpenShift Container Platform uses, which is based on Prometheus and offers deep coverage and alerting for common Kubernetes issues. OpenShift Kubernetes Engine uses the same installation and upgrade automation as OpenShift Container Platform. 4.1.3. 
Standard infrastructure services With an OpenShift Kubernetes Engine subscription, you receive support for all storage plugins that OpenShift Container Platform supports. In terms of networking, OpenShift Kubernetes Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to use any third-party SDN that supports OpenShift Container Platform. It also allows you to use the included Open vSwitch software defined network to its fullest extent. OpenShift Kubernetes Engine allows you to take full advantage of the OVN Kubernetes overlay, Multus, and Multus plugins that are supported on OpenShift Container Platform. OpenShift Kubernetes Engine allows customers to use a Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. You can also use the Route API objects that are found in OpenShift Container Platform, including its sophisticated integration with the HAproxy edge routing layer as an out of the box Kubernetes Ingress Controller. 4.1.4. Core user experience OpenShift Kubernetes Engine users have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift Container Platform templates. OpenShift Kubernetes Engine users can use both the oc and kubectl command line interfaces. OpenShift Kubernetes Engine also offers an administrator web-based console that shows all aspects of the deployed container services and offers a container-as-a service experience. OpenShift Kubernetes Engine grants access to the Operator Life Cycle Manager that helps you control access to content on the cluster and life cycle operator-enabled services that you use. With an OpenShift Kubernetes Engine subscription, you receive access to the Kubernetes namespace, the OpenShift Project API object, and cluster-level Prometheus monitoring metrics and events. 4.1.5. Maintained and curated content With an OpenShift Kubernetes Engine subscription, you receive access to the OpenShift Container Platform content from the Red Hat Ecosystem Catalog and Red Hat Connect ISV marketplace. You can access all maintained and curated content that the OpenShift Container Platform eco-system offers. 4.1.6. OpenShift Data Foundation compatible OpenShift Kubernetes Engine is compatible and supported with your purchase of OpenShift Data Foundation. 4.1.7. Red Hat Middleware compatible OpenShift Kubernetes Engine is compatible and supported with individual Red Hat Middleware product solutions. Red Hat Middleware Bundles that include OpenShift embedded in them only contain OpenShift Container Platform. 4.1.8. OpenShift Serverless OpenShift Kubernetes Engine does not include OpenShift Serverless support. Use OpenShift Container Platform for this support. 4.1.9. Quay Integration compatible OpenShift Kubernetes Engine is compatible and supported with a Red Hat Quay purchase. 4.1.10. OpenShift Virtualization OpenShift Kubernetes Engine includes support for the Red Hat product offerings derived from the kubevirt.io open source project. 4.1.11. Advanced cluster management OpenShift Kubernetes Engine is compatible with your additional purchase of Red Hat Advanced Cluster Management (RHACM) for Kubernetes. An OpenShift Kubernetes Engine subscription does not offer a cluster-wide log aggregation solution or support Elasticsearch, Fluentd, or Kibana-based logging solutions. 
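The Kubernetes NetworkPolicy support described under standard infrastructure services can be exercised with a plain manifest. The following sketch is illustrative rather than part of this guide; the namespace my-app and the role=frontend label are assumed example values:
# Allow ingress to pods in the (hypothetical) my-app namespace only from pods labeled role=frontend
oc apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
  namespace: my-app
spec:
  podSelector: {}        # select every pod in the namespace
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
EOF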
Red Hat OpenShift Service Mesh capabilities derived from the open-source istio.io and kiali.io projects that offer OpenTracing observability for containerized services on OpenShift Container Platform are not supported in OpenShift Kubernetes Engine. 4.1.12. Advanced networking The standard networking solutions in OpenShift Container Platform are supported with an OpenShift Kubernetes Engine subscription. The OpenShift Container Platform Kubernetes CNI plugin for automation of multi-tenant network segmentation between OpenShift Container Platform projects is entitled for use with OpenShift Kubernetes Engine. OpenShift Kubernetes Engine offers all the granular control of the source IP addresses that are used by application services on the cluster. Those egress IP address controls are entitled for use with OpenShift Kubernetes Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in OpenShift Container Platform. That ingress solution is supported in OpenShift Kubernetes Engine. OpenShift Kubernetes Engine users are supported for the Kubernetes ingress control object, which offers integrations with public cloud providers. Red Hat Service Mesh, which is derived from the istio.io open source project, is not supported in OpenShift Kubernetes Engine. Also, the Kourier Ingress Controller found in OpenShift Serverless is not supported on OpenShift Kubernetes Engine. 4.1.13. OpenShift sandboxed containers OpenShift Kubernetes Engine does not include OpenShift sandboxed containers. Use OpenShift Container Platform for this support. 4.1.14. Developer experience With OpenShift Kubernetes Engine, the following capabilities are not supported: The OpenShift Container Platform developer experience utilities and tools, such as Red Hat OpenShift Dev Spaces. The OpenShift Container Platform pipeline feature that integrates a streamlined, Kubernetes-enabled Jenkins and Tekton experience in the user's project space. The OpenShift Container Platform source-to-image feature, which allows you to easily deploy source code, dockerfiles, or container images across the cluster. Build strategies, builder pods, or Tekton for end user container deployments. The odo developer command line. The developer persona in the OpenShift Container Platform web console. 4.1.15. Feature summary The following table is a summary of the feature availability in OpenShift Kubernetes Engine and OpenShift Container Platform. Where applicable, it includes the name of the Operator that enables a feature. Table 4.2. 
Features in OpenShift Kubernetes Engine and OpenShift Container Platform Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Fully Automated Installers (IPI) Included Included N/A Customizable Installers (UPI) Included Included N/A Disconnected Installation Included Included N/A Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) entitlement Included Included N/A Existing RHEL manual attach to cluster (BYO) Included Included N/A CRIO Runtime Included Included N/A Over the Air Smart Upgrades and Operating System (RHCOS) Management Included Included N/A Enterprise Secured Kubernetes Included Included N/A Kubectl and oc automated command line Included Included N/A Auth Integrations, RBAC, SCC, Multi-Tenancy Admission Controller Included Included N/A Operator Lifecycle Manager (OLM) Included Included N/A Administrator web console Included Included N/A OpenShift Virtualization Included Included OpenShift Virtualization Operator Compliance Operator provided by Red Hat Included Included Compliance Operator File Integrity Operator Included Included File Integrity Operator Gatekeeper Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Gatekeeper Operator Klusterlet Not Included - Requires separate subscription Not Included - Requires separate subscription N/A Kube Descheduler Operator provided by Red Hat Included Included Kube Descheduler Operator Local Storage provided by Red Hat Included Included Local Storage Operator Node Feature Discovery provided by Red Hat Included Included Node Feature Discovery Operator Performance Profile controller Included Included N/A PTP Operator provided by Red Hat Included Included PTP Operator Service Telemetry Operator provided by Red Hat Not Included Included Service Telemetry Operator SR-IOV Network Operator Included Included SR-IOV Network Operator Vertical Pod Autoscaler Included Included Vertical Pod Autoscaler Cluster Monitoring (Prometheus) Included Included Cluster Monitoring Device Manager (for example, GPU) Included Included N/A Log Forwarding Included Included Red Hat OpenShift Logging Operator Telemeter and Insights Connected Experience Included Included N/A Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name OpenShift Cloud Manager SaaS Service Included Included N/A OVS and OVN SDN Included Included N/A MetalLB Included Included MetalLB Operator HAProxy Ingress Controller Included Included N/A Red Hat OpenStack Platform (RHOSP) Kuryr Integration Included Included N/A Ingress Cluster-wide Firewall Included Included N/A Egress Pod and Namespace Granular Control Included Included N/A Ingress Non-Standard Ports Included Included N/A Multus and Available Multus Plugins Included Included N/A Network Policies Included Included N/A IPv6 Single and Dual Stack Included Included N/A CNI Plugin ISV Compatibility Included Included N/A CSI Plugin ISV Compatibility Included Included N/A RHT and IBM(R) middleware a la carte purchases (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A ISV or Partner Operator and Container Compatibility (not included in OpenShift Container Platform or OpenShift Kubernetes Engine) Included Included N/A Embedded OperatorHub Included Included N/A Embedded Marketplace Included Included N/A Quay Compatibility (not included) Included Included N/A OpenShift API for Data Protection (OADP) Included Included OADP Operator RHEL Software Collections and RHT SSO Common Service 
(included) Included Included N/A Embedded Registry Included Included N/A Helm Included Included N/A User Workload Monitoring Not Included Included N/A Cost Management SaaS Service Included Included Cost Management Metrics Operator Platform Logging Not Included Included Red Hat OpenShift Logging Operator OpenShift Elasticsearch Operator provided by Red Hat Not Included Cannot be run standalone N/A Developer Web Console Not Included Included N/A Developer Application Catalog Not Included Included N/A Source to Image and Builder Automation (Tekton) Not Included Included N/A OpenShift Service Mesh Not Included Included OpenShift Service Mesh Operator Service Binding Operator Not Included Included Service Binding Operator Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Red Hat OpenShift Serverless Not Included Included OpenShift Serverless Operator Web Terminal provided by Red Hat Not Included Included Web Terminal Operator Red Hat OpenShift Pipelines Operator Not Included Included OpenShift Pipelines Operator Embedded Component of IBM Cloud(R) Pak and RHT MW Bundles Not Included Included N/A Red Hat OpenShift GitOps Not Included Included OpenShift GitOps Red Hat OpenShift Dev Spaces Not Included Included Red Hat OpenShift Dev Spaces Red Hat OpenShift Local Not Included Included N/A Quay Bridge Operator provided by Red Hat Not Included Included Quay Bridge Operator Quay Container Security provided by Red Hat Not Included Included Quay Operator Red Hat OpenShift distributed tracing platform Not Included Included Red Hat OpenShift distributed tracing platform Operator Red Hat OpenShift Kiali Not Included Included Kiali Operator Metering provided by Red Hat (deprecated) Not Included Included N/A Migration Toolkit for Containers Operator Not Included Included Migration Toolkit for Containers Operator Cost management for OpenShift Not included Included N/A JBoss Web Server provided by Red Hat Not included Included JWS Operator Red Hat Build of Quarkus Not included Included N/A Kourier Ingress Controller Not included Included N/A RHT Middleware Bundles Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A IBM Cloud(R) Pak Sub Compatibility (not included in OpenShift Container Platform) Not included Included N/A OpenShift Do ( odo ) Not included Included N/A Source to Image and Tekton Builders Not included Included N/A OpenShift Serverless FaaS Not included Included N/A IDE Integrations Not included Included N/A OpenShift sandboxed containers Not included Not included OpenShift sandboxed containers Operator Windows Machine Config Operator Community Windows Machine Config Operator included - no subscription required Red Hat Windows Machine Config Operator included - Requires separate subscription Windows Machine Config Operator Red Hat Quay Not Included - Requires separate subscription Not Included - Requires separate subscription Quay Operator Red Hat Advanced Cluster Management Not Included - Requires separate subscription Not Included - Requires separate subscription Advanced Cluster Management for Kubernetes Red Hat Advanced Cluster Security Not Included - Requires separate subscription Not Included - Requires separate subscription N/A OpenShift Data Foundation Not Included - Requires separate subscription Not Included - Requires separate subscription OpenShift Data Foundation Feature OpenShift Kubernetes Engine OpenShift Container Platform Operator name Ansible Automation Platform Resource Operator Not Included - Requires separate 
subscription Not Included - Requires separate subscription Ansible Automation Platform Resource Operator Business Automation provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Business Automation Operator Data Grid provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Data Grid Operator Red Hat Integration provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration Operator Red Hat Integration - 3Scale provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale Red Hat Integration - 3Scale APICast gateway provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription 3scale APIcast Red Hat Integration - AMQ Broker Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Broker Red Hat Integration - AMQ Broker LTS Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Interconnect Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Interconnect Red Hat Integration - AMQ Online Not Included - Requires separate subscription Not Included - Requires separate subscription Red Hat Integration - AMQ Streams Not Included - Requires separate subscription Not Included - Requires separate subscription AMQ Streams Red Hat Integration - Camel K Not Included - Requires separate subscription Not Included - Requires separate subscription Camel K Red Hat Integration - Fuse Console Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Console Red Hat Integration - Fuse Online Not Included - Requires separate subscription Not Included - Requires separate subscription Fuse Online Red Hat Integration - Service Registry Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Service Registry API Designer provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription API Designer JBoss EAP provided by Red Hat Not Included - Requires separate subscription Not Included - Requires separate subscription JBoss EAP Smart Gateway Operator Not Included - Requires separate subscription Not Included - Requires separate subscription Smart Gateway Operator Kubernetes NMState Operator Included Included N/A 4.2. Subscription limitations OpenShift Kubernetes Engine is a subscription offering that provides OpenShift Container Platform with a limited set of supported features at a lower list price. OpenShift Kubernetes Engine and OpenShift Container Platform are the same product and, therefore, all software and features are delivered in both. There is only one download, OpenShift Container Platform. OpenShift Kubernetes Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason.
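Cluster Monitoring (Prometheus) is listed as included in both offerings. A quick, read-only way to confirm that the bundled monitoring stack is healthy on a running cluster is sketched below; it assumes you are logged in with cluster-admin privileges and uses the standard openshift-monitoring namespace that ships with the platform:
# Confirm the monitoring cluster Operator reports Available
oc get clusteroperator monitoring
# List the Prometheus, Alertmanager, and exporter pods that make up the stack
oc get pods -n openshift-monitoring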
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/about/oke-about
Chapter 4. Tuna
Chapter 4. Tuna You can use the Tuna tool to adjust scheduler tunables, tune thread priority, IRQ handlers, and isolate CPU cores and sockets. Tuna aims to reduce the complexity of performing tuning tasks. After installing the tuna package, use the tuna command without any arguments to start the Tuna graphical user interface (GUI). Use the tuna -h command to display available command-line interface (CLI) options. Note that the tuna (8) manual page distinguishes between action and modifier options. The Tuna GUI and CLI provide equivalent functionality. The GUI displays the CPU topology on one screen to help you identify problems. The Tuna GUI also allows you to make changes to the running threads, and see the results of those changes immediately. In the CLI, Tuna accepts multiple command-line parameters and processes them sequentially. You can use such commands in application initialization scripts as configuration commands. The Monitoring tab of the Tuna GUI Important Use the tuna --save= filename command with a descriptive file name to save the current configuration. Note that this command does not save every option that Tuna can change, but saves the kernel thread changes only. Any processes that are not currently running when they are changed are not saved. 4.1. Reviewing the System with Tuna Before you make any changes, you can use Tuna to show you what is currently happening on the system. To view the current policies and priorities, use the tuna --show_threads command: To show only a specific thread corresponding to a PID or matching a command name, add the --threads option before --show_threads : The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. To view the current interrupt requests (IRQs) and their affinity, use the tuna --show_irqs command: To show only a specific interrupt request corresponding to an IRQ number or matching an IRQ user name, add the --irqs option before --show_irqs : The number_or_user_list argument is a list of comma-separated IRQ numbers or user-name patterns.
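The filtering and save options described above can be combined in a short shell session to capture a baseline before making changes. This is a minimal sketch that uses only the options covered in this chapter; the thread pattern ksoftirqd*, the IRQ user name i8042, and the output file name are example values:
# Show only the per-CPU softirq kernel threads
tuna --threads='ksoftirqd*' --show_threads
# Show only the interrupt registered by the i8042 controller
tuna --irqs=i8042 --show_irqs
# Save the current kernel thread configuration for later comparison (kernel threads only)
tuna --save=tuna-baseline.conf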
[ "tuna --show_threads thread pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0", "tuna --threads= pid_or_cmd_list --show_threads", "tuna --show_irqs users affinity 0 timer 0 1 i8042 0 7 parport0 0", "tuna --irqs= number_or_user_list --show_irqs" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/chap-tuna
Chapter 5. AWS Kinesis Source
Chapter 5. AWS Kinesis Source Receive data from AWS Kinesis. 5.1. Configuration Options The following table summarizes the configuration options available for the aws-kinesis-source Kamelet: Property Name Description Type Default Example accessKey * Access Key The access key obtained from AWS string region * AWS Region The AWS region to connect to string "eu-west-1" secretKey * Secret Key The secret key obtained from AWS string stream * Stream Name The Kinesis stream that you want to access (needs to be created in advance) string Note Fields marked with an asterisk (*) are mandatory. 5.2. Dependencies At runtime, the aws-kinesis-source Kamelet relies upon the presence of the following dependencies: camel:gson camel:kamelet camel:aws2-kinesis 5.3. Usage This section describes how you can use the aws-kinesis-source . 5.3.1. Knative Source You can use the aws-kinesis-source Kamelet as a Knative source by binding it to a Knative object. aws-kinesis-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-kinesis-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-kinesis-source properties: accessKey: "The Access Key" region: "eu-west-1" secretKey: "The Secret Key" stream: "The Stream Name" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 5.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 5.3.1.2. Procedure for using the cluster CLI Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-kinesis-source-binding.yaml 5.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 5.3.2. Kafka Source You can use the aws-kinesis-source Kamelet as a Kafka source by binding it to a Kafka topic. aws-kinesis-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-kinesis-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-kinesis-source properties: accessKey: "The Access Key" region: "eu-west-1" secretKey: "The Secret Key" stream: "The Stream Name" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 5.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 5.3.2.2. Procedure for using the cluster CLI Save the aws-kinesis-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f aws-kinesis-source-binding.yaml 5.3.2.3. 
Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind aws-kinesis-source -p "source.accessKey=The Access Key" -p "source.region=eu-west-1" -p "source.secretKey=The Secret Key" -p "source.stream=The Stream Name" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 5.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-kinesis-source.kamelet.yaml
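Whichever procedure you use, it can help to confirm that the stream is reachable and that the binding was accepted before checking for messages. A minimal sketch, assuming the AWS CLI is configured locally and that my-kinesis-stream stands in for your stream name:
# Verify the Kinesis stream exists and is reachable with the supplied credentials (example stream name)
aws kinesis describe-stream-summary --stream-name my-kinesis-stream --region eu-west-1
# Apply the binding and confirm that the KameletBinding resource was created
oc apply -f aws-kinesis-source-binding.yaml
oc get kameletbindings aws-kinesis-source-binding -o yaml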
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-kinesis-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-kinesis-source properties: accessKey: \"The Access Key\" region: \"eu-west-1\" secretKey: \"The Secret Key\" stream: \"The Stream Name\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f aws-kinesis-source-binding.yaml", "kamel bind aws-kinesis-source -p \"source.accessKey=The Access Key\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" -p \"source.stream=The Stream Name\" channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-kinesis-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-kinesis-source properties: accessKey: \"The Access Key\" region: \"eu-west-1\" secretKey: \"The Secret Key\" stream: \"The Stream Name\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f aws-kinesis-source-binding.yaml", "kamel bind aws-kinesis-source -p \"source.accessKey=The Access Key\" -p \"source.region=eu-west-1\" -p \"source.secretKey=The Secret Key\" -p \"source.stream=The Stream Name\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/aws-kinesis-source
Chapter 1. Overview of deploying in external mode
Chapter 1. Overview of deploying in external mode Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters running on any platform. See Planning your deployment for more information. For instructions regarding how to install a RHCS cluster, see the installation guide. Follow these steps to deploy OpenShift Data Foundation in external mode: Deploy OpenShift Data Foundation using Red Hat Ceph Storage.
1.1. Disaster recovery requirements Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution:
A valid Red Hat OpenShift Data Foundation Advanced subscription
A valid Red Hat Advanced Cluster Management for Kubernetes subscription
For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions. For detailed disaster recovery solution requirements, see the Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and the Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
1.2. Network ports required between OpenShift Container Platform and Ceph when using external mode deployment The following TCP ports must be open from the OpenShift Container Platform cluster (source) to the RHCS cluster (destination):
6789, 3300: Ceph Monitor
6800-7300: Ceph OSD, MGR, MDS
9283: Ceph MGR Prometheus Exporter
For more information about why these ports are required, see Chapter 2. Ceph network configuration of the RHCS Configuration Guide.
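Reachability of these ports can be spot-checked from a host on the OpenShift Container Platform network before you deploy. The sketch below is a convenience check rather than part of the documented procedure; <ceph-node> stands in for the address of an RHCS node:
# Ceph Monitor ports (messenger v1 and v2)
nc -z -w 3 <ceph-node> 6789
nc -z -w 3 <ceph-node> 3300
# One port from the OSD/MGR/MDS range
nc -z -w 3 <ceph-node> 6800
# Ceph MGR Prometheus exporter endpoint
curl -s http://<ceph-node>:9283/metrics | head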
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_in_external_mode/overview-of-deploying-in-external-mode_rhodf
Installing on RHV
Installing on RHV OpenShift Container Platform 4.13 Installing OpenShift Container Platform on Red Hat Virtualization Red Hat OpenShift Documentation Team
[ "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "rhv-env.virtlab.example.com:443", "<username>@<profile> 1", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "arp 10.35.1.19", "10.35.1.19 (10.35.1.19) -- no entry", "api.<cluster-name>.<base-domain> <ip-address> 1 *.apps.<cluster-name>.<base-domain> <ip-address> 2", "api.my-cluster.virtlab.example.com 10.35.1.19 *.apps.my-cluster.virtlab.example.com 10.35.1.20", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "https://<engine-fqdn>/ovirt-engine/api 1", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "<username>@<profile> 1", "ocpadmin@internal", "[ovirt_ca_bundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN 
CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "[additionalTrustBundle]: | -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA> -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- <INTERMEDIATE_CA> -----END CERTIFICATE-----", "./openshift-install create install-config --dir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: ovirt: sparse: false 1 format: raw 2 replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: ovirt: sparse: false 3 format: raw 4 replicas: 3 metadata: creationTimestamp: null name: my-cluster networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 5 serviceNetwork: - 172.30.0.0/16 platform: ovirt: api_vips: - 10.0.0.10 ingress_vips: - 10.0.0.11 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 publish: External pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com metadata: name: test-cluster platform: ovirt: api_vips: - 10.46.8.230 ingress_vips: - 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed12345 AAAA", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: ovirt: cpu: cores: 4 sockets: 2 memoryMB: 65536 osDisk: sizeGB: 100 vmType: server replicas: 3 compute: - name: worker platform: ovirt: cpu: cores: 4 sockets: 4 memoryMB: 65536 osDisk: sizeGB: 200 vmType: server replicas: 5 metadata: name: test-cluster platform: ovirt: api_vips: - 10.46.8.230 ingress_vips: - 10.46.8.232 ovirt_cluster_id: 68833f9f-e89c-4891-b768-e2ba0815b76b ovirt_storage_domain_id: ed7b0f4e-0e96-492a-8fff-279213ee1468 ovirt_network_name: ovirtmgmt vnicProfileID: 3fa86930-0be5-4052-b667-b79f0a729692 pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA", "platform: ovirt: affinityGroups: - description: AffinityGroup to place each compute machine on a separate host enforcing: true name: compute priority: 3 - description: AffinityGroup to place each control plane machine on a separate host enforcing: true name: controlplane priority: 5 - description: AffinityGroup to place worker nodes and control plane nodes on separate hosts enforcing: false name: openshift priority: 5 compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: ovirt: affinityGroupsNames: - compute - openshift replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: ovirt: affinityGroupsNames: - controlplane - openshift replicas: 3", "platform: ovirt: affinityGroups: [] compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: ovirt: affinityGroupsNames: [] replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: ovirt: affinityGroupsNames: [] replicas: 3", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", 
"<machine-pool>: platform: ovirt: affinityGroupNames: - compute - clusterWideNonEnforcing", "<machine-pool>: platform: ovirt: affinityGroupNames: []", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "console-openshift-console.apps.<clustername>.<basedomain> 1", "console-openshift-console.apps.my-cluster.virtlab.example.com", "systemctl restart kubelet", "oc get --insecure-skip-tls-verify --server=https://localhost:<port> --raw=/metrics", "oc login -u kubeadmin -p *** <apiurl>", "./openshift-install wait-for bootstrap-complete", "./openshift-install destroy bootstrap", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "ovirt_url: https://ovirt.example.com/ovirt-engine/api 1 ovirt_fqdn: ovirt.example.com 2 ovirt_pem_url: \"\" ovirt_username: ocpadmin@internal ovirt_password: super-secret-password 3 ovirt_insecure: true", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "mkdir playbooks", "cd playbooks", "xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml'", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- 
control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? 
for help] <********>", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . 
└── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug", "curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2", "curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. 
IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dnf update python3 ansible", "dnf install ovirt-ansible-image-template", "dnf install ovirt-ansible-vm-infra", "export ASSETS_DIR=./wrk", "curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1", "sudo chmod 0644 /tmp/ca.pem", "sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem", "sudo update-ca-trust", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir playbooks", "cd playbooks", "xargs -n 1 curl -O <<< ' 
https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml'", "--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"", "--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata", "rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\"", "openshift-install create install-config --dir USDASSETS_DIR", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? 
Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'", "cp install-config.yaml install-config.yaml.backup", "openshift-install create manifests --dir USDASSETS_DIR", "INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings", "tree . 
└── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml", "python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'", "openshift-install create ignition-configs --dir USDASSETS_DIR", "tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"", "ansible-playbook -i inventory.yml create-templates-and-vms.yml", "ansible-playbook -i inventory.yml bootstrap.yml", "ssh core@<boostrap.ip>", "[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service", "ansible-playbook -i inventory.yml masters.yml", "openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR", "INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete", "INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "ansible-playbook -i inventory.yml retire-bootstrap.yml", "ansible-playbook -i inventory.yml workers.yml", "oc get csr -A", "NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 
7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "watch \"oc get csr -A | grep pending -i\"", "Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc describe csr csr-m724n", "Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>", "oc adm certificate approve csr-m724n", "openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "./openshift-install destroy cluster --dir <installation_directory> --log-level info 1 2", "ansible-playbook -i inventory.yml retire-bootstrap.yml retire-masters.yml retire-workers.yml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_on_rhv/index
Chapter 8. Migrating your applications
Chapter 8. Migrating your applications You can migrate your applications by using the Migration Toolkit for Containers (MTC) web console or the command line . Most cluster-scoped resources are not yet handled by MTC. If your applications require cluster-scoped resources, you might have to create them manually on the target cluster. You can use stage migration and cutover migration to migrate an application between clusters: Stage migration copies data from the source cluster to the target cluster without stopping the application. You can run a stage migration multiple times to reduce the duration of the cutover migration. Cutover migration stops the transactions on the source cluster and moves the resources to the target cluster. You can use state migration to migrate an application's state: State migration copies selected persistent volume claims (PVCs). You can use state migration to migrate a namespace within the same cluster. During migration, the Migration Toolkit for Containers (MTC) preserves the following namespace annotations: openshift.io/sa.scc.mcs openshift.io/sa.scc.supplemental-groups openshift.io/sa.scc.uid-range These annotations preserve the UID range, ensuring that the containers retain their file system permissions on the target cluster. There is a risk that the migrated UIDs could duplicate UIDs within an existing or future namespace on the target cluster. 8.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 8.2. Migrating your applications by using the MTC web console You can configure clusters and a replication repository by using the MTC web console. Then, you can create and run a migration plan. 8.2.1. Launching the MTC web console You can launch the Migration Toolkit for Containers (MTC) web console in a browser. Prerequisites The MTC web console must have network access to the OpenShift Container Platform web console. The MTC web console must have network access to the OAuth authorization server. Procedure Log in to the OpenShift Container Platform cluster on which you have installed MTC. Obtain the MTC web console URL by entering the following command: USD oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}' The output resembles the following: https://migration-openshift-migration.apps.cluster.openshift.com . 
Launch a browser and navigate to the MTC web console. Note If you try to access the MTC web console immediately after installing the Migration Toolkit for Containers Operator, the console might not load because the Operator is still configuring the cluster. Wait a few minutes and retry. If you are using self-signed CA certificates, you will be prompted to accept the CA certificate of the source cluster API server. The web page guides you through the process of accepting the remaining certificates. Log in with your OpenShift Container Platform username and password . 8.2.2. Adding a cluster to the MTC web console You can add a cluster to the Migration Toolkit for Containers (MTC) web console. Prerequisites If you are using Azure snapshots to copy data: You must specify the Azure resource group name for the cluster. The clusters must be in the same Azure resource group. The clusters must be in the same geographic location. If you are using direct image migration, you must expose a route to the image registry of the source cluster. Procedure Log in to the cluster. Obtain the migration-controller service account token: USD oc sa get-token migration-controller -n openshift-migration Example output eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ In the MTC web console, click Clusters . Click Add cluster . Fill in the following fields: Cluster name : The cluster name can contain lower-case letters ( a-z ) and numbers ( 0-9 ). It must not contain spaces or international characters. URL : Specify the API server URL, for example, https://<www.example.com>:8443 . Service account token : Paste the migration-controller service account token. Exposed route host to image registry : If you are using direct image migration, specify the exposed route to the image registry of the source cluster. To create the route, run the following command: For OpenShift Container Platform 3: USD oc create route passthrough --service=docker-registry --port=5000 -n default For OpenShift Container Platform 4: USD oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry Azure cluster : You must select this option if you use Azure snapshots to copy your data. Azure resource group : This field is displayed if Azure cluster is selected. Specify the Azure resource group. Require SSL verification : Optional: Select this option to verify SSL connections to the cluster. CA bundle file : This field is displayed if Require SSL verification is selected. If you created a custom CA certificate bundle file for self-signed certificates, click Browse , select the CA bundle file, and upload it. Click Add cluster . The cluster appears in the Clusters list. 8.2.3. 
Adding a replication repository to the MTC web console You can add an object storage as a replication repository to the Migration Toolkit for Containers (MTC) web console. MTC supports the following storage providers: Amazon Web Services (AWS) S3 Multi-Cloud Object Gateway (MCG) Generic S3 object storage, for example, Minio or Ceph S3 Google Cloud Provider (GCP) Microsoft Azure Blob Prerequisites You must configure the object storage as a replication repository. Procedure In the MTC web console, click Replication repositories . Click Add repository . Select a Storage provider type and fill in the following fields: AWS for S3 providers, including AWS and MCG: Replication repository name : Specify the replication repository name in the MTC web console. S3 bucket name : Specify the name of the S3 bucket. S3 bucket region : Specify the S3 bucket region. Required for AWS S3. Optional for some S3 providers. Check the product documentation of your S3 provider for expected values. S3 endpoint : Specify the URL of the S3 service, not the bucket, for example, https://<s3-storage.apps.cluster.com> . Required for a generic S3 provider. You must use the https:// prefix. S3 provider access key : Specify the <AWS_ACCESS_KEY_ID> for AWS or the S3 provider access key for MCG and other S3 providers. S3 provider secret access key : Specify the <AWS_SECRET_ACCESS_KEY> for AWS or the S3 provider secret access key for MCG and other S3 providers. Require SSL verification : Clear this checkbox if you are using a generic S3 provider. If you created a custom CA certificate bundle for self-signed certificates, click Browse and browse to the Base64-encoded file. GCP : Replication repository name : Specify the replication repository name in the MTC web console. GCP bucket name : Specify the name of the GCP bucket. GCP credential JSON blob : Specify the string in the credentials-velero file. Azure : Replication repository name : Specify the replication repository name in the MTC web console. Azure resource group : Specify the resource group of the Azure Blob storage. Azure storage account name : Specify the Azure Blob storage account name. Azure credentials - INI file contents : Specify the string in the credentials-velero file. Click Add repository and wait for connection validation. Click Close . The new repository appears in the Replication repositories list. 8.2.4. Creating a migration plan in the MTC web console You can create a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must ensure that the same MTC version is installed on all clusters. You must add the clusters and the replication repository to the MTC web console. If you want to use the move data copy method to migrate a persistent volume (PV), the source and target clusters must have uninterrupted network access to the remote volume. If you want to use direct image migration, you must specify the exposed route to the image registry of the source cluster. This can be done by using the MTC web console or by updating the MigCluster custom resource manifest. Procedure In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must not exceed 253 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). Select a Source cluster , a Target cluster , and a Repository . Click . Select the projects for migration.
Optional: Click the edit icon beside a project to change the target namespace. Click . Select a Migration type for each PV: The Copy option copies the data from the PV of a source cluster to the replication repository and then restores the data on a newly created PV, with similar characteristics, in the target cluster. The Move option unmounts a remote volume, for example, NFS, from the source cluster, creates a PV resource on the target cluster pointing to the remote volume, and then mounts the remote volume on the target cluster. Applications running on the target cluster use the same remote volume that the source cluster was using. Click . Select a Copy method for each PV: Snapshot copy backs up and restores data using the cloud provider's snapshot functionality. It is significantly faster than Filesystem copy . Filesystem copy backs up the files on the source cluster and restores them on the target cluster. The file system copy method is required for direct volume migration. You can select Verify copy to verify data migrated with Filesystem copy . Data is verified by generating a checksum for each source file and checking the checksum after restoration. Data verification significantly reduces performance. Select a Target storage class . If you selected Filesystem copy , you can change the target storage class. Click . On the Migration options page, the Direct image migration option is selected if you specified an exposed image registry route for the source cluster. The Direct PV migration option is selected if you are migrating data with Filesystem copy . The direct migration options copy images and files directly from the source cluster to the target cluster. This option is much faster than copying images and files from the source cluster to the replication repository and then from the replication repository to the target cluster. Click . Optional: Click Add Hook to add a hook to the migration plan. A hook runs custom code. You can add up to four hooks to a single migration plan. Each hook runs during a different migration step. Enter the name of the hook to display in the web console. If the hook is an Ansible playbook, select Ansible playbook and click Browse to upload the playbook or paste the contents of the playbook in the field. Optional: Specify an Ansible runtime image if you are not using the default hook image. If the hook is not an Ansible playbook, select Custom container image and specify the image name and path. A custom container image can include Ansible playbooks. Select Source cluster or Target cluster . Enter the Service account name and the Service account namespace . Select the migration step for the hook: preBackup : Before the application workload is backed up on the source cluster postBackup : After the application workload is backed up on the source cluster preRestore : Before the application workload is restored on the target cluster postRestore : After the application workload is restored on the target cluster Click Add . Click Finish . The migration plan is displayed in the Migration plans list. Additional resources for persistent volume copy methods MTC file system copy method MTC snapshot copy method 8.2.5. Running a migration plan in the MTC web console You can migrate applications and data with the migration plan you created in the Migration Toolkit for Containers (MTC) web console. Note During migration, MTC sets the reclaim policy of migrated persistent volumes (PVs) to Retain on the target cluster. 
The Backup custom resource contains a PVOriginalReclaimPolicy annotation that indicates the original reclaim policy. You can manually restore the reclaim policy of the migrated PVs. Prerequisites The MTC web console must contain the following: Source cluster in a Ready state Target cluster in a Ready state Replication repository Valid migration plan Procedure Log in to the MTC web console and click Migration plans . Click the Options menu next to a migration plan and select one of the following options under Migration : Stage copies data from the source cluster to the target cluster without stopping the application. Cutover stops the transactions on the source cluster and moves the resources to the target cluster. Optional: In the Cutover migration dialog, you can clear the Halt transactions on the source cluster during migration checkbox. State copies selected persistent volume claims (PVCs). Important Do not use state migration to migrate a namespace between clusters. Use stage or cutover migration instead. Select one or more PVCs in the State migration dialog and click Migrate . When the migration is complete, verify that the application migrated successfully in the OpenShift Container Platform web console: Click Home Projects . Click the migrated project to view its status. In the Routes section, click Location to verify that the application is functioning, if applicable. Click Workloads Pods to verify that the pods are running in the migrated namespace. Click Storage Persistent volumes to verify that the migrated persistent volumes are correctly provisioned.
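You can also spot-check the same results from the command line. The following is a minimal, optional sketch that uses only standard oc commands; the namespace name my-app and the placeholder <pv_name> are illustrative values, not values defined by this procedure:

# Confirm the migrated pods are running in the target namespace
oc get pods -n my-app
# Confirm the persistent volume claims are bound
oc get pvc -n my-app
# Review the reclaim policy, which MTC sets to Retain during migration
oc get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
# Check the application route, if the application exposes one
oc get route -n my-app
# Optionally restore the original reclaim policy recorded in the PVOriginalReclaimPolicy annotation;
# the target value Delete is only an example
oc patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Delete"}}'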
[ "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc sa get-token migration-controller -n openshift-migration", "eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry --port=5000 -n default", "oc create route passthrough --service=image-registry --port=5000 -n openshift-image-registry" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/migration_toolkit_for_containers/migrating-applications-with-mtc
1.6. LVS - A Block Diagram
1.6. LVS - A Block Diagram LVS routers use a collection of programs to monitor cluster members and cluster services. Figure 1.5, "LVS Components" illustrates how these various programs on both the active and backup LVS routers work together to manage the cluster. Figure 1.5. LVS Components The pulse daemon runs on both the active and passive LVS routers. On the backup router, pulse sends a heartbeat to the public interface of the active router to make sure the active router is still properly functioning. On the active router, pulse starts the lvs daemon and responds to heartbeat queries from the backup LVS router. Once started, the lvs daemon calls the ipvsadm utility to configure and maintain the IPVS routing table in the kernel and starts a nanny process for each configured virtual server on each real server. Each nanny process checks the state of one configured service on one real server, and tells the lvs daemon if the service on that real server is malfunctioning. If a malfunction is detected, the lvs daemon instructs ipvsadm to remove that real server from the IPVS routing table. If the backup router does not receive a response from the active router, it initiates failover by calling send_arp to reassign all virtual IP addresses to the NIC hardware addresses ( MAC address) of the backup node, sends a command to the active router via both the public and private network interfaces to shut down the lvs daemon on the active router, and starts the lvs daemon on the backup node to accept requests for the configured virtual servers. 1.6.1. LVS Components Section 1.6.1.1, " pulse " shows a detailed list of each software component in an LVS router. 1.6.1.1. pulse This is the controlling process which starts all other daemons related to LVS routers. At boot time, the daemon is started by the /etc/rc.d/init.d/pulse script. It then reads the configuration file /etc/sysconfig/ha/lvs.cf . On the active router, pulse starts the LVS daemon. On the backup router, pulse determines the health of the active router by executing a simple heartbeat at a user-configurable interval. If the active router fails to respond after a user-configurable interval, it initiates failover. During failover, pulse on the backup router instructs the pulse daemon on the active router to shut down all LVS services, starts the send_arp program to reassign the floating IP addresses to the backup router's MAC address, and starts the lvs daemon. 1.6.1.2. lvs The lvs daemon runs on the active LVS router once called by pulse . It reads the configuration file /etc/sysconfig/ha/lvs.cf , calls the ipvsadm utility to build and maintain the IPVS routing table, and assigns a nanny process for each configured LVS service. If nanny reports a real server is down, lvs instructs the ipvsadm utility to remove the real server from the IPVS routing table. 1.6.1.3. ipvsadm This service updates the IPVS routing table in the kernel. The lvs daemon sets up and administers LVS by calling ipvsadm to add, change, or delete entries in the IPVS routing table. 1.6.1.4. nanny The nanny monitoring daemon runs on the active LVS router. Through this daemon, the active router determines the health of each real server and, optionally, monitors its workload. A separate process runs for each service defined on each real server. 1.6.1.5. /etc/sysconfig/ha/lvs.cf This is the LVS configuration file. Directly or indirectly, all daemons get their configuration information from this file. 1.6.1.6. 
Piranha Configuration Tool This is the Web-based tool for monitoring, configuring, and administering LVS. This is the default tool to maintain the /etc/sysconfig/ha/lvs.cf LVS configuration file. 1.6.1.7. send_arp This program sends out ARP broadcasts when the floating IP address changes from one node to another during failover. Chapter 2, Initial LVS Configuration reviews important post-installation configuration steps you should take before configuring Red Hat Enterprise Linux to be an LVS router.
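To make the role of ipvsadm more concrete, the following hypothetical session illustrates the kind of entries the lvs daemon creates and removes on your behalf. The virtual IP address 192.0.2.10, the real server addresses, and the wlc scheduler are placeholder values; on a router managed by pulse you would normally let the daemons issue these calls rather than running them manually:

# Add a virtual service on the floating IP, using weighted least-connections scheduling
ipvsadm -A -t 192.0.2.10:80 -s wlc
# Add two real servers to the virtual service, using NAT (masquerading) routing
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m -w 1
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m -w 1
# List the resulting IPVS routing table
ipvsadm -L -n
# Remove a real server, as lvs does when a nanny process reports it as down
ipvsadm -d -t 192.0.2.10:80 -r 10.0.0.11:80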
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-block-diagram-vsa
Provisioning hosts
Provisioning hosts Red Hat Satellite 6.15 Configure provisioning resources and networking, provision physical machines, provision virtual machines on cloud providers or virtualization infrastructure, create hosts manually or by using the Discovery service Red Hat Satellite Documentation Team [email protected]
[ "hammer host list --organization \" My_Organization \" --location \" My_Location \"", "hammer os create --architectures \"x86_64\" --description \" My_Operating_System \" --family \"Redhat\" --major 8 --media \"Red Hat\" --minor 8 --name \"Red Hat Enterprise Linux\" --partition-tables \" My_Partition_Table \" --provisioning-templates \" My_Provisioning_Template \"", "PARTID=USD(hammer --csv partition-table list | grep \"Kickstart default,\" | cut -d, -f1) PXEID=USD(hammer --csv template list --per-page=1000 | grep \"Kickstart default PXELinux\" | cut -d, -f1) SATID=USD(hammer --csv template list --per-page=1000 | grep \"provision\" | grep \",Kickstart default\" | cut -d, -f1) for i in USD(hammer --no-headers --csv os list | awk -F, {'print USD1'}) do hammer partition-table add-operatingsystem --id=\"USD{PARTID}\" --operatingsystem-id=\"USD{i}\" hammer template add-operatingsystem --id=\"USD{PXEID}\" --operatingsystem-id=\"USD{i}\" hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{PXEID} hammer os add-config-template --id=\"USD{i}\" --config-template-id=USD{SATID} hammer os set-default-template --id=\"USD{i}\" --config-template-id=USD{SATID} done", "hammer os info --id 1", "hammer architecture create --name \" My_Architecture \" --operatingsystems \" My_Operating_System \"", "hammer model create --hardware-model \" My_Hardware_Model \" --info \" My_Description \" --name \" My_Hardware_Model_Name \" --vendor-class \" My_Vendor_Class \"", "hammer medium list --organization \" My_Organization \"", "http://download.example.com/centos/USDversion/Server/USDarch/os/", "hammer medium create --locations \" My_Location \" --name \" My_Operating_System \" --organizations \" My_Organization \" --os-family \"Redhat\" --path \"http://download.example.com/centos/USDversion/Server/USDarch/os/\"", "zerombr clearpart --all --initlabel autopart", "zerombr clearpart --all --initlabel autopart", "hammer partition-table create --file \" ~/My_Partition_Table \" --locations \" My_Location \" --name \" My_Partition_Table \" --organizations \" My_Organization \" --os-family \"Redhat\" --snippet false", "zerombr clearpart --all --initlabel autopart <%= host_param('autopart_options') %>", "#Dynamic (do not remove this line) MEMORY=USD((`grep MemTotal: /proc/meminfo | sed 's/^MemTotal: *//'|sed 's/ .*//'` / 1024)) if [ \"USDMEMORY\" -lt 2048 ]; then SWAP_MEMORY=USD((USDMEMORY * 2)) elif [ \"USDMEMORY\" -lt 8192 ]; then SWAP_MEMORY=USDMEMORY elif [ \"USDMEMORY\" -lt 65536 ]; then SWAP_MEMORY=USD((USDMEMORY / 2)) else SWAP_MEMORY=32768 fi cat <<EOF > /tmp/diskpart.cfg zerombr clearpart --all --initlabel part /boot --fstype ext4 --size 200 --asprimary part swap --size \"USDSWAP_MEMORY\" part / --fstype ext4 --size 1024 --grow EOF", "hammer template create --file ~/my-template --locations \" My_Location \" --name \" My_Provisioning_Template \" --organizations \" My_Organization \" --type provision", "hammer template create --file \" /path/to/My_Snippet \" --locations \" My_Location \" --name \" My_Template_Name_custom_pre\" \\ --organizations \"_My_Organization \" --type snippet", "echo \"Calling API to report successful host deployment\" install -y curl ca-certificates curl -X POST -H \"Content-Type: application/json\" -d '{\"name\": \"<%= @host.name %>\", \"operating_system\": \"<%= @host.operatingsystem.name %>\", \"status\": \"provisioned\",}' \"https://api.example.com/\"", "hammer template list", "hammer os list", "hammer template add-operatingsystem --id My_Template_ID --operatingsystem-id 
My_Operating_System_ID", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create --compute-attributes \" flavor=m1.small,cpus=2,memory=4GB,cpu_mode=default --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --volume size= 40GB", "hammer compute-profile values update --compute-resource \" My_Compute_Resource \" --compute-profile \" My_Compute_Profile \" --attributes \" cpus=2,memory=4GB \" --interface \" type=network,bridge=br1,index=1 \" --volume \"size= 40GB \"", "hammer compute-profile update --name \" My_Compute_Profile \" --new-name \" My_New_Compute_Profile \"", "python3 -c 'import crypt,getpass;pw=getpass.getpass(); print(crypt.crypt(pw)) if (pw==getpass.getpass(\"Confirm: \")) else exit()'", "firewall-cmd --add-port=5900-5930/tcp firewall-cmd --add-port=5900-5930/tcp --permanent", "foreman-rake facts:clean", "foreman-rake interfaces:clean", "satellite-installer --foreman-proxy-tftp-servername 1.2.3.4", "foreman-rake orchestration:dhcp:add_missing subnet_name=NAME", "foreman-rake orchestration:dhcp:add_missing subnet_name=NAME perform=1", "foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME", "foreman-rake orchestration:dhcp:remove_offending subnet_name=NAME perform=1", "satellite-installer --foreman-proxy-dhcp true --foreman-proxy-dhcp-gateway \" 192.168.140.1 \" --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-nameservers \" 192.168.140.2 \" --foreman-proxy-dhcp-range \" 192.168.140.10 192.168.140.110 \" --foreman-proxy-dhcp-server \" 192.168.140.2 \" --foreman-proxy-dns true --foreman-proxy-dns-forwarders \" 8.8.8.8 \" --foreman-proxy-dns-forwarders \" 8.8.4.4 \" --foreman-proxy-dns-managed true --foreman-proxy-dns-reverse \" 140.168.192.in-addr.arpa \" --foreman-proxy-dns-server \" 127.0.0.1 \" --foreman-proxy-dns-zone \" example.com \" --foreman-proxy-tftp true --foreman-proxy-tftp-managed true", "hammer capsule list", "hammer capsule refresh-features --name \" satellite.example.com \"", "hammer capsule info --name \" satellite.example.com \"", "dhcp::pools: isolated.lan: network: 192.168.99.0 mask: 255.255.255.0 gateway: 192.168.99.1 range: 192.168.99.5 192.168.99.49 dns::zones: # creates @ SOA USD::fqdn root.example.com. # creates USD::fqdn A USD::ipaddress example.com: {} # creates @ SOA test.example.net. hostmaster.example.com. # creates test.example.net A 192.0.2.100 example.net: soa: test.example.net soaip: 192.0.2.100 contact: hostmaster.example.com. # creates @ SOA USD::fqdn root.example.org. # does NOT create an A record example.org: reverse: true # creates @ SOA USD::fqdn hostmaster.example.com. 
2.0.192.in-addr.arpa: reverse: true contact: hostmaster.example.com.", "firewall-cmd --add-service=tftp", "firewall-cmd --runtime-to-permanent", "iptables --sport 69 --state ESTABLISHED -A OUTPUT -i eth0 -j ACCEPT -m state -p udp service iptables save", "IPTABLES_MODULES=\"ip_conntrack_tftp\"", "hammer domain create --description \" My_Domain \" --dns-id My_DNS_ID --locations \" My_Location \" --name \" my-domain.tld \" --organizations \" My_Organization \"", "hammer subnet create --boot-mode \"DHCP\" --description \" My_Description \" --dhcp-id My_DHCP_ID --dns-id My_DNS_ID --dns-primary \"192.168.140.2\" --dns-secondary \"8.8.8.8\" --domains \" my-domain.tld\" \\ --from \"192.168.140.111\" \\ --gateway \"192.168.140.1\" \\ --ipam \"DHCP\" \\ --locations \"_My_Location \" --mask \"255.255.255.0\" --name \" My_Network \" --network \"192.168.140.0\" --organizations \" My_Organization \" --tftp-id My_TFTP_ID --to \"192.168.140.250\"", "update-ca-trust enable openssl s_client -showcerts -connect infoblox.example.com :443 </dev/null | openssl x509 -text >/etc/pki/ca-trust/source/anchors/infoblox.crt update-ca-trust extract", "curl -u admin:password https:// infoblox.example.com /wapi/v2.0/network", "[ { \"_ref\": \"network/ZG5zLm5ldHdvcmskMTkyLjE2OC4yMDIuMC8yNC8w: infoblox.example.com /24/default\", \"network\": \"192.168.202.0/24\", \"network_view\": \"default\" } ]", "satellite-installer --enable-foreman-proxy-plugin-dhcp-infoblox --foreman-proxy-dhcp true --foreman-proxy-dhcp-provider infoblox --foreman-proxy-dhcp-server infoblox.example.com --foreman-proxy-plugin-dhcp-infoblox-username admin --foreman-proxy-plugin-dhcp-infoblox-password infoblox --foreman-proxy-plugin-dhcp-infoblox-record-type fixedaddress --foreman-proxy-plugin-dhcp-infoblox-dns-view default --foreman-proxy-plugin-dhcp-infoblox-network-view default", "satellite-installer --enable-foreman-proxy-plugin-dns-infoblox --foreman-proxy-dns true --foreman-proxy-dns-provider infoblox --foreman-proxy-plugin-dns-infoblox-dns-server infoblox.example.com --foreman-proxy-plugin-dns-infoblox-username admin --foreman-proxy-plugin-dns-infoblox-password infoblox --foreman-proxy-plugin-dns-infoblox-dns-view default", "hammer host create --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --mac \" My_MAC_Address \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \"", "hammer host interface update --host \"_My_Host_Name_\" --managed true --primary true --provision true", "hammer host create --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --mac \" My_MAC_Address \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \"", "hammer host interface update --host \" My_Host_Name \" --managed true --primary true --provision true", "hammer bootdisk host --full true --host My_Host_Name", "hammer bootdisk subnet --subnet My_Subnet_Name", "satellite-maintain packages update grub2-efi", "satellite-installer --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "satellite-maintain packages update grub2-efi", "satellite-installer --foreman-proxy-http true --foreman-proxy-httpboot true --foreman-proxy-tftp true", "hammer host create --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --mac \" My_MAC_Address \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --pxe-loader \"Grub2 UEFI HTTP\"", "hammer host interface update --host \" 
My_Host_Name \" --managed true --primary true --provision true", "<%= snippet('create_users') %>", "satellite-installer --foreman-proxy-httpboot true --foreman-proxy-tftp true", "satellite-maintain packages install ipxe-bootimgs", "cp /usr/share/ipxe/ipxe.lkrn /var/lib/tftpboot/", "cp /usr/share/ipxe/undionly.kpxe /var/lib/tftpboot/undionly-ipxe.0", "restorecon -RvF /var/lib/tftpboot/", "satellite-installer --foreman-proxy-dhcp-ipxefilename \"http:// satellite.example.com /unattended/iPXE?bootstrap=1\"", "satellite-installer --foreman-proxy-dhcp-ipxe-bootstrap true", "satellite-installer --enable-foreman-plugin-discovery --enable-foreman-proxy-plugin-discovery", "satellite-maintain packages install foreman-discovery-image", "satellite-installer --enable-foreman-proxy-plugin-discovery", "satellite-maintain packages install foreman-discovery-image", "LABEL discovery MENU LABEL Foreman Discovery Image KERNEL boot/fdi-image/vmlinuz0 APPEND initrd=boot/fdi-image/initrd0.img rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman IPAPPEND 2", "menuentry 'Foreman Discovery Image' --id discovery { linuxefi boot/fdi-image/vmlinuz0 rootflags=loop root=live:/fdi.iso rootfstype=auto ro rd.live.image acpi=force rd.luks=0 rd.md=0 rd.dm=0 rd.lvm=0 rd.bootif=0 rd.neednet=0 nomodeset proxy.url=<%= foreman_server_url %> proxy.type=foreman BOOTIF=01-USDmac initrdefi boot/fdi-image/initrd0.img }", "proxy.url=https:// capsule.example.com :9090 proxy.type=proxy", "fdi.vlan.primary= example_VLAN_ID", "hammer discovery list", "hammer discovery provision --build true --enabled true --hostgroup \" My_Host_Group \" --location \" My_Location \" --managed true --name \" My_Host_Name \" --new-name \" My_New_Host_Name \" --organization \" My_Organization \"", "hammer discovery-rule create --enabled true --hostgroup \" My_Host_Group \" --hostname \"hypervisor-<%= rand(99999) %>\" --hosts-limit 5 --name \" My_Hypervisor \" --priority 5 --search \"cpu_count > 8\"", "hammer discovery auto-provision --name \"macabcdef123456\"", "dd bs=4M if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso of=/dev/sdb", "https://satellite.example.com:9090", "discovery-remaster ~/iso/foreman-discovery-image-3.4.4-5.iso \"fdi.pxip=192.168.140.20/24 fdi.pxgw=192.168.140.1 fdi.pxdns=192.168.140.2 proxy.url=https:// satellite.example.com :9090 proxy.type=proxy fdi.pxfactname1= customhostname fdi.pxfactvalue1= myhost fdi.pxmac=52:54:00:be:8e:8c fdi.pxauto=1\"", "dd bs=4M if=/usr/share/foreman-discovery-image/foreman-discovery-image-3.4.4-5.iso of=/dev/sdb", ". 
├── autostart.d │ └── 01_zip.sh ├── bin │ └── ntpdate ├── facts │ └── test.rb └── lib ├── libcrypto.so.1.0.0 └── ruby └── test.rb", "zip -r my_extension.zip .", "fdi.zips=zip1.zip,boot/zip2.zip", "fdi.ssh=1 fdi.rootpw= My_Password", "usermod -a -G libvirt non_root_user", "su foreman -s /bin/bash", "ssh-keygen", "ssh-copy-id [email protected]", "exit", "satellite-maintain packages install libvirt-client", "su foreman -s /bin/bash -c 'virsh -c qemu+ssh://[email protected]/system list'", "qemu+ssh:// [email protected] /system", "virsh edit your_VM_name <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd=' your_randomly_generated_password '>", "hammer compute-resource create --name \" My_KVM_Server \" --provider \"Libvirt\" --description \"KVM server at kvm.example.com \" --url \"qemu+ssh://root@ kvm.example.com/system \" --locations \"New York\" --organizations \" My_Organization \"", "/var/lib/libvirt/images/TestImage.qcow2", "hammer compute-resource image create --name \" KVM Image \" --compute-resource \" My_KVM_Server \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --user-data false --uuid \"/var/lib/libvirt/images/ KVMimage .qcow2\" \\", "hammer compute-profile create --name \"Libvirt CP\"", "hammer compute-profile values create --compute-profile \"Libvirt CP\" --compute-resource \" My_KVM_Server \" --interface \"compute_type=network,compute_model=virtio,compute_network= examplenetwork \" --volume \"pool_name=default,capacity=20G,format_type=qcow2\" --compute-attributes \"cpus=1,memory=1073741824\"", "hammer host create --build true --compute-attributes=\"cpus=1,memory=1073741824\" --compute-resource \" My_KVM_Server \" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_type=network,compute_network= examplenetwork \" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method \"build\" --root-password \" My_Password \" --volume=\"pool_name=default,capacity=20G,format_type=qcow2\"", "hammer host create --compute-attributes=\"cpus=1,memory=1073741824\" --compute-resource \" My_KVM_Server \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_KVM_Image \" --interface \"managed=true,primary=true,provision=true,compute_type=network,compute_network=examplenetwork\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method \"image\" --volume=\"pool_name=default,capacity=20G,format_type=qcow2\"", "hammer compute-resource create --name \" My_RHV \" --provider \"Ovirt\" --description \"RHV server at rhv.example.com \" --url \" https://rhv.example.com/ovirt-engine/api/v4 \" --user \" Satellite_User \" --password \" My_Password \" --locations \"New York\" --organizations \" My_Organization \" --datacenter \" My_Datacenter \"", "dnf install cloud-init", "datasource_list: [\"NoCloud\", \"ConfigDrive\"]", "hammer compute-resource image create --name \" RHV_Image \" --compute-resource \" My_RHV \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --uuid \"9788910c-4030-4ae0-bad7-603375dd72b1\" \\", "<%# kind: user_data name: Cloud-init -%> #cloud-config hostname: <%= @host.shortname %> <%# Allow user to specify additional SSH key as host parameter -%> <% if @host.params['sshkey'].present? || @host.params['remote_execution_ssh_keys'].present? -%> ssh_authorized_keys: <% if @host.params['sshkey'].present? 
-%> - <%= @host.params['sshkey'] %> <% end -%> <% if @host.params['remote_execution_ssh_keys'].present? -%> <% @host.params['remote_execution_ssh_keys'].each do |key| -%> - <%= key %> <% end -%> <% end -%> <% end -%> runcmd: - | #!/bin/bash <%= indent 4 do snippet 'subscription_manager_registration' end %> <% if @host.info['parameters']['realm'] && @host.realm && @host.realm.realm_type == 'Red Hat Identity Management' -%> <%= indent 4 do snippet 'freeipa_register' end %> <% end -%> <% unless @host.operatingsystem.atomic? -%> # update all the base packages from the updates repository yum -t -y -e 0 update <% end -%> <% # safemode renderer does not support unary negation non_atomic = @host.operatingsystem.atomic? ? false : true pm_set = @host.puppetmaster.empty? ? false : true puppet_enabled = non_atomic && (pm_set || @host.params['force-puppet']) %> <% if puppet_enabled %> yum install -y puppet cat > /etc/puppet/puppet.conf << EOF <%= indent 4 do snippet 'puppet.conf' end %> EOF # Setup puppet to run on system reboot /sbin/chkconfig --level 345 puppet on /usr/bin/puppet agent --config /etc/puppet/puppet.conf --onetime --tags no_such_tag <%= @host.puppetmaster.blank? ? '' : \"--server #{@host.puppetmaster}\" %> --no-daemonize /sbin/service puppet start <% end -%> phone_home: url: <%= foreman_url('built') %> post: [] tries: 10pp", "hammer compute-profile create --name \"Red Hat Virtualization CP\"", "hammer compute-profile values create --compute-profile \"Red Hat Virtualization CP\" --compute-resource \" My_RHV \" --interface \"compute_interface= Interface_Type ,compute_name=eth0,compute_network=satnetwork\" --volume \"size_gb=20G,storage_domain=Data,bootable=true\" --compute-attributes \"cluster=Default,cores=1,memory=1073741824,start=true\"\"", "hammer host create --build true --compute-attributes=\"cluster=Default,cores=1,memory=1073741824,start=true\" --compute-resource \" My RHV_\" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method build --volume=\"size_gb=20G,storage_domain=Data,bootable=true\"", "hammer host create --compute-attributes=\"cluster=Default,cores=1,memory=1073741824,start=true\" --compute-resource \" My RHV_\" --enabled true --hostgroup \" My_Host_Group \" --image \" My RHV_Image_\" --interface \"managed=true,primary=true,provision=true,compute_name=eth0,compute_network=satnetwork\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method \"image\" --volume=\"size_gb=20G,storage_domain=Data,bootable=true\"", "virsh edit your_VM_name <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd=' your_randomly_generated_password '>", "hammer compute-resource create --datacenter \" My_Datacenter \" --description \"vSphere server at vsphere.example.com \" --locations \" My_Location \" --name \"My_vSphere\" --organizations \" My_Organization \" --password \" My_Password \" --provider \"Vmware\" --server \" vsphere.example.com \" --user \" My_User \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_VMware \" --name \" My_Image \" --operatingsystem \" My_Operating_System \" --username root --uuid \" My_UUID \"", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create 
--compute-attributes \"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-profile \" My_Compute_Profile \" --compute-resource \" My_VMware \" --interface \"compute_type=VirtualE1000,compute_network=mynetwork --volume \"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --build true --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method build --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_VMware_Image \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method image --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "dnf install cloud-init open-vm-tools perl-interpreter perl-File-Temp", "cat << EOM > /etc/cloud/cloud.cfg.d/01_network.cfg network: config: disabled EOM", "cat << EOM > /etc/cloud/cloud.cfg.d/10_datasource.cfg datasource_list: [NoCloud] datasource: NoCloud: seedfrom: https://satellite.example.com/userdata/ EOM", "cat << EOM > /etc/cloud/cloud.cfg cloud_init_modules: - bootcmd - ssh cloud_config_modules: - runcmd cloud_final_modules: - scripts-per-once - scripts-per-boot - scripts-per-instance - scripts-user - phone-home system_info: distro: rhel paths: cloud_dir: /var/lib/cloud templates_dir: /etc/cloud/templates ssh_svcname: sshd EOM", "update-ca-trust enable", "wget -O /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt https:// satellite.example.com /pub/katello-server-ca.crt", "update-ca-trust extract", "systemctl stop rsyslog systemctl stop auditd", "dnf remove --oldinstallonly", "package-cleanup --oldkernels --count=1 dnf clean all", "logrotate -f /etc/logrotate.conf rm -f /var/log/*-???????? 
/var/log/*.gz rm -f /var/log/dmesg.old rm -rf /var/log/anaconda cat /dev/null > /var/log/audit/audit.log cat /dev/null > /var/log/wtmp cat /dev/null > /var/log/lastlog cat /dev/null > /var/log/grubby", "rm -f /etc/udev/rules.d/70*", "rm -f /etc/sysconfig/network-scripts/ifcfg-ens* rm -f /etc/sysconfig/network-scripts/ifcfg-eth*", "rm -f /etc/ssh/ssh_host_*", "rm -rf ~root/.ssh/known_hosts", "rm -f ~root/.bash_history unset HISTFILE", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -X PUT -u username : password -k https:// satellite.example.com /api/compute_resources/ compute_resource_id /refresh_cache", "satellite-installer --enable-foreman-plugin-kubevirt", "oc get secrets", "oc get secrets MY_SECRET -o jsonpath='{.data.token}' | base64 -d | xargs", "hammer compute-resource create --name \" My_OpenStack \" --provider \"OpenStack\" --description \" My OpenStack environment at openstack.example.com \" --url \" http://openstack.example.com :5000/v3/auth/tokens\" --user \" My_Username \" --password \" My_Password \" --tenant \" My_Openstack \" --domain \" My_User_Domain \" --project-domain-id \" My_Project_Domain_ID \" --project-domain-name \" My_Project_Domain_Name \" --locations \"New York\" --organizations \" My_Organization \"", "hammer compute-resource image create --name \"OpenStack Image\" --compute-resource \" My_OpenStack_Platform \" --operatingsystem \"RedHat version \" --architecture \"x86_64\" --username root --user-data true --uuid \" /path/to/OpenstackImage.qcow2 \"", "hammer compute-profile values create --compute-resource \" My_Laptop \" --compute-profile \" My_Compute_Profile \" --compute-attributes \"availability_zone= My_Zone ,image_ref= My_Image ,flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network= My_Network ,boot_from_volume=false\"", "hammer host create --compute-attributes=\"flavor_ref=m1.small,tenant_id=openstack,security_groups=default,network=mynetwork\" --compute-resource \" My_OpenStack_Platform \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_OpenStack_Image \" --interface \"managed=true,primary=true,provision=true\" --location \" My_Location \" --managed true --name \" My_Host_Name \" --organization \" My_Organization \" --provision-method image", "satellite-installer --enable-foreman-compute-ec2", "hammer compute-resource create --description \"Amazon EC2 Public Cloud` --locations \" My_Location \" --name \" My_EC2_Compute_Resource \" --organizations \" My_Organization \" --password \" My_Secret_Key \" --provider \"EC2\" --region \" My_Region \" --user \" My_User_Name \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_EC2_Compute_Resource \" --name \" My_Amazon_EC2_Image \" --operatingsystem \" My_Operating_System \" --user-data true --username root --uuid \"ami- My_AMI_ID \"", "hammer compute-profile values create --compute-resource \" My_Laptop \" --compute-profile \" My_Compute_Profile \" --compute-attributes \"flavor_id=1,availability_zone= My_Zone ,subnet_id=1,security_group_ids=1,managed_ip=public_ip\"", "hammer host create --compute-attributes=\"flavor_id=m1.small,image_id=TestImage,availability_zones=us-east-1a,security_group_ids=Default,managed_ip=Public\" --compute-resource \" My_EC2_Compute_Resource \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_Amazon_EC2_Image \" --interface \"managed=true,primary=true,provision=true,subnet_id=EC2\" --location \" My_Location \" --managed true --name \"My_Host_Name_\" --organization \" 
My_Organization \" --provision-method image", "hammer compute-resource list", "su - postgres", "psql", "postgres=# \\c foreman", "select secret from key_pairs where compute_resource_id = 3; secret", "vim Keyname .pem", "chmod 600 Keyname .pem", "ssh -i Keyname .pem ec2-user@ example.aws.com", "sudo -s << EOS _Template_ _Body_ EOS", "scp My_GCE_Key .json [email protected]:/etc/foreman/ My_GCE_Key .json", "chown root:foreman /etc/foreman/ My_GCE_Key .json", "chmod 0640 /etc/foreman/ My_GCE_Key .json", "restorecon -vv /etc/foreman/ My_GCE_Key .json", "hammer compute-resource create --key-path \"/etc/foreman/ My_GCE_Key .json\" --name \" My_GCE_Compute_Resource \" --provider \"gce\" --zone \" My_Zone \"", "hammer compute-resource image create --name ' gce_image_name ' --compute-resource ' gce_cr ' --operatingsystem-id 1 --architecture-id 1 --uuid ' 3780108136525169178 ' --username ' admin '", "hammer compute-profile create --name My_GCE_Compute_Profile", "hammer compute-profile values create --compute-attributes \"machine_type=f1-micro,associate_external_ip=true,network=default\" --compute-profile \" My_GCE_Compute_Profile \" --compute-resource \" My_GCE_Compute_Resource \" --volume \" size_gb=20 \"", "hammer host create --architecture x86_64 --compute-profile \" My_Compute_Profile \" --compute-resource \" My_Compute_Resource \" --image \" My_GCE_Image \" --interface \"type=interface,domain_id=1,managed=true,primary=true,provision=true\" --location \" My_Location \" --name \" My_Host_Name \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method \"image\" --puppet-ca-proxy-id My_Puppet_CA_Proxy_ID --puppet-environment-id My_Puppet_Environment_ID --puppet-proxy-id My_Puppet_Proxy_ID --root-password \" My_Root_Password \"", "hammer compute-resource create --app-ident My_Client_ID --name My_Compute_Resource_Name --provider azurerm --region \" My_Region \" --secret-key My_Client_Secret --sub-id My_Subscription_ID --tenant My_Tenant_ID", "hammer compute-resource image create --name Azure_image_name --compute-resource azure_cr_name --uuid ' marketplace://RedHat:RHEL:7-RAW:latest ' --username ' azure_username ' --user-data no", "hammer compute-profile create --name compute_profile_name", "hammer compute-profile values create --compute-attributes=\"resource_group= resource_group ,vm_size= Standard_B1s ,username= azure_user ,password= azure_password ,platform=Linux,script_command=touch /var/tmp/text.txt\" --compute-profile \" compute_profile_name \" --compute-resource azure_cr_name --interface=\"compute_public_ip=Dynamic,compute_network=mysubnetID,compute_private_ip=false\" --volume=\"disk_size_gb= 5 ,data_disk_caching= None \"", "hammer host create --architecture x86_64 --compute-profile \" My_Compute_Profile \" --compute-resource \" My_Compute_Resource \" --domain \" My_Domain \" --image \" My_Azure_Image \" --location \" My_Location \" --name \" My_Host_Name \" --operatingsystem \" My_Operating_System \" --organization \" My_Organization \" --provision-method \"image\"", "#!/bin/bash MANIFEST=USD1 Import the content from Red Hat CDN hammer organization create --name \"ACME\" --label \"ACME\" --description \"My example organization for managing content\" hammer subscription upload --file ~/USDMANIFEST --organization \"ACME\" hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name 
\"Red Hat Enterprise Linux 7 Server (Kickstart)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name \"Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer product synchronize --name \"Red Hat Enterprise Linux Server\" --organization \"ACME\" Create your application lifecycle hammer lifecycle-environment create --name \"Development\" --description \"Environment for ACME's Development Team\" --prior \"Library\" --organization \"ACME\" hammer lifecycle-environment create --name \"Testing\" --description \"Environment for ACME's Quality Engineering Team\" --prior \"Development\" --organization \"ACME\" hammer lifecycle-environment create --name \"Production\" --description \"Environment for ACME's Product Releases\" --prior \"Testing\" --organization \"ACME\" Create and publish your content view hammer content-view create --name \"Base\" --description \"Base operating system\" --repositories \"Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server,Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64\" --organization \"ACME\" hammer content-view publish --name \"Base\" --description \"My initial content view for my operating system\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Development\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Testing\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Production\" --organization \"ACME\"", "chmod +x content-init.sh", "./content-init.sh manifest_98f4290e-6c0b-4f37-ba79-3a3ec6e405ba.zip", "hammer os list", "hammer os update --password-hash SHA256 --title \" My_Operating_System \"", "hammer hostgroup set-parameter --hostgroup \" My_Host_Group \" --name fips_enabled --value \"true\"", "cat /proc/sys/crypto/fips_enabled", "yum install virt-manager virt-viewer libvirt qemu-kvm", "yum install virt-install libguestfs-tools-c", "systemctl enable --now chronyd", "chkconfig --add ntpd chkconfig ntpd on service ntpd start", "cp My_SSL_CA_file .pem /etc/pki/ca-trust/source/anchors", "update-ca-trust", "dnf install puppet-agent", "yum install puppet-agent", ". /etc/profile.d/puppet-agent.sh", "puppet config set server satellite.example.com --section agent puppet config set environment My_Puppet_Environment --section agent", "puppet resource service puppet ensure=running enable=true", "puppet ssl bootstrap", "puppet ssl bootstrap", "yum update", "yum install cloud-utils-growpart cloud-init", "vi /etc/cloud/cloud.cfg", "- resolv-conf", "vi /etc/sysconfig/network", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister", "poweroff", "cd /var/lib/libvirt/images/", "virt-sysprep -d rhel7", "virt-sparsify --compress rhel7.qcow2 rhel7-cloud.qcow2", "yum update", "yum install cloud-utils-growpart cloud-init", "- resolv-conf", "echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules", "NOZEROCONF=yes", "subscription-manager repos --disable=* subscription-manager unregister yum clean all", "poweroff", "virt-sysprep -d rhel6", "virt-sparsify --compress rhel6.qcow2 rhel6-cloud.qcow2" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/provisioning_hosts/index
Chapter 141. KafkaMirrorMaker2MirrorSpec schema reference
Chapter 141. KafkaMirrorMaker2MirrorSpec schema reference
Used in: KafkaMirrorMaker2Spec

Property | Property type | Description
sourceCluster | string | The alias of the source cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters .
targetCluster | string | The alias of the target cluster used by the Kafka MirrorMaker 2 connectors. The alias must match a cluster in the list at spec.clusters .
sourceConnector | KafkaMirrorMaker2ConnectorSpec | The specification of the Kafka MirrorMaker 2 source connector.
heartbeatConnector | KafkaMirrorMaker2ConnectorSpec | The specification of the Kafka MirrorMaker 2 heartbeat connector.
checkpointConnector | KafkaMirrorMaker2ConnectorSpec | The specification of the Kafka MirrorMaker 2 checkpoint connector.
topicsPattern | string | A regular expression matching the topics to be mirrored, for example, "topic1|topic2|topic3". Comma-separated lists are also supported.
topicsBlacklistPattern | string | The topicsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.topicsExcludePattern . A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported.
topicsExcludePattern | string | A regular expression matching the topics to exclude from mirroring. Comma-separated lists are also supported.
groupsPattern | string | A regular expression matching the consumer groups to be mirrored. Comma-separated lists are also supported.
groupsBlacklistPattern | string | The groupsBlacklistPattern property has been deprecated, and should now be configured using .spec.mirrors.groupsExcludePattern . A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported.
groupsExcludePattern | string | A regular expression matching the consumer groups to exclude from mirroring. Comma-separated lists are also supported.
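For orientation only, the following hypothetical fragment shows how these properties combine in the spec.mirrors list of a KafkaMirrorMaker2 resource. The cluster aliases, patterns, and connector config keys are illustrative assumptions rather than values taken from this reference, and the aliases must match clusters defined under spec.clusters :

mirrors:
  - sourceCluster: my-source-cluster
    targetCluster: my-target-cluster
    sourceConnector:
      config:
        # assumed connector option, adjust for your cluster
        replication.factor: 3
    checkpointConnector:
      config:
        # assumed connector option
        checkpoints.topic.replication.factor: 3
    heartbeatConnector:
      config:
        # assumed connector option
        heartbeats.topic.replication.factor: 3
    topicsPattern: "topic1|topic2|topic3"
    topicsExcludePattern: "internal-.*"
    groupsPattern: ".*"
    groupsExcludePattern: "temp-.*"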
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkamirrormaker2mirrorspec-reference
Appendix D. Preventing kernel modules from loading automatically
Appendix D. Preventing kernel modules from loading automatically You can prevent a kernel module from being loaded automatically, whether the module is loaded directly, loaded as a dependency from another module, or during the boot process. Procedure The module name must be added to a configuration file for the modprobe utility. This file must reside in the configuration directory /etc/modprobe.d . For more information on this configuration directory, see the man page modprobe.d . Ensure the module is not configured to get loaded in any of the following: /etc/modprobe.conf /etc/modprobe.d/* /etc/rc.modules /etc/sysconfig/modules/* # modprobe --showconfig <_configuration_file_name_> If the module appears in the output, ensure it is ignored and not loaded: # modprobe --ignore-install <_module_name_> Unload the module from the running system, if it is loaded: # modprobe -r <_module_name_> Prevent the module from being loaded directly by adding the blacklist line to a configuration file specific to the system - for example /etc/modprobe.d/local-dontload.conf : # echo "blacklist <_module_name_>" >> /etc/modprobe.d/local-dontload.conf Note This step does not prevent a module from loading if it is a required or an optional dependency of another module. Prevent optional modules from being loaded on demand: # echo "install <_module_name_> /bin/false" >> /etc/modprobe.d/local-dontload.conf Important If the excluded module is required for other hardware, excluding it might cause unexpected side effects. Make a backup copy of your initramfs : # cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak If the kernel module is part of the initramfs , rebuild your initial ramdisk image, omitting the module: # dracut --omit-drivers <_module_name_> -f Get the current kernel command line parameters: # grub2-editenv - list | grep kernelopts Append <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_> to the generated output: # grub2-editenv - set kernelopts="<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" For example: # grub2-editenv - set kernelopts="root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>" Make a backup copy of the kdump initramfs : # cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak Append rd.driver.blacklist=<_module_name_> to the KDUMP_COMMANDLINE_APPEND setting in /etc/sysconfig/kdump to omit it from the kdump initramfs : # sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/"USD/ rd.driver.blacklist=module_name"/' /etc/sysconfig/kdump Restart the kdump service to pick up the changes to the kdump initrd : # kdumpctl restart Rebuild the kdump initial ramdisk image: # mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img Reboot the system. D.1. Removing a module temporarily You can remove a module temporarily. Procedure Run modprobe to remove any currently-loaded module: # modprobe -r <module name> If the module cannot be unloaded, a process or another module might still be using the module. If so, terminate the process and run the modprobe command above again to unload the module.
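As an illustration of the persistent exclusion described in this appendix, if the module to be excluded were usb_storage (a hypothetical module name, not one prescribed by this appendix), the /etc/modprobe.d/local-dontload.conf file built up by the echo commands above would contain the following two lines:
blacklist usb_storage
install usb_storage /bin/false
After rebooting, lsmod | grep usb_storage returning no output confirms that the module is not loaded.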
[ "modprobe --showconfig <_configuration_file_name_>", "modprobe --ignore-install <_module_name_>", "modprobe -r <_module_name_>", "echo \"blacklist <_module_name_> >> /etc/modprobe.d/local-dontload.conf", "echo \"install <_module_name_>/bin/false\" >> /etc/modprobe.d/local-dontload.conf", "cp /boot/initramfs-USD(uname -r).img /boot/initramfs-USD(uname -r).img.USD(date +%m-%d-%H%M%S).bak", "dracut --omit-drivers <_module_name_> -f", "grub2-editenv - list | grep kernelopts", "grub2-editenv - set kernelopts=\"<> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "grub2-editenv - set kernelopts=\"root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>\"", "cp /boot/initramfs-USD(uname -r)kdump.img /boot/initramfs-USD(uname -r)kdump.img.USD(date +%m-%d-%H%M%S).bak", "sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/\"USD/ rd.driver.blacklist=module_name\"/' /etc/sysconfig/kdump", "kdumpctl restart", "mkdumprd -f /boot/initramfs-USD(uname -r)kdump.img", "modprobe -r <module name>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/proc-Preventing_Kernel_Modules_from_Loading_Automatically_Install_nodes_RHVH
Chapter 3. Getting started
Chapter 3. Getting started Streams for Apache Kafka is distributed in a ZIP file that contains installation artifacts for the Kafka components. Note The Kafka Bridge has separate installation files. For information on installing and using the Kafka Bridge, see Using the Streams for Apache Kafka Bridge . 3.1. Installation environment Streams for Apache Kafka runs on Red Hat Enterprise Linux. The host (node) can be a physical or virtual machine (VM). Use the installation files provided with Streams for Apache Kafka to install Kafka components. You can install Kafka in a single-node or multi-node environment. Single-node environment A single-node Kafka cluster runs instances of Kafka components on a single host. This configuration is not suitable for a production environment. Multi-node environment A multi-node Kafka cluster runs instances of Kafka components on multiple hosts. We recommend that you run Kafka and other Kafka components, such as Kafka Connect, on separate hosts. By running the components in this way, it's easier to maintain and upgrade each component. Kafka clients establish a connection to the Kafka cluster using the bootstrap.servers configuration property. If you are using Kafka Connect, for example, the Kafka Connect configuration properties must include a bootstrap.servers value that specifies the hostname and port of the hosts where the Kafka brokers are running. If the Kafka cluster is running on more than one host with multiple Kafka brokers, you specify a hostname and port for each broker. Each Kafka broker is identified by a node.id . 3.1.1. Data storage considerations An efficient data storage infrastructure is essential to the optimal performance of Streams for Apache Kafka. Block storage is required. File storage, such as NFS, does not work with Kafka. Choose from one of the following options for your block storage: Cloud-based block storage solutions, such as Amazon Elastic Block Store (EBS) Local storage Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI 3.1.2. File systems Kafka uses a file system for storing messages. Streams for Apache Kafka is compatible with the XFS and ext4 file systems, which are commonly used with Kafka. Consider the underlying architecture and requirements of your deployment when choosing and setting up your file system. For more information, refer to Filesystem Selection in the Kafka documentation. 3.2. Downloading Streams for Apache Kafka A ZIP file distribution of Streams for Apache Kafka is available for download from the Red Hat website. You can download the latest version of Red Hat Streams for Apache Kafka from the Streams for Apache Kafka software downloads page . For Kafka and other Kafka components, download the amq-streams-<version>-bin.zip file. For Kafka Bridge, download the amq-streams-<version>-bridge-bin.zip file. For installation instructions, see Using the Streams for Apache Kafka Bridge . 3.3. Installing Kafka Use the Streams for Apache Kafka ZIP files to install Kafka on Red Hat Enterprise Linux. You can install Kafka in a single-node or multi-node environment. In this procedure, a single Kafka instance is installed on a single host (node). The Streams for Apache Kafka installation files include the binaries for running other Kafka components, like Kafka Connect, Kafka MirrorMaker 2, and Kafka Bridge. In a single-node environment, you can run these components from the same host where you installed Kafka.
However, we recommend that you add the installation files and run other Kafka components on separate hosts. Prerequisites You have downloaded the installation files . You have reviewed the supported configurations in the Streams for Apache Kafka 2.7 on Red Hat Enterprise Linux Release Notes . You are logged in to Red Hat Enterprise Linux as admin ( root ) user. Procedure Install Kafka on your host. Add a new kafka user and group: groupadd kafka useradd -g kafka kafka passwd kafka Extract and move the contents of the amq-streams-<version>-bin.zip file into the /opt/kafka directory: unzip amq-streams-<version>-bin.zip -d /opt mv /opt/kafka*redhat* /opt/kafka Change the ownership of the /opt/kafka directory to the kafka user: chown -R kafka:kafka /opt/kafka Create directory /var/lib/kafka for storing Kafka data and set its ownership to the kafka user: mkdir /var/lib/kafka chown -R kafka:kafka /var/lib/kafka You can now run a default configuration of Kafka as a single-node cluster . You can also use the installation to run other Kafka components, like Kafka Connect, on the same host. To run other components, specify the hostname and port to connect to the Kafka broker using the bootstrap.servers property in the component configuration. Example bootstrap servers configuration pointing to a single Kafka broker on the same host bootstrap.servers=localhost:9092 However, we recommend installing and running Kafka components on separate hosts. (Optional) Install Kafka components on separate hosts. Extract the installation files to the /opt/kafka directory on each host. Change the ownership of the /opt/kafka directory to the kafka user. Add bootstrap.servers configuration that connects the component to the host (or hosts in a multi-node environment) running the Kafka brokers. Example bootstrap servers configuration pointing to Kafka brokers on different hosts bootstrap.servers=kafka0.<host_ip_address>:9092,kafka1.<host_ip_address>:9092,kafka2.<host_ip_address>:9092 You can use this configuration for Kafka Connect , MirrorMaker 2 , and the Kafka Bridge . 3.4. Running a Kafka cluster in KRaft mode Configure and run Kafka in KRaft mode. You can run Kafka as a single-node or multi-node Kafka cluster. Run a minimum of three broker and three controller nodes, with topic replication across the brokers, for stability and availability. Kafka nodes perform the role of broker, controller, or both. Broker role A broker, sometimes referred to as a node or server, orchestrates the storage and passing of messages. Controller role A controller coordinates the cluster and manages the metadata used to track the status of brokers and partitions. Note Cluster metadata is stored in the internal __cluster_metadata topic. You can use combined broker and controller nodes, though you might want to separate these functions. Brokers performing combined roles can be more convenient in simpler deployments. To identify a cluster, you create an ID. The ID is used when creating logs for the nodes you add to the cluster. Specify the following in the configuration of each node: A node ID Broker roles A list of nodes (or voters ) that act as controllers You specify a list of controllers, configured as voters , using the node ID and connection details (hostname and port) for each controller. You apply broker configuration, including the setting of roles, using a configuration properties file. Broker configuration differs according to role. KRaft provides three example broker configuration properties files. 
/opt/kafka/config/kraft/broker.properties has example configuration for a broker role /opt/kafka/config/kraft/controller.properties has example configuration for a controller role /opt/kafka/config/kraft/server.properties has example configuration for a combined role You can base your broker configuration on these example properties files. In this procedure, the example server.properties configuration is used. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. Procedure Generate a unique ID for the Kafka cluster. You can use the kafka-storage tool to do this: /opt/kafka/bin/kafka-storage.sh random-uuid The command returns an ID. A cluster ID is required in KRaft mode. Create a configuration properties file for each node in the cluster. You can base the file on the examples provided with Kafka. Specify a role as broker , controller , or broker, controller . For example, specify process.roles=broker, controller for a combined role. Specify a unique node.id for each node in the cluster starting from 0 . For example, node.id=1 . Specify a list of controller.quorum.voters in the format <node_id>@<hostname:port> . For example, controller.quorum.voters=1@localhost:9093 . Specify listeners: Configure the name, hostname and port for each listener. For example, listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093 . Configure the listener names used for inter-broker communication. For example, inter.broker.listener.name=PLAINTEXT . Configure the listener names used by the controller quorum. For example, controller.listener.names=CONTROLLER . Configure the name, hostname and port for each listener that is advertised to clients for connection to Kafka. For example, advertised.listeners=PLAINTEXT://localhost:9092 . Set up log directories for each node in your Kafka cluster: /opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/server.properties Returns: Formatting /tmp/kraft-combined-logs Replace <uuid> with the cluster ID you generated. Use the same ID for each node in your cluster. Apply the broker configuration using the properties file you created for the broker. By default, the log directory ( log.dirs ) specified in the server.properties configuration file is set to /tmp/kraft-combined-logs . The /tmp directory is typically cleared on each system reboot, making it suitable for development environments only. You can add a comma-separated list to set up multiple log directories. Start each Kafka node. /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server.properties Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka /opt/kafka/config/kraft/server.properties Check the logs of each node to ensure that they have successfully joined the KRaft cluster: tail -f /opt/kafka/logs/server.log You can now create topics, and send and receive messages from the brokers. For brokers passing messages, you can use topic replication across the brokers in a cluster for data durability. Configure topics to have a replication factor of at least three and a minimum number of in-sync replicas set to 1 less than the replication factor. For more information, see Section 7.7, "Creating a topic" . 3.5. Stopping the Streams for Apache Kafka services You can stop Kafka services by running a script. After running the script, all connections to the Kafka services are terminated. Procedure Stop the Kafka node. su - kafka /opt/kafka/bin/kafka-server-stop.sh Confirm that the Kafka node is stopped.
jcmd | grep kafka 3.6. Performing a graceful rolling restart of Kafka brokers This procedure shows how to do a graceful rolling restart of brokers in a multi-node cluster. A rolling restart is usually required following an upgrade or change to the Kafka cluster configuration properties. Note Some broker configurations do not need a restart of the broker. For more information, see Updating Broker Configs in the Apache Kafka documentation. After you perform a restart of a broker, check for under-replicated topic partitions to make sure that replica partitions have caught up. To achieve a graceful restart with no loss of availability, ensure that you are replicating topics and that at least the minimum number of replicas ( min.insync.replicas ) replicas are in sync. The min.insync.replicas configuration determines the minimum number of replicas that must acknowledge a write for the write to be considered successful. For a multi-node cluster, the standard approach is to have a topic replication factor of at least 3 and a minimum number of in-sync replicas set to 1 less than the replication factor. If you are using acks=all in your producer configuration for data durability, check that the broker you restarted is in sync with all the partitions it's replicating before restarting the broker. Single-node clusters are unavailable during a restart, since all partitions are on the same broker. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. The Kafka cluster is operating as expected. Check for under-replicated partitions or any other issues affecting broker operation. The steps in this procedure describe how to check for under-replicated partitions. Procedure Perform the following steps on each Kafka broker. Complete the steps on the first broker before moving on to the . Perform the steps on the brokers that also act as controllers last. Otherwise, the controllers need to change on more than one restart. Stop the Kafka broker: /opt/kafka/bin/kafka-server-stop.sh Make any changes to the broker configuration that require a restart after completion. For further information, see the following: Configuring Kafka Upgrading Kafka nodes Restart the Kafka broker: /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties Check that Kafka is running: jcmd | grep kafka Returns: process ID kafka.Kafka /opt/kafka/config/kraft/server.properties Check the logs of each node to ensure that they have successfully joined the KRaft cluster: tail -f /opt/kafka/logs/server.log Wait until the broker has zero under-replicated partitions. You can check from the command line or use metrics. Use the kafka-topics.sh command with the --under-replicated-partitions parameter: /opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --under-replicated-partitions For example: /opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions The command provides a list of topics with under-replicated partitions in a cluster. Topics with under-replicated partitions Topic: topic3 Partition: 4 Leader: 2 Replicas: 2,3 Isr: 2 Topic: topic3 Partition: 5 Leader: 3 Replicas: 1,2 Isr: 1 Topic: topic1 Partition: 1 Leader: 3 Replicas: 1,3 Isr: 3 # ... Under-replicated partitions are listed if the ISR (in-sync replica) count is less than the number of replicas. If a list is not returned, there are no under-replicated partitions. 
Use the UnderReplicatedPartitions metric: kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions The metric provides a count of partitions where replicas have not caught up. Wait until the count is zero. Tip Use the Kafka Exporter to create an alert when there are one or more under-replicated partitions for a topic. Checking logs when restarting If a broker fails to start, check the application logs for information. You can also check the status of a broker shutdown and restart in the /opt/kafka/logs/server.log application log.
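As a consolidated illustration of the KRaft settings described in Section 3.4, a minimal server.properties sketch for a single combined-role node might look as follows. The node ID, the localhost addresses, and the /var/lib/kafka/kraft-combined-logs log directory are assumptions chosen for this example, not values required by this guide:
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
advertised.listeners=PLAINTEXT://localhost:9092
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
log.dirs=/var/lib/kafka/kraft-combined-logs
In a multi-node cluster, each node would use its own node.id and hostname, and controller.quorum.voters would list every controller node.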
[ "groupadd kafka useradd -g kafka kafka passwd kafka", "unzip amq-streams-<version>-bin.zip -d /opt mv /opt/kafka*redhat* /opt/kafka", "chown -R kafka:kafka /opt/kafka", "mkdir /var/lib/kafka chown -R kafka:kafka /var/lib/kafka", "bootstrap.servers=localhost:9092", "bootstrap.servers=kafka0.<host_ip_address>:9092,kafka1.<host_ip_address>:9092,kafka2.<host_ip_address>:9092", "/opt/kafka/bin/kafka-storage.sh random-uuid", "/opt/kafka/bin/kafka-storage.sh format -t <uuid> -c /opt/kafka/config/kraft/server.properties", "Formatting /tmp/kraft-combined-logs", "/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server.properties", "jcmd | grep kafka", "process ID kafka.Kafka /opt/kafka/config/kraft/server.properties", "tail -f /opt/kafka/logs/server.log", "su - kafka /opt/kafka/bin/kafka-server-stop.sh", "jcmd | grep kafka", "/opt/kafka/bin/kafka-server-stop.sh", "/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/kraft/server.properties", "jcmd | grep kafka", "process ID kafka.Kafka /opt/kafka/config/kraft/server.properties", "tail -f /opt/kafka/logs/server.log", "/opt/kafka/bin/kafka-topics.sh --bootstrap-server <broker_host>:<port> --describe --under-replicated-partitions", "/opt/kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-replicated-partitions", "Topic: topic3 Partition: 4 Leader: 2 Replicas: 2,3 Isr: 2 Topic: topic3 Partition: 5 Leader: 3 Replicas: 1,2 Isr: 1 Topic: topic1 Partition: 1 Leader: 3 Replicas: 1,3 Isr: 3 ...", "kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-getting-started-str
Chapter 1. Security hardening settings for SAP HANA
Chapter 1. Security hardening settings for SAP HANA You should consider the following before applying the approaches and practices to SAP HANA and SAP application systems: You can install SAP HANA or SAP NetWeaver software and relevant packages with the help of RHEL System Roles for SAP. For more information, refer to Red Hat Enterprise Linux System Roles for SAP and Installing the Minimum Amount of Packages Required . You should implement the recommended settings and steps on a non-production system before making any changes or editing the files according to the Security Hardening guide. It is recommended that you backup the system. You must at least make a backup of the /etc directory. If you follow the steps described in Blocking and allowing applications by using fapolicyd , you must also perform the steps described in the Configuring fapolicyd to allow only SAP HANA executables document. If you follow the steps described in Using SELinux for RHEL, you must also perform the steps described in Using SELinux for SAP HANA. To enhance users' management and access to the RHEL for SAP Solution system, you can configure secure remote communication, sudo access, and set password policy and complexity. For more information, refer to the following: Using secure communications between two systems with OpenSSH Managing sudo access What is pam_faillock and how to use it in Red Hat Enterprise Linux 8 & 9? Set Password Policy & Complexity for RHEL 8 & 9 via pam_pwhistory, pam_pwquality & pam_faillock To keep your Red Hat Enterprise Linux for SAP Solutions systems secured against newly discovered threats and vulnerabilities, refer to Managing and monitoring security updates .
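As a minimal sketch of the backup recommendation above (at minimum, back up the /etc directory), you could archive the directory before applying any hardening changes; the archive path /root/etc-backup.tar.gz is an assumption for this example, not a location prescribed by this guide:
tar -czf /root/etc-backup.tar.gz /etc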
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/security_hardening_guide_for_sap_hana/asmb_settings_security-hardening
30.9. Removing sudo Commands, Command Groups, and Rules
30.9. Removing sudo Commands, Command Groups, and Rules Removing sudo Commands, Command Groups, and Rules in the Web UI Under the Policy tab, click Sudo and select Sudo Rules , Sudo Commands , or Sudo Command Groups . Select the command, command group, or rule to delete, and click Delete . Figure 30.16. Deleting a sudo Command Removing sudo Commands, Command Groups, and Rules from the Command Line To delete a command, command group, or rule, use the following commands: ipa sudocmd-del ipa sudocmdgroup-del ipa sudorule-del For more information about these commands and the options they accept, run them with the --help option added.
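Each of these commands takes the name of the object to delete as an argument. For example, to delete a rule named demo-sudo-rule (a hypothetical placeholder, not an object from this guide):
ipa sudorule-del demo-sudo-rule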
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/removing-sudo
Chapter 2. OpenShift CLI (oc)
Chapter 2. OpenShift CLI (oc) 2.1. Getting started with the OpenShift CLI 2.1.1. About the OpenShift CLI With the OpenShift CLI ( oc ), you can create applications and manage OpenShift Container Platform projects from a terminal. The OpenShift CLI is ideal in the following situations: Working directly with project source code Scripting OpenShift Container Platform operations Managing projects while restricted by bandwidth resources and the web console is unavailable 2.1.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) either by downloading the binary or by using an RPM. 2.1.2.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.2. Installing the OpenShift CLI by using the web console You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a web console. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . 2.1.2.2.1. 
Installing the OpenShift CLI on Linux using the web console You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select appropriate oc binary for your Linux platform, and then click Download oc for Linux . Save the file. Unpack the archive. USD tar xvf <file> Move the oc binary to a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.2.2. Installing the OpenShift CLI on Windows using the web console You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for Windows platform, and then click Download oc for Windows for x86_64 . Save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> 2.1.2.2.3. Installing the OpenShift CLI on macOS using the web console You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure From the web console, click ? . Click Command Line Tools . Select the oc binary for macOS platform, and then click Download oc for Mac for x86_64 . Note For macOS arm64, click Download oc for Mac for ARM 64 . Save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 2.1.2.3. Installing the OpenShift CLI by using an RPM For Red Hat Enterprise Linux (RHEL), you can install the OpenShift CLI ( oc ) as an RPM if you have an active OpenShift Container Platform subscription on your Red Hat account. Important You must install oc for RHEL 9 by downloading the binary. Installing oc by using an RPM package is not supported on Red Hat Enterprise Linux (RHEL) 9. Prerequisites Must have root or sudo privileges. Procedure Register with Red Hat Subscription Manager: # subscription-manager register Pull the latest subscription data: # subscription-manager refresh List the available subscriptions: # subscription-manager list --available --matches '*OpenShift*' In the output for the command, find the pool ID for an OpenShift Container Platform subscription and attach the subscription to the registered system: # subscription-manager attach --pool=<pool_id> Enable the repositories required by OpenShift Container Platform 4.15. # subscription-manager repos --enable="rhocp-4.15-for-rhel-8-x86_64-rpms" Install the openshift-clients package: # yum install openshift-clients Verification Verify your installation by using an oc command: USD oc <command> 2.1.2.4. Installing the OpenShift CLI by using Homebrew For macOS, you can install the OpenShift CLI ( oc ) by using the Homebrew package manager. Prerequisites You must have Homebrew ( brew ) installed. Procedure Install the openshift-cli package by running the following command: USD brew install openshift-cli Verification Verify your installation by using an oc command: USD oc <command> 2.1.3. 
Logging in to the OpenShift CLI You can log in to the OpenShift CLI ( oc ) to access and manage your cluster. Prerequisites You must have access to an OpenShift Container Platform cluster. The OpenShift CLI ( oc ) is installed. Note To access a cluster that is accessible only over an HTTP proxy server, you can set the HTTP_PROXY , HTTPS_PROXY and NO_PROXY variables. These environment variables are respected by the oc CLI so that all communication with the cluster goes through the HTTP proxy. Authentication headers are sent only when using HTTPS transport. Procedure Enter the oc login command and pass in a user name: USD oc login -u user1 When prompted, enter the required information: Example output Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started. 1 Enter the OpenShift Container Platform server URL. 2 Enter whether to use insecure connections. 3 Enter the user's password. Note If you are logged in to the web console, you can generate an oc login command that includes your token and server information. You can use the command to log in to the OpenShift Container Platform CLI without the interactive prompts. To generate the command, select Copy login command from the username drop-down menu at the top right of the web console. You can now create a project or issue other commands for managing your cluster. 2.1.4. Logging in to the OpenShift CLI using a web browser You can log in to the OpenShift CLI ( oc ) with the help of a web browser to access and manage your cluster. This allows users to avoid inserting their access token into the command line. Warning Logging in to the CLI through the web browser runs a server on localhost with HTTP, not HTTPS; use with caution on multi-user workstations. Prerequisites You must have access to an OpenShift Container Platform cluster. You must have installed the OpenShift CLI ( oc ). You must have a browser installed. Procedure Enter the oc login command with the --web flag: USD oc login <cluster_url> --web 1 1 Optionally, you can specify the server URL and callback port. For example, oc login <cluster_url> --web --callback-port 8280 localhost:8443 . The web browser opens automatically. If it does not, click the link in the command output. If you do not specify the OpenShift Container Platform server oc tries to open the web console of the cluster specified in the current oc configuration file. If no oc configuration exists, oc prompts interactively for the server URL. Example output Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session. If more than one identity provider is available, select your choice from the options provided. Enter your username and password into the corresponding browser fields. After you are logged in, the browser displays the text access token received successfully; please return to your terminal . Check the CLI for a login confirmation. Example output Login successful. You don't have any projects. 
You can try to create a new project, by running oc new-project <projectname> Note The web console defaults to the profile used in the session. To switch between Administrator and Developer profiles, log out of the OpenShift Container Platform web console and clear the cache. You can now create a project or issue other commands for managing your cluster. 2.1.5. Using the OpenShift CLI Review the following sections to learn how to complete common tasks using the CLI. 2.1.5.1. Creating a project Use the oc new-project command to create a new project. USD oc new-project my-project Example output Now using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.2. Creating a new app Use the oc new-app command to create a new application. USD oc new-app https://github.com/sclorg/cakephp-ex Example output --> Found image 40de956 (9 days old) in imagestream "openshift/php" under tag "7.2" for "php" ... Run 'oc status' to view your app. 2.1.5.3. Viewing pods Use the oc get pods command to view the pods for the current project. Note When you run oc inside a pod and do not specify a namespace, the namespace of the pod is used by default. USD oc get pods -o wide Example output NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none> 2.1.5.4. Viewing pod logs Use the oc logs command to view logs for a particular pod. USD oc logs cakephp-ex-1-deploy Example output --> Scaling cakephp-ex-1 to 1 --> Success 2.1.5.5. Viewing the current project Use the oc project command to view the current project. USD oc project Example output Using project "my-project" on server "https://openshift.example.com:6443". 2.1.5.6. Viewing the status for the current project Use the oc status command to view information about the current project, such as services, deployments, and build configs. USD oc status Example output In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details. 2.1.5.7. Listing supported API resources Use the oc api-resources command to view the list of supported API resources on the server. USD oc api-resources Example output NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap ... 2.1.6. Getting help You can get help with CLI commands and OpenShift Container Platform resources in the following ways: Use oc help to get a list and description of all available CLI commands: Example: Get general help for the CLI USD oc help Example output OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application ... 
Use the --help flag to get help about a specific CLI command: Example: Get help for the oc create command USD oc create --help Example output Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags] ... Use the oc explain command to view the description and fields for a particular resource: Example: View documentation for the Pod resource USD oc explain pods Example output KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources ... 2.1.7. Logging out of the OpenShift CLI You can log out the OpenShift CLI to end your current session. Use the oc logout command. USD oc logout Example output Logged "user1" out on "https://openshift.example.com" This deletes the saved authentication token from the server and removes it from your configuration file. 2.2. Configuring the OpenShift CLI 2.2.1. Enabling tab completion You can enable tab completion for the Bash or Zsh shells. 2.2.1.1. Enabling tab completion for Bash After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Bash shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. You must have the package bash-completion installed. Procedure Save the Bash completion code to a file: USD oc completion bash > oc_bash_completion Copy the file to /etc/bash_completion.d/ : USD sudo cp oc_bash_completion /etc/bash_completion.d/ You can also save the file to a local directory and source it from your .bashrc file instead. Tab completion is enabled when you open a new terminal. 2.2.1.2. Enabling tab completion for Zsh After you install the OpenShift CLI ( oc ), you can enable tab completion to automatically complete oc commands or suggest options when you press Tab. The following procedure enables tab completion for the Zsh shell. Prerequisites You must have the OpenShift CLI ( oc ) installed. Procedure To add tab completion for oc to your .zshrc file, run the following command: USD cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF Tab completion is enabled when you open a new terminal. 2.3. Usage of oc and kubectl commands The Kubernetes command-line interface (CLI), kubectl , can be used to run commands against a Kubernetes cluster. Because OpenShift Container Platform is a certified Kubernetes distribution, you can use the supported kubectl binaries that ship with OpenShift Container Platform , or you can gain extended functionality by using the oc binary. 2.3.1. The oc binary The oc binary offers the same capabilities as the kubectl binary, but it extends to natively support additional OpenShift Container Platform features, including: Full support for OpenShift Container Platform resources Resources such as DeploymentConfig , BuildConfig , Route , ImageStream , and ImageStreamTag objects are specific to OpenShift Container Platform distributions, and build upon standard Kubernetes primitives. 
Authentication The oc binary offers a built-in login command for authentication and lets you work with projects, which map Kubernetes namespaces to authenticated users. Read Understanding authentication for more information. Additional commands The additional command oc new-app , for example, makes it easier to get new applications started using existing source code or pre-built images. Similarly, the additional command oc new-project makes it easier to start a project that you can switch to as your default. Important If you installed an earlier version of the oc binary, you cannot use it to complete all of the commands in OpenShift Container Platform 4.15 . If you want the latest features, you must download and install the latest version of the oc binary corresponding to your OpenShift Container Platform server version. Non-security API changes will involve, at minimum, two minor releases (4.1 to 4.2 to 4.3, for example) to allow older oc binaries to update. Using new capabilities might require newer oc binaries. A 4.3 server might have additional capabilities that a 4.2 oc binary cannot use and a 4.3 oc binary might have additional capabilities that are unsupported by a 4.2 server. Table 2.1. Compatibility Matrix X.Y ( oc Client) X.Y+N ( oc Client, where N is a number greater than or equal to 1) X.Y (Server) X.Y+N (Server) Fully compatible. oc client might not be able to access server features. oc client might provide options and features that might not be compatible with the accessed server. 2.3.2. The kubectl binary The kubectl binary is provided as a means to support existing workflows and scripts for new OpenShift Container Platform users coming from a standard Kubernetes environment, or for those who prefer to use the kubectl CLI. Existing users of kubectl can continue to use the binary to interact with Kubernetes primitives, with no changes required to the OpenShift Container Platform cluster. You can install the supported kubectl binary by following the steps to Install the OpenShift CLI . The kubectl binary is included in the archive if you download the binary, or is installed when you install the CLI by using an RPM. For more information, see the kubectl documentation . 2.4. Managing CLI profiles A CLI configuration file allows you to configure different profiles, or contexts, for use with the CLI tools overview . A context consists of user authentication and OpenShift Container Platform server information associated with a nickname . 2.4.1. About switches between CLI profiles Contexts allow you to easily switch between multiple users across multiple OpenShift Container Platform servers, or clusters, when using CLI operations. Nicknames make managing CLI configurations easier by providing short-hand references to contexts, user credentials, and cluster details. After a user logs in with the oc CLI for the first time, OpenShift Container Platform creates a ~/.kube/config file if one does not already exist.
As more authentication and connection details are provided to the CLI, either automatically during an oc login operation or by manually configuring CLI profiles, the updated information is stored in the configuration file: CLI config file apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k 1 The clusters section defines connection details for OpenShift Container Platform clusters, including the address for their master server. In this example, one cluster is nicknamed openshift1.example.com:8443 and another is nicknamed openshift2.example.com:8443 . 2 This contexts section defines two contexts: one nicknamed alice-project/openshift1.example.com:8443/alice , using the alice-project project, openshift1.example.com:8443 cluster, and alice user, and another nicknamed joe-project/openshift1.example.com:8443/alice , using the joe-project project, openshift1.example.com:8443 cluster and alice user. 3 The current-context parameter shows that the joe-project/openshift1.example.com:8443/alice context is currently in use, allowing the alice user to work in the joe-project project on the openshift1.example.com:8443 cluster. 4 The users section defines user credentials. In this example, the user nickname alice/openshift1.example.com:8443 uses an access token. The CLI can support multiple configuration files which are loaded at runtime and merged together along with any override options specified from the command line. After you are logged in, you can use the oc status or oc project command to verify your current working environment: Verify the current working environment USD oc status Example output oc status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. You can use 'oc get all' to see lists of each of the types described in this example. List the current project USD oc project Example output Using project "joe-project" from context named "joe-project/openshift1.example.com:8443/alice" on server "https://openshift1.example.com:8443". You can run the oc login command again and supply the required information during the interactive process, to log in using any other combination of user credentials and cluster details. A context is constructed based on the supplied information if one does not already exist. 
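To review the contexts that have accumulated in the loaded configuration, and to see which one is currently active, you can list them before switching. The get-contexts subcommand is standard kubectl config behavior exposed through oc config , shown here as a general illustration rather than a step from this guide:
USD oc config get-contexts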
If you are already logged in and want to switch to another project the current user already has access to, use the oc project command and enter the name of the project: USD oc project alice-project Example output Now using project "alice-project" on server "https://openshift1.example.com:8443". At any time, you can use the oc config view command to view your current CLI configuration, as seen in the output. Additional CLI configuration commands are also available for more advanced usage. Note If you have access to administrator credentials but are no longer logged in as the default system user system:admin , you can log back in as this user at any time as long as the credentials are still present in your CLI config file. The following command logs in and switches to the default project: USD oc login -u system:admin -n default 2.4.2. Manual configuration of CLI profiles Note This section covers more advanced usage of CLI configurations. In most situations, you can use the oc login and oc project commands to log in and switch between contexts and projects. If you want to manually configure your CLI config files, you can use the oc config command instead of directly modifying the files. The oc config command includes a number of helpful sub-commands for this purpose: Table 2.2. CLI configuration subcommands Subcommand Usage set-cluster Sets a cluster entry in the CLI config file. If the referenced cluster nickname already exists, the specified information is merged in. USD oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true] set-context Sets a context entry in the CLI config file. If the referenced context nickname already exists, the specified information is merged in. USD oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>] use-context Sets the current context using the specified context nickname. USD oc config use-context <context_nickname> set Sets an individual value in the CLI config file. USD oc config set <property_name> <property_value> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. The <property_value> is the new value being set. unset Unsets individual values in the CLI config file. USD oc config unset <property_name> The <property_name> is a dot-delimited name where each token represents either an attribute name or a map key. view Displays the merged CLI configuration currently in use. USD oc config view Displays the result of the specified CLI config file. USD oc config view --config=<specific_filename> Example usage Log in as a user that uses an access token. 
This token is used by the alice user: USD oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 View the cluster entry automatically created: USD oc config view Example output apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0 Update the current context to have users log in to the desired namespace: USD oc config set-context `oc config current-context` --namespace=<project_name> Examine the current context, to confirm that the changes are implemented: USD oc whoami -c All subsequent CLI operations uses the new context, unless otherwise specified by overriding CLI options or until the context is switched. 2.4.3. Load and merge rules You can follow these rules, when issuing CLI operations for the loading and merging order for the CLI configuration: CLI config files are retrieved from your workstation, using the following hierarchy and merge rules: If the --config option is set, then only that file is loaded. The flag is set once and no merging takes place. If the USDKUBECONFIG environment variable is set, then it is used. The variable can be a list of paths, and if so the paths are merged together. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list. Otherwise, the ~/.kube/config file is used and no merging takes place. The context to use is determined based on the first match in the following flow: The value of the --context option. The current-context value from the CLI config file. An empty value is allowed at this stage. The user and cluster to use is determined. At this point, you may or may not have a context; they are built based on the first match in the following flow, which is run once for the user and once for the cluster: The value of the --user for user name and --cluster option for cluster name. If the --context option is present, then use the context's value. An empty value is allowed at this stage. The actual cluster information to use is determined. At this point, you may or may not have cluster information. Each piece of the cluster information is built based on the first match in the following flow: The values of any of the following command line options: --server , --api-version --certificate-authority --insecure-skip-tls-verify If cluster information and a value for the attribute is present, then use it. If you do not have a server location, then there is an error. The actual user information to use is determined. Users are built using the same rules as clusters, except that you can only have one authentication technique per user; conflicting techniques cause the operation to fail. Command line options take precedence over config file values. Valid command line options are: --auth-path --client-certificate --client-key --token For any information that is still missing, default values are used and prompts are given for additional information. 2.5. 
Extending the OpenShift CLI with plugins You can write and install plugins to build on the default oc commands, allowing you to perform new and more complex tasks with the OpenShift Container Platform CLI. 2.5.1. Writing CLI plugins You can write a plugin for the OpenShift Container Platform CLI in any programming language or script that allows you to write command-line commands. Note that you can not use a plugin to overwrite an existing oc command. Procedure This procedure creates a simple Bash plugin that prints a message to the terminal when the oc foo command is issued. Create a file called oc-foo . When naming your plugin file, keep the following in mind: The file must begin with oc- or kubectl- to be recognized as a plugin. The file name determines the command that invokes the plugin. For example, a plugin with the file name oc-foo-bar can be invoked by a command of oc foo bar . You can also use underscores if you want the command to contain dashes. For example, a plugin with the file name oc-foo_bar can be invoked by a command of oc foo-bar . Add the following contents to the file. #!/bin/bash # optional argument handling if [[ "USD1" == "version" ]] then echo "1.0.0" exit 0 fi # optional argument handling if [[ "USD1" == "config" ]] then echo USDKUBECONFIG exit 0 fi echo "I am a plugin named kubectl-foo" After you install this plugin for the OpenShift Container Platform CLI, it can be invoked using the oc foo command. Additional resources Review the Sample plugin repository for an example of a plugin written in Go. Review the CLI runtime repository for a set of utilities to assist in writing plugins in Go. 2.5.2. Installing and using CLI plugins After you write a custom plugin for the OpenShift Container Platform CLI, you must install the plugin before use. Prerequisites You must have the oc CLI tool installed. You must have a CLI plugin file that begins with oc- or kubectl- . Procedure If necessary, update the plugin file to be executable. USD chmod +x <plugin_file> Place the file anywhere in your PATH , such as /usr/local/bin/ . USD sudo mv <plugin_file> /usr/local/bin/. Run oc plugin list to make sure that the plugin is listed. USD oc plugin list Example output The following compatible plugins are available: /usr/local/bin/<plugin_file> If your plugin is not listed here, verify that the file begins with oc- or kubectl- , is executable, and is on your PATH . Invoke the new command or option introduced by the plugin. For example, if you built and installed the kubectl-ns plugin from the Sample plugin repository , you can use the following command to view the current namespace. USD oc ns Note that the command to invoke the plugin depends on the plugin file name. For example, a plugin with the file name of oc-foo-bar is invoked by the oc foo bar command. 2.6. Managing CLI plugins with Krew You can use Krew to install and manage plugins for the OpenShift CLI ( oc ). Important Using Krew to install and manage plugins for the OpenShift CLI is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.6.1. 
Installing a CLI plugin with Krew You can install a plugin for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. Procedure To list all available plugins, run the following command: USD oc krew search To get information about a plugin, run the following command: USD oc krew info <plugin_name> To install a plugin, run the following command: USD oc krew install <plugin_name> To list all plugins that were installed by Krew, run the following command: USD oc krew list 2.6.2. Updating a CLI plugin with Krew You can update a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To update a single plugin, run the following command: USD oc krew upgrade <plugin_name> To update all plugins that were installed by Krew, run the following command: USD oc krew upgrade 2.6.3. Uninstalling a CLI plugin with Krew You can uninstall a plugin that was installed for the OpenShift CLI ( oc ) with Krew. Prerequisites You have installed Krew by following the installation procedure in the Krew documentation. You have installed a plugin for the OpenShift CLI with Krew. Procedure To uninstall a plugin, run the following command: USD oc krew uninstall <plugin_name> 2.6.4. Additional resources Krew Extending the OpenShift CLI with plugins 2.7. OpenShift CLI developer command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) developer commands. For administrator commands, see the OpenShift CLI administrator command reference . Run oc help to list all commands or run oc <command> --help to get additional details for a specific command. 2.7.1. OpenShift CLI (oc) developer commands 2.7.1.1. oc annotate Update the annotations on a resource Example usage # Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in "pod.json" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description- 2.7.1.2. oc api-resources Print the supported API resources on the server Example usage # Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io 2.7.1.3. 
oc api-versions Print the supported API versions on the server, in the form of "group/version" Example usage # Print the supported API versions oc api-versions 2.7.1.4. oc apply Apply a configuration to a resource by file name or stdin Example usage # Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap 2.7.1.5. oc apply edit-last-applied Edit latest last-applied-configuration annotations of a resource/object Example usage # Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json 2.7.1.6. oc apply set-last-applied Set the last-applied-configuration annotation on a live object to match the contents of a file Example usage # Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true 2.7.1.7. oc apply view-last-applied View the latest last-applied-configuration annotations of a resource/object Example usage # View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json 2.7.1.8. oc attach Attach to a running container Example usage # Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx 2.7.1.9. oc auth can-i Check whether an action is allowed Example usage # Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account "foo" of namespace "dev" can list pods # in the namespace "prod". # You must be allowed to use impersonation for the global option "--as". 
oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace ("*" means all) oc auth can-i '*' '*' # Check to see if I can get the job named "bar" in namespace "foo" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace "foo" oc auth can-i --list --namespace=foo 2.7.1.10. oc auth reconcile Reconciles rules for RBAC role, role binding, cluster role, and cluster role binding objects Example usage # Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml 2.7.1.11. oc auth whoami Experimental: Check self subject attributes Example usage # Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. oc auth whoami -o json 2.7.1.12. oc autoscale Autoscale a deployment config, deployment, replica set, stateful set, or replication controller Example usage # Auto scale a deployment "foo", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller "foo", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80 2.7.1.13. oc cancel-build Cancel running, pending, or new builds Example usage # Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new 2.7.1.14. oc cluster-info Display cluster information Example usage # Print the address of the control plane and cluster services oc cluster-info 2.7.1.15. oc cluster-info dump Dump relevant information for debugging and diagnosis Example usage # Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state 2.7.1.16. oc completion Output shell completion code for the specified shell (bash, zsh, fish, or powershell) Example usage # Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
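## For example, on a Fedora or RHEL system this might look like the following ## (an illustrative assumption; substitute your distribution's package manager) sudo dnf install bash-completion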
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf " # oc shell completion source 'USDHOME/.kube/completion.bash.inc' " >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > "USD{fpath[1]}/_oc" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\.kube\completion.ps1 Add-Content USDPROFILE "USDHOME\.kube\completion.ps1" ## Execute completion code in the profile Add-Content USDPROFILE "if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE 2.7.1.17. oc config current-context Display the current-context Example usage # Display the current-context oc config current-context 2.7.1.18. oc config delete-cluster Delete the specified cluster from the kubeconfig Example usage # Delete the minikube cluster oc config delete-cluster minikube 2.7.1.19. oc config delete-context Delete the specified context from the kubeconfig Example usage # Delete the context for the minikube cluster oc config delete-context minikube 2.7.1.20. oc config delete-user Delete the specified user from the kubeconfig Example usage # Delete the minikube user oc config delete-user minikube 2.7.1.21. oc config get-clusters Display clusters defined in the kubeconfig Example usage # List the clusters that oc knows about oc config get-clusters 2.7.1.22. oc config get-contexts Describe one or many contexts Example usage # List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context 2.7.1.23. oc config get-users Display users defined in the kubeconfig Example usage # List the users that oc knows about oc config get-users 2.7.1.24. oc config new-admin-kubeconfig Generate, make the server trust, and display a new admin.kubeconfig. Example usage # Generate a new admin kubeconfig oc config new-admin-kubeconfig 2.7.1.25. oc config new-kubelet-bootstrap-kubeconfig Generate, make the server trust, and display a new kubelet /etc/kubernetes/kubeconfig. Example usage # Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig 2.7.1.26. oc config refresh-ca-bundle Update the OpenShift CA bundle by contacting the apiserver. Example usage # Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's apiserver. oc config refresh-ca-bundle --dry-run 2.7.1.27. 
oc config rename-context Rename a context from the kubeconfig file Example usage # Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name 2.7.1.28. oc config set Set an individual value in a kubeconfig file Example usage # Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo "cert_data_here" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true 2.7.1.29. oc config set-cluster Set a cluster entry in kubeconfig Example usage # Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4 2.7.1.30. oc config set-context Set a context entry in kubeconfig Example usage # Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin 2.7.1.31. 
oc config set-credentials Set a user entry in kubeconfig Example usage # Set only the "client-key" field on the "cluster-admin" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the "cluster-admin" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the "cluster-admin" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the "cluster-admin" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the "client-secret" config value for the OpenID Connect auth provider for the "cluster-admin" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin arguments for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the "cluster-admin" entry oc config set-credentials cluster-admin --exec-env=var-to-remove- 2.7.1.32. oc config unset Unset an individual value in a kubeconfig file Example usage # Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace 2.7.1.33. oc config use-context Set the current-context in a kubeconfig file Example usage # Use the context for the minikube cluster oc config use-context minikube 2.7.1.34. oc config view Display merged kubeconfig settings or a specified kubeconfig file Example usage # Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == "e2e")].user.password}' 2.7.1.35. oc cp Copy files and directories to and from containers Example usage # !!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. 
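# To check whether 'tar' is available in the target container before copying # (the pod, namespace, and container names here are illustrative placeholders) oc exec -n <some-namespace> <some-pod> -c <some-container> -- tar --version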
# Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar 2.7.1.36. oc create Create a resource from a file or from stdin Example usage # Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json 2.7.1.37. oc create build Create a new build Example usage # Create a new build oc create build myapp 2.7.1.38. oc create clusterresourcequota Create a cluster resource quota Example usage # Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10 2.7.1.39. oc create clusterrole Create a cluster role Example usage # Create a cluster role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named "pod-reader" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named "foo" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named "foo" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name "foo" with NonResourceURL specified oc create clusterrole "foo" --verb=get --non-resource-url=/logs/* # Create a cluster role name "monitoring" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule="rbac.example.com/aggregate-to-monitoring=true" 2.7.1.40. oc create clusterrolebinding Create a cluster role binding for a particular cluster role Example usage # Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1 2.7.1.41. 
oc create configmap Create a config map from a local file, directory or literal value Example usage # Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.42. oc create cronjob Create a cron job with the specified name Example usage # Create a cron job oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule="*/1 * * * *" -- date 2.7.1.43. oc create deployment Create a deployment with the specified name Example usage # Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701 2.7.1.44. oc create deploymentconfig Create a deployment config with default options that uses a given image Example usage # Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx 2.7.1.45. oc create identity Manually create an identity (only needed if automatic creation is disabled) Example usage # Create an identity with identity provider "acme_ldap" and the identity provider username "adamjones" oc create identity acme_ldap:adamjones 2.7.1.46. oc create imagestream Create a new empty image stream Example usage # Create a new image stream oc create imagestream mysql 2.7.1.47. oc create imagestreamtag Create a new image stream tag Example usage # Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0 2.7.1.48. 
oc create ingress Create an ingress with the specified name Example usage # Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret "my-cert" oc create ingress simple --rule="foo.com/bar=svc1:8080,tls=my-cert" # Create a catch all ingress of "/path" pointing to service svc:port and Ingress Class as "otheringress" oc create ingress catch-all --class=otheringress --rule="/path=svc:port" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule="foo.com/bar=svc:port" \ --annotation ingress.annotation1=foo \ --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default \ --rule="foo.com/=svc:port" \ --rule="foo.com/admin/=svcadmin:portadmin" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default \ --rule="foo.com/path*=svc:8080" \ --rule="bar.com/admin*=svc2:http" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default \ --rule="foo.com/=svc:https,tls" \ --rule="foo.com/path/subpath*=othersvc:8080" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default \ --rule="foo.com/*=svc:8080,tls=secret1" # Create an ingress with a default backend oc create ingress ingdefault --class=default \ --default-backend=defaultsvc:http \ --rule="foo.com/*=svc:8080,tls=secret1" 2.7.1.49. oc create job Create a job with the specified name Example usage # Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named "a-cronjob" oc create job test-job --from=cronjob/a-cronjob 2.7.1.50. oc create namespace Create a namespace with the specified name Example usage # Create a new namespace named my-namespace oc create namespace my-namespace 2.7.1.51. oc create poddisruptionbudget Create a pod disruption budget with the specified name Example usage # Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50% 2.7.1.52. oc create priorityclass Create a priority class with the specified name Example usage # Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description="high priority" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description="default priority" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description="high priority" --preemption-policy="Never" 2.7.1.53. 
oc create quota Create a quota with the specified name Example usage # Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort 2.7.1.54. oc create role Create a role with single rule Example usage # Create a role named "pod-reader" that allows user to perform "get", "watch" and "list" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named "pod-reader" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named "foo" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named "foo" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status 2.7.1.55. oc create rolebinding Create a role binding for a particular role or cluster role Example usage # Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev 2.7.1.56. oc create route edge Create a route that uses edge TLS termination Example usage # Create an edge route named "my-route" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets 2.7.1.57. oc create route passthrough Create a route that uses passthrough TLS termination Example usage # Create a passthrough route named "my-route" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com 2.7.1.58. oc create route reencrypt Create a route that uses reencrypt TLS termination Example usage # Create a route named "my-route" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend 2.7.1.59. oc create secret docker-registry Create a secret for use with a Docker registry Example usage # If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json 2.7.1.60. 
oc create secret generic Create a secret from a local file, directory, or literal value Example usage # Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env 2.7.1.61. oc create secret tls Create a TLS secret Example usage # Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key 2.7.1.62. oc create service clusterip Create a ClusterIP service Example usage # Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip="None" 2.7.1.63. oc create service externalname Create an ExternalName service Example usage # Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com 2.7.1.64. oc create service loadbalancer Create a LoadBalancer service Example usage # Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080 2.7.1.65. oc create service nodeport Create a NodePort service Example usage # Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080 2.7.1.66. oc create serviceaccount Create a service account with the specified name Example usage # Create a new service account named my-service-account oc create serviceaccount my-service-account 2.7.1.67. oc create token Request a service account token Example usage # Request a token to authenticate to the kube-apiserver as the service account "myapp" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc 2.7.1.68. oc create user Manually create a user (only needed if automatic creation is disabled) Example usage # Create a user with the username "ajones" and the display name "Adam Jones" oc create user ajones --full-name="Adam Jones" 2.7.1.69. oc create useridentitymapping Manually map an identity to a user Example usage # Map the identity "acme_ldap:adamjones" to the user "ajones" oc create useridentitymapping acme_ldap:adamjones ajones 2.7.1.70. 
oc debug Launch a new instance of a pod for debugging Example usage # Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows Node # Note: the chosen image must match the Windows Server version (2019, 2022) of the Node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns 2.7.1.71. oc delete Delete resources by file names, stdin, resources and names, or by resources and label selector Example usage # Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names "baz" and "foo" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all 2.7.1.72. oc describe Show details of a specific resource or group of resources Example usage # Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in "pod.json" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend 2.7.1.73. oc diff Diff the live version against a would-be applied version Example usage # Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f - 2.7.1.74. oc edit Edit a resource on the server Example usage # Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR="nano" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status' 2.7.1.75. 
oc events List events Example usage # List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal 2.7.1.76. oc exec Execute a command in a container Example usage # Get output from running the 'date' command from pod mypod, using the first container by default oc exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not "ls -t /usr") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date 2.7.1.77. oc explain Get documentation for a resource Example usage # Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2 2.7.1.78. oc expose Expose a replicated application as a service or route Example usage # Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx 2.7.1.79. oc extract Extract secrets or config maps to disk Example usage # Extract the secret "test" to the current directory oc extract secret/test # Extract the config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map "nginx" to STDOUT oc extract configmap/nginx --to=- # Extract only the key "nginx.conf" from config map "nginx" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf 2.7.1.80. 
oc get Display one or many resources Example usage # List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the "v1" version of the "apps" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in "pod.yaml" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status 2.7.1.81. oc get-token Experimental: Get token from external OIDC issuer as credentials exec plugin Example usage # Starts an auth code flow to the issuer url with the client id and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an auth code flow to the issuer url with a different callback address. oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343 2.7.1.82. oc idle Idle scalable resources Example usage # Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt
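# Idle the scalable controllers associated with a single service by name # (the service name 'frontend' is an illustrative assumption) USD oc idle frontend 2.7.1.83.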
oc image append Add layers to images and push them to a registry Example usage # Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{"Entrypoint":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz 2.7.1.84. oc image extract Copy files from an image to the file system Example usage # Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. 
# Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:] 2.7.1.85. oc image info Display information about an image Example usage # Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64 2.7.1.86. oc image mirror Mirror images from one repository to another Note The following example contains some values that are specific to OpenShift Container Platform on AWS. 
Example usage # Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable \ docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all # exist. You must use a registry with sparse registry support enabled. oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test \ --filter-by-os=linux/386 \ --keep-manifest-list=true 2.7.1.87. oc import-image Import images from a container image registry Example usage # Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm 2.7.1.88. 
oc kustomize Build a kustomization target from a directory or URL Example usage # Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6 2.7.1.89. oc label Update the labels on a resource Example usage # Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in "pod.json" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar- 2.7.1.90. oc login Log in to a server Example usage # Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280 2.7.1.91. oc logout End the current server session Example usage # Log out oc logout 2.7.1.92. oc logs Print the logs for a container in a pod Example usage # Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container 2.7.1.93. oc new-app Create a new application Example usage # List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. 
Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match "ruby" oc new-app --search ruby # Search for "ruby", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for "ruby" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml 2.7.1.94. oc new-build Create a new build configuration Example usage # Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . --image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp 2.7.1.95. 
oc new-project Request a new project Example usage # Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name="Web Team Development" --description="Development project for the web team." 2.7.1.96. oc observe Observe changes to resources and react to them (experimental) Example usage # Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh 2.7.1.97. oc patch Update fields of a resource Example usage # Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{"spec":{"unschedulable":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\n unschedulable: true' # Partially update a node identified by the type and name specified in "node.json" using strategic merge patch oc patch -f node.json -p '{"spec":{"unschedulable":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{"spec":{"containers":[{"name":"kubernetes-serve-hostname","image":"new image"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{"op": "replace", "path": "/spec/containers/0/image", "value":"new image"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{"spec":{"replicas":2}}' 2.7.1.98. oc plugin list List all visible plugin executables on a user's PATH Example usage # List all available plugins oc plugin list 2.7.1.99. oc policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z serviceaccount1 2.7.1.100. oc policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml 2.7.1.101. 
oc policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml 2.7.1.102. oc port-forward Forward one or more local ports to a pod Example usage # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named "https" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000 2.7.1.103. oc process Process a template into list of resources Example usage # Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f - 2.7.1.104. oc project Switch to another project Example usage # Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project 2.7.1.105. oc projects Display existing projects Example usage # List all projects oc projects 2.7.1.106. oc proxy Run a proxy to the Kubernetes API server Example usage # To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. 
the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api 2.7.1.107. oc registry login Log in to the integrated registry Example usage # Log in to the integrated registry oc registry login # Log in to a different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS 2.7.1.108. oc replace Replace a resource by file name or stdin Example usage # Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\(image: myimage\):.*USD/\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json 2.7.1.109. oc rollback Revert part of an application back to a previous deployment Example usage # Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json 2.7.1.110. oc rollout cancel Cancel the in-progress deployment Example usage # Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx 2.7.1.111. oc rollout history View rollout history Example usage # View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3 2.7.1.112. oc rollout latest Start a new rollout for a deployment config with the latest state from its triggers Example usage # Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json 2.7.1.113. oc rollout pause Mark the provided resource as paused Example usage # Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx 2.7.1.114. oc rollout restart Restart a resource Example usage # Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx 2.7.1.115. oc rollout resume Resume a paused resource Example usage # Resume an already paused deployment oc rollout resume dc/nginx 2.7.1.116. oc rollout retry Retry the latest failed rollout Example usage # Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend 2.7.1.117. oc rollout status Show the status of the rollout Example usage # Watch the status of the latest rollout oc rollout status dc/nginx 2.7.1.118. oc rollout undo Undo a rollout Example usage # Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3 2.7.1.119.
oc rsh Start a shell session in a container Example usage # Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled 2.7.1.120. oc rsync Copy files between a local file system and a pod Example usage # Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir 2.7.1.121. oc run Run a particular image on the cluster Example usage # Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables "DNS_DOMAIN=cluster" and "POD_NAMESPACE=default" in the container oc run hazelcast --image=hazelcast/hazelcast --env="DNS_DOMAIN=cluster" --env="POD_NAMESPACE=default" # Start a hazelcast pod and set labels "app=hazelcast" and "env=prod" in the container oc run hazelcast --image=hazelcast/hazelcast --labels="app=hazelcast,env=prod" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ "apiVersion": "v1", "spec": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... <argN> 2.7.1.122. oc scale Set a new size for a deployment, replica set, or replication controller Example usage # Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in "foo.yaml" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web 2.7.1.123. oc secrets link Link secrets to a service account Example usage # Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount 2.7.1.124. oc secrets unlink Detach secrets from a service account Example usage # Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name ... 2.7.1.125. 
oc set build-hook Update a build hook on a build config Example usage # Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script="/var/lib/test-image.sh param1 param2 && /var/lib/done.sh" 2.7.1.126. oc set build-secret Update a build secret on a build config Example usage # Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret 2.7.1.127. oc set data Update the data within a config map or secret Example usage # Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir 2.7.1.128. oc set deployment-hook Update a deployment hook on a deployment config Example usage # Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh 2.7.1.129. oc set env Update environment variables on a pod template Example usage # Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers="c1" ENV- # Remove the environment variable ENV from a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp 2.7.1.130. oc set image Update the image of a pod template Example usage # Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. 
oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml 2.7.1.131. oc set image-lookup Change how images are resolved when deploying applications Example usage # Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all 2.7.1.132. oc set probe Update a probe on a pod template Example usage # Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30 2.7.1.133. oc set resources Update resource requests/limits on objects with pod templates Example usage # Set a deployment's nginx container CPU limits to "200m" and memory to "512Mi" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml 2.7.1.134.
oc set route-backends Update the backends for a route Example usage # Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the weight to all backends to zero oc set route-backends web --zero 2.7.1.135. oc set selector Set the selector on a resource Example usage # Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip="None" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f - 2.7.1.136. oc set serviceaccount Update the service account of a resource Example usage # Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml 2.7.1.137. oc set subject Update the user, group, or service account in a role binding or cluster role binding Example usage # Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml 2.7.1.138. oc set triggers Update the triggers on one or more objects Example usage # Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main 2.7.1.139. 
oc set volumes Update volumes on a pod template Example usage # List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount "v1" from container "c1" # (and by removing the volume "v1" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) oc set volume dc/myapp --add -m /data --source=<json-string> 2.7.1.140. oc start-build Start a new build Example usage # Start a build from the build config "hello-world" oc start-build hello-world # Start a build from the build "hello-world-1" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config "hello-world" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config "hello-world" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait 2.7.1.141. oc status Show an overview of the current project Example usage # See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest 2.7.1.142. oc tag Tag existing images into image streams Example usage # Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby' with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d 2.7.1.143.
oc version Print the client and server version information Example usage # Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in json format oc version --output json # Print the OpenShift client version information for the current context oc version --client 2.7.1.144. oc wait Experimental: Wait for a specific condition on one or many resources Example usage # Wait for the pod "busybox1" to contain the status condition of type "Ready" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod "busybox1" to contain the status phase to be "Running" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the service "loadbalancer" to have ingress. oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod "busybox1" to be deleted, with a timeout of 60s, after having issued the "delete" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s 2.7.1.145. oc whoami Return information about the current session Example usage # Display the currently authenticated user oc whoami 2.7.2. Additional resources OpenShift CLI administrator command reference 2.8. OpenShift CLI administrator command reference This reference provides descriptions and example commands for OpenShift CLI ( oc ) administrator commands. You must have cluster-admin or equivalent permissions to use these commands. For developer commands, see the OpenShift CLI developer command reference . Run oc adm -h to list all administrator commands or run oc <command> --help to get additional details for a specific command. 2.8.1. OpenShift CLI (oc) administrator commands 2.8.1.1. oc adm build-chain Output the inputs and dependencies of your builds Example usage # Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all 2.8.1.2. 
oc adm catalog mirror Mirror an operator-registry catalog Example usage # Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with "oc image mirror" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true 2.8.1.3. oc adm certificate approve Approve a certificate signing request Example usage # Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp 2.8.1.4. oc adm certificate deny Deny a certificate signing request Example usage # Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp 2.8.1.5. oc adm copy-to-node Copies specified files to the node. 2.8.1.6. oc adm cordon Mark node as unschedulable Example usage # Mark node "foo" as unschedulable oc adm cordon foo 2.8.1.7. oc adm create-bootstrap-project-template Create a bootstrap project template Example usage # Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml 2.8.1.8. oc adm create-error-template Create an error page template Example usage # Output a template for the error page to stdout oc adm create-error-template 2.8.1.9. oc adm create-login-template Create a login template Example usage # Output a template for the login page to stdout oc adm create-login-template 2.8.1.10. oc adm create-provider-selection-template Create a provider selection template Example usage # Output a template for the provider selection page to stdout oc adm create-provider-selection-template 2.8.1.11. oc adm drain Drain node in preparation for maintenance Example usage # Drain node "foo", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900 2.8.1.12. oc adm groups add-users Add users to a group Example usage # Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2 2.8.1.13. oc adm groups new Create a new group Example usage # Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name 2.8.1.14. 
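A hedged sketch for the oc adm copy-to-node entry above (2.8.1.5), which lists no example usage. It assumes the command accepts one or more --copy=<local-file>=<path-on-node> pairs followed by a node reference; the flag form, file name, and node name are illustrative assumptions, not confirmed by this reference.
# Copy a locally staged bootstrap kubeconfig to node-0 (assumed flag form; names are illustrative)
oc adm copy-to-node --copy=new-bootstrap-kubeconfig=/etc/kubernetes/kubeconfig node/node-0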
oc adm groups prune Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.15. oc adm groups remove-users Remove users from a group Example usage # Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2 2.8.1.16. oc adm groups sync Sync OpenShift groups with records from an external provider Example usage # Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm 2.8.1.17. oc adm inspect Collect debugging data for a given resource Example usage # Collect debugging data for the "openshift-apiserver" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the "openshift-apiserver" and "kube-apiserver" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions 2.8.1.18. oc adm migrate icsp Update imagecontentsourcepolicy file(s) to imagedigestmirrorset file(s) Example usage # Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir 2.8.1.19. oc adm migrate template-instances Update template instances to point to the latest group-version-kinds Example usage # Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm 2.8.1.20. 
oc adm must-gather Launch a new instance of a pod for gathering debug information Example usage # Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh 2.8.1.21. oc adm new-project Create a new project Example usage # Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east' 2.8.1.22. oc adm node-logs Display and filter node logs Example usage # Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron 2.8.1.23. oc adm ocp-certificates monitor-certificates Watch platform certificates. Example usage # Watch platform certificates. oc adm ocp-certificates monitor-certificates 2.8.1.24. oc adm ocp-certificates regenerate-leaf Regenerate client and serving certificates of an OpenShift cluster 2.8.1.25. oc adm ocp-certificates regenerate-machine-config-server-serving-cert Regenerate the machine config operator certificates in an OpenShift cluster 2.8.1.26. oc adm ocp-certificates regenerate-top-level Regenerate the top level certificates in an OpenShift cluster 2.8.1.27. oc adm ocp-certificates remove-old-trust Remove old CAs from ConfigMaps representing platform trust bundles in an OpenShift cluster Example usage # Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z 2.8.1.28. oc adm ocp-certificates update-ignition-ca-bundle-for-machine-config-server Update user-data secrets in an OpenShift cluster to use updated MCO certs Example usage # Regenerate the MCO certs without modifying user-data secrets oc adm certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm certificates update-ignition-ca-bundle-for-machine-config-server 2.8.1.29. oc adm pod-network isolate-projects Isolate project network Example usage # Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret' 2.8.1.30. oc adm pod-network join-projects Join project network Example usage # Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret' 2.8.1.31.
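Hedged sketches for the oc adm ocp-certificates regenerate-leaf (2.8.1.24) and regenerate-top-level (2.8.1.26) entries above, which list no example usage. Both sketches assume the commands act on a namespaced secret reference; the namespaces and secret names shown are illustrative assumptions, not confirmed by this reference. For regenerate-machine-config-server-serving-cert (2.8.1.25), see the example under 2.8.1.28.
# Regenerate a leaf (client or serving) certificate held in a particular secret (assumed invocation; names are illustrative)
oc adm ocp-certificates regenerate-leaf -n openshift-config-managed secret/kube-controller-manager-client-cert-key
# Regenerate a top-level signing certificate held in a particular secret (assumed invocation; names are illustrative)
oc adm ocp-certificates regenerate-top-level -n openshift-kube-apiserver-operator secret/loadbalancer-serving-signer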
oc adm pod-network make-projects-global Make project network global Example usage # Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share' 2.8.1.32. oc adm policy add-role-to-user Add a role to users or service accounts for the current project Example usage # Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1 2.8.1.33. oc adm policy add-scc-to-group Add a security context constraint to groups Example usage # Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2 2.8.1.34. oc adm policy add-scc-to-user Add a security context constraint to users or a service account Example usage # Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1 2.8.1.35. oc adm policy scc-review Check which service account can create a pod Example usage # Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml 2.8.1.36. oc adm policy scc-subject-review Check whether a user or a service account can create a pod Example usage # Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml 2.8.1.37. oc adm prune builds Remove old completed and failed builds Example usage # Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm 2.8.1.38. oc adm prune deployments Remove old completed and failed deployment configs Example usage # Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments --keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm 2.8.1.39. 
oc adm prune groups Remove old OpenShift groups referencing missing records from an external provider Example usage # Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm 2.8.1.40. oc adm prune images Remove unreferenced images Example usage # See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm 2.8.1.41. oc adm reboot-machine-config-pool Initiate reboot of the specified MachineConfigPool. Example usage # Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This includes all custom MachineConfigPools and infra. oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master 2.8.1.42. oc adm release extract Extract the contents of an update payload to disk Note The following example contains some values that are specific to OpenShift Container Platform on AWS. Example usage # Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.43.
oc adm release info Display information about a release Example usage # Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x 2.8.1.44. oc adm release mirror Mirror a release to a different image registry location Example usage # Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release \ --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release \ --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 \ --to=registry.example.com/your/repository --apply-release-image-signature 2.8.1.45. oc adm release new Create a new OpenShift release Example usage # Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 \ -- 4.11.0 --metadata ... --to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 \ cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 2.8.1.46. oc adm restart-kubelet Restarts kubelet on the specified nodes Example usage # Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig 2.8.1.47. 
oc adm taint Update the taints on one or more nodes Example usage # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule 2.8.1.48. oc adm top images Show usage statistics for images Example usage # Show usage statistics for images oc adm top images 2.8.1.49. oc adm top imagestreams Show usage statistics for image streams Example usage # Show usage statistics for image streams oc adm top imagestreams 2.8.1.50. oc adm top node Display resource (CPU/memory) usage of nodes Example usage # Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME 2.8.1.51. oc adm top pod Display resource (CPU/memory) usage of pods Example usage # Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel 2.8.1.52. oc adm uncordon Mark node as schedulable Example usage # Mark node "foo" as schedulable oc adm uncordon foo 2.8.1.53. oc adm upgrade Upgrade a cluster or adjust the upgrade channel Example usage # View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true 2.8.1.54. oc adm verify-image-signature Verify the image identity contained in the image signature Example usage # Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 \ --expected-identity=registry.local:5000/foo/bar:v1 \ --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all 2.8.1.55. 
oc adm wait-for-node-reboot Wait for nodes to reboot after running oc adm reboot-machine-config-pool Example usage # Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4 2.8.1.56. oc adm wait-for-stable-cluster Wait for the platform operators to become stable Example usage # Wait for all clusteroperators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m 2.8.2. Additional resources OpenShift CLI developer command reference
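One possible way to chain the reboot and wait commands documented above (2.8.1.41, 2.8.1.55, and 2.8.1.56); this is a sketch built only from invocations already shown in this reference, not a prescribed procedure.
# Request a reboot of the worker pool, wait for its nodes to finish rebooting, then wait for the cluster operators to settle
oc adm reboot-machine-config-pool mcp/worker
oc adm wait-for-node-reboot nodes --all
oc adm wait-for-stable-cluster --minimum-stable-period 5m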
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*OpenShift*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"rhocp-4.15-for-rhel-8-x86_64-rpms\"", "yum install openshift-clients", "oc <command>", "brew install openshift-cli", "oc <command>", "oc login -u user1", "Server [https://localhost:8443]: https://openshift.example.com:6443 1 The server uses a certificate signed by an unknown authority. You can bypass the certificate check, but any data you send to the server could be intercepted by others. Use insecure connections? (y/n): y 2 Authentication required for https://openshift.example.com:6443 (openshift) Username: user1 Password: 3 Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname> Welcome! See 'oc help' to get started.", "oc login <cluster_url> --web 1", "Opening login URL in the default browser: https://openshift.example.com Opening in existing browser session.", "Login successful. You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc new-project my-project", "Now using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc new-app https://github.com/sclorg/cakephp-ex", "--> Found image 40de956 (9 days old) in imagestream \"openshift/php\" under tag \"7.2\" for \"php\" Run 'oc status' to view your app.", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE cakephp-ex-1-build 0/1 Completed 0 5m45s 10.131.0.10 ip-10-0-141-74.ec2.internal <none> cakephp-ex-1-deploy 0/1 Completed 0 3m44s 10.129.2.9 ip-10-0-147-65.ec2.internal <none> cakephp-ex-1-ktz97 1/1 Running 0 3m33s 10.128.2.11 ip-10-0-168-105.ec2.internal <none>", "oc logs cakephp-ex-1-deploy", "--> Scaling cakephp-ex-1 to 1 --> Success", "oc project", "Using project \"my-project\" on server \"https://openshift.example.com:6443\".", "oc status", "In project my-project on server https://openshift.example.com:6443 svc/cakephp-ex - 172.30.236.80 ports 8080, 8443 dc/cakephp-ex deploys istag/cakephp-ex:latest <- bc/cakephp-ex source builds https://github.com/sclorg/cakephp-ex on openshift/php:7.2 deployment #1 deployed 2 minutes ago - 1 pod 3 infos identified, use 'oc status --suggest' to see details.", "oc api-resources", "NAME SHORTNAMES APIGROUP NAMESPACED KIND bindings true Binding componentstatuses cs false ComponentStatus configmaps cm true ConfigMap", "oc help", "OpenShift Client This client helps you develop, build, deploy, and run your applications on any OpenShift or Kubernetes compatible platform. It also includes the administrative commands for managing a cluster under the 'adm' subcommand. Usage: oc [flags] Basic Commands: login Log in to a server new-project Request a new project new-app Create a new application", "oc create --help", "Create a resource by filename or stdin JSON and YAML formats are accepted. Usage: oc create -f FILENAME [flags]", "oc explain pods", "KIND: Pod VERSION: v1 DESCRIPTION: Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. 
FIELDS: apiVersion <string> APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#resources", "oc logout", "Logged \"user1\" out on \"https://openshift.example.com\"", "oc completion bash > oc_bash_completion", "sudo cp oc_bash_completion /etc/bash_completion.d/", "cat >>~/.zshrc<<EOF autoload -Uz compinit compinit if [ USDcommands[oc] ]; then source <(oc completion zsh) compdef _oc oc fi EOF", "apiVersion: v1 clusters: 1 - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com:8443 name: openshift1.example.com:8443 - cluster: insecure-skip-tls-verify: true server: https://openshift2.example.com:8443 name: openshift2.example.com:8443 contexts: 2 - context: cluster: openshift1.example.com:8443 namespace: alice-project user: alice/openshift1.example.com:8443 name: alice-project/openshift1.example.com:8443/alice - context: cluster: openshift1.example.com:8443 namespace: joe-project user: alice/openshift1.example.com:8443 name: joe-project/openshift1/alice current-context: joe-project/openshift1.example.com:8443/alice 3 kind: Config preferences: {} users: 4 - name: alice/openshift1.example.com:8443 user: token: xZHd2piv5_9vQrg-SKXRJ2Dsl9SceNJdhNTljEKTb8k", "oc status", "status In project Joe's Project (joe-project) service database (172.30.43.12:5434 -> 3306) database deploys docker.io/openshift/mysql-55-centos7:latest #1 deployed 25 minutes ago - 1 pod service frontend (172.30.159.137:5432 -> 8080) frontend deploys origin-ruby-sample:latest <- builds https://github.com/openshift/ruby-hello-world with joe-project/ruby-20-centos7:latest #1 deployed 22 minutes ago - 2 pods To see more information about a service or deployment, use 'oc describe service <name>' or 'oc describe dc <name>'. 
You can use 'oc get all' to see lists of each of the types described in this example.", "oc project", "Using project \"joe-project\" from context named \"joe-project/openshift1.example.com:8443/alice\" on server \"https://openshift1.example.com:8443\".", "oc project alice-project", "Now using project \"alice-project\" on server \"https://openshift1.example.com:8443\".", "oc login -u system:admin -n default", "oc config set-cluster <cluster_nickname> [--server=<master_ip_or_fqdn>] [--certificate-authority=<path/to/certificate/authority>] [--api-version=<apiversion>] [--insecure-skip-tls-verify=true]", "oc config set-context <context_nickname> [--cluster=<cluster_nickname>] [--user=<user_nickname>] [--namespace=<namespace>]", "oc config use-context <context_nickname>", "oc config set <property_name> <property_value>", "oc config unset <property_name>", "oc config view", "oc config view --config=<specific_filename>", "oc login https://openshift1.example.com --token=ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config view", "apiVersion: v1 clusters: - cluster: insecure-skip-tls-verify: true server: https://openshift1.example.com name: openshift1-example-com contexts: - context: cluster: openshift1-example-com namespace: default user: alice/openshift1-example-com name: default/openshift1-example-com/alice current-context: default/openshift1-example-com/alice kind: Config preferences: {} users: - name: alice/openshift1.example.com user: token: ns7yVhuRNpDM9cgzfhhxQ7bM5s7N2ZVrkZepSRf4LC0", "oc config set-context `oc config current-context` --namespace=<project_name>", "oc whoami -c", "#!/bin/bash optional argument handling if [[ \"USD1\" == \"version\" ]] then echo \"1.0.0\" exit 0 fi optional argument handling if [[ \"USD1\" == \"config\" ]] then echo USDKUBECONFIG exit 0 fi echo \"I am a plugin named kubectl-foo\"", "chmod +x <plugin_file>", "sudo mv <plugin_file> /usr/local/bin/.", "oc plugin list", "The following compatible plugins are available: /usr/local/bin/<plugin_file>", "oc ns", "oc krew search", "oc krew info <plugin_name>", "oc krew install <plugin_name>", "oc krew list", "oc krew upgrade <plugin_name>", "oc krew upgrade", "oc krew uninstall <plugin_name>", "Update pod 'foo' with the annotation 'description' and the value 'my frontend' # If the same annotation is set multiple times, only the last value will be applied oc annotate pods foo description='my frontend' # Update a pod identified by type and name in \"pod.json\" oc annotate -f pod.json description='my frontend' # Update pod 'foo' with the annotation 'description' and the value 'my frontend running nginx', overwriting any existing value oc annotate --overwrite pods foo description='my frontend running nginx' # Update all pods in the namespace oc annotate pods --all description='my frontend running nginx' # Update pod 'foo' only if the resource is unchanged from version 1 oc annotate pods foo description='my frontend running nginx' --resource-version=1 # Update pod 'foo' by removing an annotation named 'description' if it exists # Does not require the --overwrite flag oc annotate pods foo description-", "Print the supported API resources oc api-resources # Print the supported API resources with more information oc api-resources -o wide # Print the supported API resources sorted by a column oc api-resources --sort-by=name # Print the supported namespaced resources oc api-resources --namespaced=true # Print the supported non-namespaced resources oc api-resources --namespaced=false # Print the supported API resources with a 
specific APIGroup oc api-resources --api-group=rbac.authorization.k8s.io", "Print the supported API versions oc api-versions", "Apply the configuration in pod.json to a pod oc apply -f ./pod.json # Apply resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc apply -k dir/ # Apply the JSON passed into stdin to a pod cat pod.json | oc apply -f - # Apply the configuration from all files that end with '.json' oc apply -f '*.json' # Note: --prune is still in Alpha # Apply the configuration in manifest.yaml that matches label app=nginx and delete all other resources that are not in the file and match label app=nginx oc apply --prune -f manifest.yaml -l app=nginx # Apply the configuration in manifest.yaml and delete all the other config maps that are not in the file oc apply --prune -f manifest.yaml --all --prune-allowlist=core/v1/ConfigMap", "Edit the last-applied-configuration annotations by type/name in YAML oc apply edit-last-applied deployment/nginx # Edit the last-applied-configuration annotations by file in JSON oc apply edit-last-applied -f deploy.yaml -o json", "Set the last-applied-configuration of a resource to match the contents of a file oc apply set-last-applied -f deploy.yaml # Execute set-last-applied against each configuration file in a directory oc apply set-last-applied -f path/ # Set the last-applied-configuration of a resource to match the contents of a file; will create the annotation if it does not already exist oc apply set-last-applied -f deploy.yaml --create-annotation=true", "View the last-applied-configuration annotations by type/name in YAML oc apply view-last-applied deployment/nginx # View the last-applied-configuration annotations by file in JSON oc apply view-last-applied -f deploy.yaml -o json", "Get output from running pod mypod; use the 'oc.kubernetes.io/default-container' annotation # for selecting the container to be attached or the first container in the pod will be chosen oc attach mypod # Get output from ruby-container from pod mypod oc attach mypod -c ruby-container # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc attach mypod -c ruby-container -i -t # Get output from the first pod of a replica set named nginx oc attach rs/nginx", "Check to see if I can create pods in any namespace oc auth can-i create pods --all-namespaces # Check to see if I can list deployments in my current namespace oc auth can-i list deployments.apps # Check to see if service account \"foo\" of namespace \"dev\" can list pods # in the namespace \"prod\". # You must be allowed to use impersonation for the global option \"--as\". oc auth can-i list pods --as=system:serviceaccount:dev:foo -n prod # Check to see if I can do everything in my current namespace (\"*\" means all) oc auth can-i '*' '*' # Check to see if I can get the job named \"bar\" in namespace \"foo\" oc auth can-i list jobs.batch/bar -n foo # Check to see if I can read pod logs oc auth can-i get pods --subresource=log # Check to see if I can access the URL /logs/ oc auth can-i get /logs/ # List all allowed actions in namespace \"foo\" oc auth can-i --list --namespace=foo", "Reconcile RBAC resources from a file oc auth reconcile -f my-rbac-rules.yaml", "Get your subject attributes. oc auth whoami # Get your subject attributes in JSON format. 
oc auth whoami -o json", "Auto scale a deployment \"foo\", with the number of pods between 2 and 10, no target CPU utilization specified so a default autoscaling policy will be used oc autoscale deployment foo --min=2 --max=10 # Auto scale a replication controller \"foo\", with the number of pods between 1 and 5, target CPU utilization at 80% oc autoscale rc foo --max=5 --cpu-percent=80", "Cancel the build with the given name oc cancel-build ruby-build-2 # Cancel the named build and print the build logs oc cancel-build ruby-build-2 --dump-logs # Cancel the named build and create a new one with the same parameters oc cancel-build ruby-build-2 --restart # Cancel multiple builds oc cancel-build ruby-build-1 ruby-build-2 ruby-build-3 # Cancel all builds created from the 'ruby-build' build config that are in the 'new' state oc cancel-build bc/ruby-build --state=new", "Print the address of the control plane and cluster services oc cluster-info", "Dump current cluster state to stdout oc cluster-info dump # Dump current cluster state to /path/to/cluster-state oc cluster-info dump --output-directory=/path/to/cluster-state # Dump all namespaces to stdout oc cluster-info dump --all-namespaces # Dump a set of namespaces to /path/to/cluster-state oc cluster-info dump --namespaces default,kube-system --output-directory=/path/to/cluster-state", "Installing bash completion on macOS using homebrew ## If running Bash 3.2 included with macOS brew install bash-completion ## or, if running Bash 4.1+ brew install bash-completion@2 ## If oc is installed via homebrew, this should start working immediately ## If you've installed via other means, you may need add the completion to your completion directory oc completion bash > USD(brew --prefix)/etc/bash_completion.d/oc # Installing bash completion on Linux ## If bash-completion is not installed on Linux, install the 'bash-completion' package ## via your distribution's package manager. 
## Load the oc completion code for bash into the current shell source <(oc completion bash) ## Write bash completion code to a file and source it from .bash_profile oc completion bash > ~/.kube/completion.bash.inc printf \" # oc shell completion source 'USDHOME/.kube/completion.bash.inc' \" >> USDHOME/.bash_profile source USDHOME/.bash_profile # Load the oc completion code for zsh[1] into the current shell source <(oc completion zsh) # Set the oc completion code for zsh[1] to autoload on startup oc completion zsh > \"USD{fpath[1]}/_oc\" # Load the oc completion code for fish[2] into the current shell oc completion fish | source # To load completions for each session, execute once: oc completion fish > ~/.config/fish/completions/oc.fish # Load the oc completion code for powershell into the current shell oc completion powershell | Out-String | Invoke-Expression # Set oc completion code for powershell to run on startup ## Save completion code to a script and execute in the profile oc completion powershell > USDHOME\\.kube\\completion.ps1 Add-Content USDPROFILE \"USDHOME\\.kube\\completion.ps1\" ## Execute completion code in the profile Add-Content USDPROFILE \"if (Get-Command oc -ErrorAction SilentlyContinue) { oc completion powershell | Out-String | Invoke-Expression }\" ## Add completion code directly to the USDPROFILE script oc completion powershell >> USDPROFILE", "Display the current-context oc config current-context", "Delete the minikube cluster oc config delete-cluster minikube", "Delete the context for the minikube cluster oc config delete-context minikube", "Delete the minikube user oc config delete-user minikube", "List the clusters that oc knows about oc config get-clusters", "List all the contexts in your kubeconfig file oc config get-contexts # Describe one context in your kubeconfig file oc config get-contexts my-context", "List the users that oc knows about oc config get-users", "Generate a new admin kubeconfig oc config new-admin-kubeconfig", "Generate a new kubelet bootstrap kubeconfig oc config new-kubelet-bootstrap-kubeconfig", "Refresh the CA bundle for the current context's cluster oc config refresh-ca-bundle # Refresh the CA bundle for the cluster named e2e in your kubeconfig oc config refresh-ca-bundle e2e # Print the CA bundle from the current OpenShift cluster's apiserver. 
oc config refresh-ca-bundle --dry-run", "Rename the context 'old-name' to 'new-name' in your kubeconfig file oc config rename-context old-name new-name", "Set the server field on the my-cluster cluster to https://1.2.3.4 oc config set clusters.my-cluster.server https://1.2.3.4 # Set the certificate-authority-data field on the my-cluster cluster oc config set clusters.my-cluster.certificate-authority-data USD(echo \"cert_data_here\" | base64 -i -) # Set the cluster field in the my-context context to my-cluster oc config set contexts.my-context.cluster my-cluster # Set the client-key-data field in the cluster-admin user using --set-raw-bytes option oc config set users.cluster-admin.client-key-data cert_data_here --set-raw-bytes=true", "Set only the server field on the e2e cluster entry without touching other values oc config set-cluster e2e --server=https://1.2.3.4 # Embed certificate authority data for the e2e cluster entry oc config set-cluster e2e --embed-certs --certificate-authority=~/.kube/e2e/kubernetes.ca.crt # Disable cert checking for the e2e cluster entry oc config set-cluster e2e --insecure-skip-tls-verify=true # Set the custom TLS server name to use for validation for the e2e cluster entry oc config set-cluster e2e --tls-server-name=my-cluster-name # Set the proxy URL for the e2e cluster entry oc config set-cluster e2e --proxy-url=https://1.2.3.4", "Set the user field on the gce context entry without touching other values oc config set-context gce --user=cluster-admin", "Set only the \"client-key\" field on the \"cluster-admin\" # entry, without touching other values oc config set-credentials cluster-admin --client-key=~/.kube/admin.key # Set basic auth for the \"cluster-admin\" entry oc config set-credentials cluster-admin --username=admin --password=uXFGweU9l35qcif # Embed client certificate data in the \"cluster-admin\" entry oc config set-credentials cluster-admin --client-certificate=~/.kube/admin.crt --embed-certs=true # Enable the Google Compute Platform auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=gcp # Enable the OpenID Connect auth provider for the \"cluster-admin\" entry with additional arguments oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-id=foo --auth-provider-arg=client-secret=bar # Remove the \"client-secret\" config value for the OpenID Connect auth provider for the \"cluster-admin\" entry oc config set-credentials cluster-admin --auth-provider=oidc --auth-provider-arg=client-secret- # Enable new exec auth plugin for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-command=/path/to/the/executable --exec-api-version=client.authentication.k8s.io/v1beta1 # Define new exec auth plugin arguments for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-arg=arg1 --exec-arg=arg2 # Create or update exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=key1=val1 --exec-env=key2=val2 # Remove exec auth plugin environment variables for the \"cluster-admin\" entry oc config set-credentials cluster-admin --exec-env=var-to-remove-", "Unset the current-context oc config unset current-context # Unset namespace in foo context oc config unset contexts.foo.namespace", "Use the context for the minikube cluster oc config use-context minikube", "Show merged kubeconfig settings oc config view # Show merged kubeconfig settings, raw certificate data, and exposed secrets oc 
config view --raw # Get the password for the e2e user oc config view -o jsonpath='{.users[?(@.name == \"e2e\")].user.password}'", "!!!Important Note!!! # Requires that the 'tar' binary is present in your container # image. If 'tar' is not present, 'oc cp' will fail. # # For advanced use cases, such as symlinks, wildcard expansion or # file mode preservation, consider using 'oc exec'. # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> tar cf - /tmp/foo | oc exec -i -n <some-namespace> <some-pod> -- tar xf - -C /tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc exec -n <some-namespace> <some-pod> -- tar cf - /tmp/foo | tar xf - -C /tmp/bar # Copy /tmp/foo_dir local directory to /tmp/bar_dir in a remote pod in the default namespace oc cp /tmp/foo_dir <some-pod>:/tmp/bar_dir # Copy /tmp/foo local file to /tmp/bar in a remote pod in a specific container oc cp /tmp/foo <some-pod>:/tmp/bar -c <specific-container> # Copy /tmp/foo local file to /tmp/bar in a remote pod in namespace <some-namespace> oc cp /tmp/foo <some-namespace>/<some-pod>:/tmp/bar # Copy /tmp/foo from a remote pod to /tmp/bar locally oc cp <some-namespace>/<some-pod>:/tmp/foo /tmp/bar", "Create a pod using the data in pod.json oc create -f ./pod.json # Create a pod based on the JSON passed into stdin cat pod.json | oc create -f - # Edit the data in registry.yaml in JSON then create the resource using the edited data oc create -f registry.yaml --edit -o json", "Create a new build oc create build myapp", "Create a cluster resource quota limited to 10 pods oc create clusterresourcequota limit-bob --project-annotation-selector=openshift.io/requester=user-bob --hard=pods=10", "Create a cluster role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create clusterrole pod-reader --verb=get,list,watch --resource=pods # Create a cluster role named \"pod-reader\" with ResourceName specified oc create clusterrole pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a cluster role named \"foo\" with API Group specified oc create clusterrole foo --verb=get,list,watch --resource=rs.apps # Create a cluster role named \"foo\" with SubResource specified oc create clusterrole foo --verb=get,list,watch --resource=pods,pods/status # Create a cluster role name \"foo\" with NonResourceURL specified oc create clusterrole \"foo\" --verb=get --non-resource-url=/logs/* # Create a cluster role name \"monitoring\" with AggregationRule specified oc create clusterrole monitoring --aggregation-rule=\"rbac.example.com/aggregate-to-monitoring=true\"", "Create a cluster role binding for user1, user2, and group1 using the cluster-admin cluster role oc create clusterrolebinding cluster-admin --clusterrole=cluster-admin --user=user1 --user=user2 --group=group1", "Create a new config map named my-config based on folder bar oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config with specified keys instead of file basenames on disk oc create configmap my-config --from-file=key1=/path/to/bar/file1.txt --from-file=key2=/path/to/bar/file2.txt # Create a new config map named my-config with key1=config1 and key2=config2 oc create configmap my-config --from-literal=key1=config1 --from-literal=key2=config2 # Create a new config map named my-config from the key=value pairs in the file oc create configmap my-config --from-file=path/to/bar # Create a new config map named my-config from an env file oc 
create configmap my-config --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a cron job oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" # Create a cron job with a command oc create cronjob my-job --image=busybox --schedule=\"*/1 * * * *\" -- date", "Create a deployment named my-dep that runs the busybox image oc create deployment my-dep --image=busybox # Create a deployment with a command oc create deployment my-dep --image=busybox -- date # Create a deployment named my-dep that runs the nginx image with 3 replicas oc create deployment my-dep --image=nginx --replicas=3 # Create a deployment named my-dep that runs the busybox image and expose port 5701 oc create deployment my-dep --image=busybox --port=5701", "Create an nginx deployment config named my-nginx oc create deploymentconfig my-nginx --image=nginx", "Create an identity with identity provider \"acme_ldap\" and the identity provider username \"adamjones\" oc create identity acme_ldap:adamjones", "Create a new image stream oc create imagestream mysql", "Create a new image stream tag based on an image in a remote registry oc create imagestreamtag mysql:latest --from-image=myregistry.local/mysql/mysql:5.0", "Create a single ingress called 'simple' that directs requests to foo.com/bar to svc # svc1:8080 with a TLS secret \"my-cert\" oc create ingress simple --rule=\"foo.com/bar=svc1:8080,tls=my-cert\" # Create a catch all ingress of \"/path\" pointing to service svc:port and Ingress Class as \"otheringress\" oc create ingress catch-all --class=otheringress --rule=\"/path=svc:port\" # Create an ingress with two annotations: ingress.annotation1 and ingress.annotations2 oc create ingress annotated --class=default --rule=\"foo.com/bar=svc:port\" --annotation ingress.annotation1=foo --annotation ingress.annotation2=bla # Create an ingress with the same host and multiple paths oc create ingress multipath --class=default --rule=\"foo.com/=svc:port\" --rule=\"foo.com/admin/=svcadmin:portadmin\" # Create an ingress with multiple hosts and the pathType as Prefix oc create ingress ingress1 --class=default --rule=\"foo.com/path*=svc:8080\" --rule=\"bar.com/admin*=svc2:http\" # Create an ingress with TLS enabled using the default ingress certificate and different path types oc create ingress ingtls --class=default --rule=\"foo.com/=svc:https,tls\" --rule=\"foo.com/path/subpath*=othersvc:8080\" # Create an ingress with TLS enabled using a specific secret and pathType as Prefix oc create ingress ingsecret --class=default --rule=\"foo.com/*=svc:8080,tls=secret1\" # Create an ingress with a default backend oc create ingress ingdefault --class=default --default-backend=defaultsvc:http --rule=\"foo.com/*=svc:8080,tls=secret1\"", "Create a job oc create job my-job --image=busybox # Create a job with a command oc create job my-job --image=busybox -- date # Create a job from a cron job named \"a-cronjob\" oc create job test-job --from=cronjob/a-cronjob", "Create a new namespace named my-namespace oc create namespace my-namespace", "Create a pod disruption budget named my-pdb that will select all pods with the app=rails label # and require at least one of them being available at any point in time oc create poddisruptionbudget my-pdb --selector=app=rails --min-available=1 # Create a pod disruption budget named my-pdb that will select all pods with the app=nginx label # and require at least half of the pods selected to be available at any point in time oc create pdb my-pdb --selector=app=nginx --min-available=50%", 
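"Hedged extra example, not taken from the upstream reference (the web-pdb name and app=web label are illustrative; assumes the kubectl-style --max-unavailable flag supported by oc create poddisruptionbudget) # Cap voluntary disruptions at one pod instead of requiring a minimum available count oc create pdb web-pdb --selector=app=web --max-unavailable=1",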
"Create a priority class named high-priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" # Create a priority class named default-priority that is considered as the global default priority oc create priorityclass default-priority --value=1000 --global-default=true --description=\"default priority\" # Create a priority class named high-priority that cannot preempt pods with lower priority oc create priorityclass high-priority --value=1000 --description=\"high priority\" --preemption-policy=\"Never\"", "Create a new resource quota named my-quota oc create quota my-quota --hard=cpu=1,memory=1G,pods=2,services=3,replicationcontrollers=2,resourcequotas=1,secrets=5,persistentvolumeclaims=10 # Create a new resource quota named best-effort oc create quota best-effort --hard=pods=100 --scopes=BestEffort", "Create a role named \"pod-reader\" that allows user to perform \"get\", \"watch\" and \"list\" on pods oc create role pod-reader --verb=get --verb=list --verb=watch --resource=pods # Create a role named \"pod-reader\" with ResourceName specified oc create role pod-reader --verb=get --resource=pods --resource-name=readablepod --resource-name=anotherpod # Create a role named \"foo\" with API Group specified oc create role foo --verb=get,list,watch --resource=rs.apps # Create a role named \"foo\" with SubResource specified oc create role foo --verb=get,list,watch --resource=pods,pods/status", "Create a role binding for user1, user2, and group1 using the admin cluster role oc create rolebinding admin --clusterrole=admin --user=user1 --user=user2 --group=group1 # Create a role binding for serviceaccount monitoring:sa-dev using the admin role oc create rolebinding admin-binding --role=admin --serviceaccount=monitoring:sa-dev", "Create an edge route named \"my-route\" that exposes the frontend service oc create route edge my-route --service=frontend # Create an edge route that exposes the frontend service and specify a path # If the route name is omitted, the service name will be used oc create route edge --service=frontend --path /assets", "Create a passthrough route named \"my-route\" that exposes the frontend service oc create route passthrough my-route --service=frontend # Create a passthrough route that exposes the frontend service and specify # a host name. 
If the route name is omitted, the service name will be used oc create route passthrough --service=frontend --hostname=www.example.com", "Create a route named \"my-route\" that exposes the frontend service oc create route reencrypt my-route --service=frontend --dest-ca-cert cert.cert # Create a reencrypt route that exposes the frontend service, letting the # route name default to the service name and the destination CA certificate # default to the service CA oc create route reencrypt --service=frontend", "If you do not already have a .dockercfg file, create a dockercfg secret directly oc create secret docker-registry my-secret --docker-server=DOCKER_REGISTRY_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL # Create a new secret named my-secret from ~/.docker/config.json oc create secret docker-registry my-secret --from-file=.dockerconfigjson=path/to/.docker/config.json", "Create a new secret named my-secret with keys for each file in folder bar oc create secret generic my-secret --from-file=path/to/bar # Create a new secret named my-secret with specified keys instead of names on disk oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-file=ssh-publickey=path/to/id_rsa.pub # Create a new secret named my-secret with key1=supersecret and key2=topsecret oc create secret generic my-secret --from-literal=key1=supersecret --from-literal=key2=topsecret # Create a new secret named my-secret using a combination of a file and a literal oc create secret generic my-secret --from-file=ssh-privatekey=path/to/id_rsa --from-literal=passphrase=topsecret # Create a new secret named my-secret from env files oc create secret generic my-secret --from-env-file=path/to/foo.env --from-env-file=path/to/bar.env", "Create a new TLS secret named tls-secret with the given key pair oc create secret tls tls-secret --cert=path/to/tls.cert --key=path/to/tls.key", "Create a new ClusterIP service named my-cs oc create service clusterip my-cs --tcp=5678:8080 # Create a new ClusterIP service named my-cs (in headless mode) oc create service clusterip my-cs --clusterip=\"None\"", "Create a new ExternalName service named my-ns oc create service externalname my-ns --external-name bar.com", "Create a new LoadBalancer service named my-lbs oc create service loadbalancer my-lbs --tcp=5678:8080", "Create a new NodePort service named my-ns oc create service nodeport my-ns --tcp=5678:8080", "Create a new service account named my-service-account oc create serviceaccount my-service-account", "Request a token to authenticate to the kube-apiserver as the service account \"myapp\" in the current namespace oc create token myapp # Request a token for a service account in a custom namespace oc create token myapp --namespace myns # Request a token with a custom expiration oc create token myapp --duration 10m # Request a token with a custom audience oc create token myapp --audience https://example.com # Request a token bound to an instance of a Secret object oc create token myapp --bound-object-kind Secret --bound-object-name mysecret # Request a token bound to an instance of a Secret object with a specific UID oc create token myapp --bound-object-kind Secret --bound-object-name mysecret --bound-object-uid 0d4691ed-659b-4935-a832-355f77ee47cc", "Create a user with the username \"ajones\" and the display name \"Adam Jones\" oc create user ajones --full-name=\"Adam Jones\"", "Map the identity \"acme_ldap:adamjones\" to the user \"ajones\" oc create useridentitymapping 
acme_ldap:adamjones ajones", "Start a shell session into a pod using the OpenShift tools image oc debug # Debug a currently running deployment by creating a new pod oc debug deploy/test # Debug a node as an administrator oc debug node/master-1 # Debug a Windows Node # Note: the chosen image must match the Windows Server version (2019, 2022) of the Node oc debug node/win-worker-1 --image=mcr.microsoft.com/powershell:lts-nanoserver-ltsc2022 # Launch a shell in a pod using the provided image stream tag oc debug istag/mysql:latest -n openshift # Test running a job as a non-root user oc debug job/test --as-user=1000000 # Debug a specific failing container by running the env command in the 'second' container oc debug daemonset/test -c second -- /bin/env # See the pod that would be created to debug oc debug mypod-9xbc -o yaml # Debug a resource but launch the debug pod in another namespace # Note: Not all resources can be debugged using --to-namespace without modification. For example, # volumes and service accounts are namespace-dependent. Add '-o yaml' to output the debug pod definition # to disk. If necessary, edit the definition then run 'oc debug -f -' or run without --to-namespace oc debug mypod-9xbc --to-namespace testns", "Delete a pod using the type and name specified in pod.json oc delete -f ./pod.json # Delete resources from a directory containing kustomization.yaml - e.g. dir/kustomization.yaml oc delete -k dir # Delete resources from all files that end with '.json' oc delete -f '*.json' # Delete a pod based on the type and name in the JSON passed into stdin cat pod.json | oc delete -f - # Delete pods and services with same names \"baz\" and \"foo\" oc delete pod,service baz foo # Delete pods and services with label name=myLabel oc delete pods,services -l name=myLabel # Delete a pod with minimal delay oc delete pod foo --now # Force delete a pod on a dead node oc delete pod foo --force # Delete all pods oc delete pods --all", "Describe a node oc describe nodes kubernetes-node-emt8.c.myproject.internal # Describe a pod oc describe pods/nginx # Describe a pod identified by type and name in \"pod.json\" oc describe -f pod.json # Describe all pods oc describe pods # Describe pods by label name=myLabel oc describe pods -l name=myLabel # Describe all pods managed by the 'frontend' replication controller # (rc-created pods get the name of the rc as a prefix in the pod name) oc describe pods frontend", "Diff resources included in pod.json oc diff -f pod.json # Diff file read from stdin cat service.yaml | oc diff -f -", "Edit the service named 'registry' oc edit svc/registry # Use an alternative editor KUBE_EDITOR=\"nano\" oc edit svc/registry # Edit the job 'myjob' in JSON using the v1 API format oc edit job.v1.batch/myjob -o json # Edit the deployment 'mydeployment' in YAML and save the modified config in its annotation oc edit deployment/mydeployment -o yaml --save-config # Edit the 'status' subresource for the 'mydeployment' deployment oc edit deployment mydeployment --subresource='status'", "List recent events in the default namespace oc events # List recent events in all namespaces oc events --all-namespaces # List recent events for the specified pod, then wait for more events and list them as they arrive oc events --for pod/web-pod-13je7 --watch # List recent events in YAML format oc events -oyaml # List recent only events of type 'Warning' or 'Normal' oc events --types=Warning,Normal", "Get output from running the 'date' command from pod mypod, using the first container by default oc 
exec mypod -- date # Get output from running the 'date' command in ruby-container from pod mypod oc exec mypod -c ruby-container -- date # Switch to raw terminal mode; sends stdin to 'bash' in ruby-container from pod mypod # and sends stdout/stderr from 'bash' back to the client oc exec mypod -c ruby-container -i -t -- bash -il # List contents of /usr from the first container of pod mypod and sort by modification time # If the command you want to execute in the pod has any flags in common (e.g. -i), # you must use two dashes (--) to separate your command's flags/arguments # Also note, do not surround your command and its flags/arguments with quotes # unless that is how you would execute it normally (i.e., do ls -t /usr, not \"ls -t /usr\") oc exec mypod -i -t -- ls -t /usr # Get output from running 'date' command from the first pod of the deployment mydeployment, using the first container by default oc exec deploy/mydeployment -- date # Get output from running 'date' command from the first pod of the service myservice, using the first container by default oc exec svc/myservice -- date", "Get the documentation of the resource and its fields oc explain pods # Get all the fields in the resource oc explain pods --recursive # Get the explanation for deployment in supported api versions oc explain deployments --api-version=apps/v1 # Get the documentation of a specific field of a resource oc explain pods.spec.containers # Get the documentation of resources in different format oc explain deployment --output=plaintext-openapiv2", "Create a route based on service nginx. The new route will reuse nginx's labels oc expose service nginx # Create a route and specify your own label and route name oc expose service nginx -l name=myroute --name=fromdowntown # Create a route and specify a host name oc expose service nginx --hostname=www.example.com # Create a route with a wildcard oc expose service nginx --hostname=x.example.com --wildcard-policy=Subdomain # This would be equivalent to *.example.com. NOTE: only hosts are matched by the wildcard; subdomains would not be included # Expose a deployment configuration as a service and use the specified port oc expose dc ruby-hello-world --port=8080 # Expose a service as a route in the specified path oc expose service nginx --path=/nginx", "Extract the secret \"test\" to the current directory oc extract secret/test # Extract the config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp # Extract the config map \"nginx\" to STDOUT oc extract configmap/nginx --to=- # Extract only the key \"nginx.conf\" from config map \"nginx\" to the /tmp directory oc extract configmap/nginx --to=/tmp --keys=nginx.conf", "List all pods in ps output format oc get pods # List all pods in ps output format with more information (such as node name) oc get pods -o wide # List a single replication controller with specified NAME in ps output format oc get replicationcontroller web # List deployments in JSON output format, in the \"v1\" version of the \"apps\" API group oc get deployments.v1.apps -o json # List a single pod in JSON output format oc get -o json pod web-pod-13je7 # List a pod identified by type and name specified in \"pod.yaml\" in JSON output format oc get -f pod.yaml -o json # List resources from a directory with kustomization.yaml - e.g. 
dir/kustomization.yaml oc get -k dir/ # Return only the phase value of the specified pod oc get -o template pod/web-pod-13je7 --template={{.status.phase}} # List resource information in custom columns oc get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image # List all replication controllers and services together in ps output format oc get rc,services # List one or more resources by their type and names oc get rc/web service/frontend pods/web-pod-13je7 # List the 'status' subresource for a single pod oc get pod web-pod-13je7 --subresource status", "Starts an auth code flow to the issuer url with the client id and the given extra scopes oc get-token --client-id=client-id --issuer-url=test.issuer.url --extra-scopes=email,profile # Starts an authe code flow to the issuer url with a different callback address. oc get-token --client-id=client-id --issuer-url=test.issuer.url --callback-address=127.0.0.1:8343", "Idle the scalable controllers associated with the services listed in to-idle.txt USD oc idle --resource-names-file to-idle.txt", "Remove the entrypoint on the mysql:latest image oc image append --from mysql:latest --to myregistry.com/myimage:latest --image '{\"Entrypoint\":null}' # Add a new layer to the image oc image append --from mysql:latest --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to the image and store the result on disk # This results in USD(pwd)/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local layer.tar.gz # Add a new layer to the image and store the result on disk in a designated directory # This will result in USD(pwd)/mysql-local/v2/mysql/blobs,manifests oc image append --from mysql:latest --to file://mysql:local --dir mysql-local layer.tar.gz # Add a new layer to an image that is stored on disk (~/mysql-local/v2/image exists) oc image append --from-dir ~/mysql-local --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to an image that was mirrored to the current directory on disk (USD(pwd)/v2/image exists) oc image append --from-dir v2 --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for an os/arch that is different from the system's os/arch # Note: The first image in the manifest list that matches the filter will be returned when --keep-manifest-list is not specified oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests when keep-manifest-list is specified oc image append --from docker.io/library/busybox:latest --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz # Add a new layer to a multi-architecture image for all the os/arch manifests that is specified by the filter, while preserving the manifestlist oc image append --from docker.io/library/busybox:latest --filter-by-os=linux/s390x --keep-manifest-list --to myregistry.com/myimage:latest layer.tar.gz", "Extract the busybox image into the current directory oc image extract docker.io/library/busybox:latest # Extract the busybox image into a designated directory (must exist) oc image extract docker.io/library/busybox:latest --path /:/tmp/busybox # Extract the busybox image into the current directory for linux/s390x platform # Note: Wildcard filter is not supported with extract; pass a single os/arch to extract oc image extract docker.io/library/busybox:latest --filter-by-os=linux/s390x # 
Extract a single file from the image into the current directory oc image extract docker.io/library/centos:7 --path /bin/bash:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into the current directory oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:. # Extract all .repo files from the image's /etc/yum.repos.d/ folder into a designated directory (must exist) # This results in /tmp/yum.repos.d/*.repo on local system oc image extract docker.io/library/centos:7 --path /etc/yum.repos.d/*.repo:/tmp/yum.repos.d # Extract an image stored on disk into the current directory (USD(pwd)/v2/busybox/blobs,manifests exists) # --confirm is required because the current directory is not empty oc image extract file://busybox:local --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into the current directory # --confirm is required because the current directory is not empty (USD(pwd)/busybox-mirror-dir/v2/busybox exists) oc image extract file://busybox:local --dir busybox-mirror-dir --confirm # Extract an image stored on disk in a directory other than USD(pwd)/v2 into a designated directory (must exist) oc image extract file://busybox:local --dir busybox-mirror-dir --path /:/tmp/busybox # Extract the last layer in the image oc image extract docker.io/library/centos:7[-1] # Extract the first three layers of the image oc image extract docker.io/library/centos:7[:3] # Extract the last three layers of the image oc image extract docker.io/library/centos:7[-3:]", "Show information about an image oc image info quay.io/openshift/cli:latest # Show information about images matching a wildcard oc image info quay.io/openshift/cli:4.* # Show information about a file mirrored to disk under DIR oc image info --dir=DIR file://library/busybox:latest # Select which image from a multi-OS image to show oc image info library/busybox:latest --filter-by-os=linux/arm64", "Copy image to another tag oc image mirror myregistry.com/myimage:latest myregistry.com/myimage:stable # Copy image to another registry oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable # Copy all tags starting with mysql to the destination repository oc image mirror myregistry.com/myimage:mysql* docker.io/myrepository/myimage # Copy image to disk, creating a directory structure that can be served as a registry oc image mirror myregistry.com/myimage:latest file://myrepository/myimage:latest # Copy image to S3 (pull from <bucket>.s3.amazonaws.com/image:latest) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image:latest # Copy image to S3 without setting a tag (pull via @<digest>) oc image mirror myregistry.com/myimage:latest s3://s3.amazonaws.com/<region>/<bucket>/image # Copy image to multiple locations oc image mirror myregistry.com/myimage:latest docker.io/myrepository/myimage:stable docker.io/myrepository/myimage:dev # Copy multiple images oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test myregistry.com/myimage:new=myregistry.com/other:target # Copy manifest list of a multi-architecture image, even if only a single image is found oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that with multi-arch images, this results in a new manifest list digest that includes only # the 
filtered manifests oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=os/arch # Copy all os/arch manifests of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see list of os/arch manifests that will be mirrored oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --keep-manifest-list=true # Note the above command is equivalent to oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=.* # Copy specific os/arch manifest of a multi-architecture image # Run 'oc image info myregistry.com/myimage:latest' to see available os/arch for multi-arch images # Note that the target registry may reject a manifest list if the platform specific images do not all # exist. You must use a registry with sparse registry support enabled. oc image mirror myregistry.com/myimage:latest=myregistry.com/other:test --filter-by-os=linux/386 --keep-manifest-list=true", "Import tag latest into a new image stream oc import-image mystream --from=registry.io/repo/image:latest --confirm # Update imported data for tag latest in an already existing image stream oc import-image mystream # Update imported data for tag stable in an already existing image stream oc import-image mystream:stable # Update imported data for all tags in an existing image stream oc import-image mystream --all # Update imported data for a tag that points to a manifest list to include the full manifest list oc import-image mystream --import-mode=PreserveOriginal # Import all tags into a new image stream oc import-image mystream --from=registry.io/repo/image --all --confirm # Import all tags into a new image stream using a custom timeout oc --request-timeout=5m import-image mystream --from=registry.io/repo/image --all --confirm", "Build the current working directory oc kustomize # Build some shared configuration directory oc kustomize /home/config/production # Build from github oc kustomize https://github.com/kubernetes-sigs/kustomize.git/examples/helloWorld?ref=v1.0.6", "Update pod 'foo' with the label 'unhealthy' and the value 'true' oc label pods foo unhealthy=true # Update pod 'foo' with the label 'status' and the value 'unhealthy', overwriting any existing value oc label --overwrite pods foo status=unhealthy # Update all pods in the namespace oc label pods --all status=unhealthy # Update a pod identified by the type and name in \"pod.json\" oc label -f pod.json status=unhealthy # Update pod 'foo' only if the resource is unchanged from version 1 oc label pods foo status=unhealthy --resource-version=1 # Update pod 'foo' by removing a label named 'bar' if it exists # Does not require the --overwrite flag oc label pods foo bar-", "Log in interactively oc login --username=myuser # Log in to the given server with the given certificate authority file oc login localhost:8443 --certificate-authority=/path/to/cert.crt # Log in to the given server with the given credentials (will not prompt interactively) oc login localhost:8443 --username=myuser --password=mypass # Log in to the given server through a browser oc login localhost:8443 --web --callback-port 8280", "Log out oc logout", "Start streaming the logs of the most recent build of the openldap build config oc logs -f bc/openldap # Start streaming the logs of the latest deployment of the mysql deployment config oc logs -f dc/mysql # Get the logs of the first deployment for the mysql deployment config. 
Note that logs # from older deployments may not exist either because the deployment was successful # or due to deployment pruning or manual deletion of the deployment oc logs --version=1 dc/mysql # Return a snapshot of ruby-container logs from pod backend oc logs backend -c ruby-container # Start streaming of ruby-container logs from pod backend oc logs -f pod/backend -c ruby-container", "List all local templates and image streams that can be used to create an app oc new-app --list # Create an application based on the source code in the current git repository (with a public remote) and a container image oc new-app . --image=registry/repo/langimage # Create an application myapp with Docker based build strategy expecting binary input oc new-app --strategy=docker --binary --name myapp # Create a Ruby application based on the provided [image]~[source code] combination oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git # Use the public container registry MySQL image to create an app. Generated artifacts will be labeled with db=mysql oc new-app mysql MYSQL_USER=user MYSQL_PASSWORD=pass MYSQL_DATABASE=testdb -l db=mysql # Use a MySQL image in a private registry to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/mysql --name=private # Use an image with the full manifest list to create an app and override application artifacts' names oc new-app --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create an application from a remote repository using its beta4 branch oc new-app https://github.com/openshift/ruby-hello-world#beta4 # Create an application based on a stored template, explicitly setting a parameter value oc new-app --template=ruby-helloworld-sample --param=MYSQL_USER=admin # Create an application from a remote repository and specify a context directory oc new-app https://github.com/youruser/yourgitrepo --context-dir=src/build # Create an application from a remote private repository and specify which existing secret to use oc new-app https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create an application based on a template file, explicitly setting a parameter value oc new-app --file=./example/myapp/template.json --param=MYSQL_USER=admin # Search all templates, image streams, and container images for the ones that match \"ruby\" oc new-app --search ruby # Search for \"ruby\", but only in stored templates (--template, --image-stream and --image # can be used to filter search results) oc new-app --search --template=ruby # Search for \"ruby\" in stored templates and print the output as YAML oc new-app --search --template=ruby --output=yaml", "Create a build config based on the source code in the current git repository (with a public # remote) and a container image oc new-build . 
--image=repo/langimage # Create a NodeJS build config based on the provided [image]~[source code] combination oc new-build centos/nodejs-8-centos7~https://github.com/sclorg/nodejs-ex.git # Create a build config from a remote repository using its beta2 branch oc new-build https://github.com/openshift/ruby-hello-world#beta2 # Create a build config using a Dockerfile specified as an argument oc new-build -D USD'FROM centos:7\\nRUN yum install -y httpd' # Create a build config from a remote repository and add custom environment variables oc new-build https://github.com/openshift/ruby-hello-world -e RACK_ENV=development # Create a build config from a remote private repository and specify which existing secret to use oc new-build https://github.com/youruser/yourgitrepo --source-secret=yoursecret # Create a build config using an image with the full manifest list to create an app and override application artifacts' names oc new-build --image=myregistry.com/mycompany/image --name=private --import-mode=PreserveOriginal # Create a build config from a remote repository and inject the npmrc into a build oc new-build https://github.com/openshift/ruby-hello-world --build-secret npmrc:.npmrc # Create a build config from a remote repository and inject environment data into a build oc new-build https://github.com/openshift/ruby-hello-world --build-config-map env:config # Create a build config that gets its input from a remote repository and another container image oc new-build https://github.com/openshift/ruby-hello-world --source-image=openshift/jenkins-1-centos7 --source-image-path=/var/lib/jenkins:tmp", "Create a new project with minimal information oc new-project web-team-dev # Create a new project with a display name and description oc new-project web-team-dev --display-name=\"Web Team Development\" --description=\"Development project for the web team.\"", "Observe changes to services oc observe services # Observe changes to services, including the clusterIP and invoke a script for each oc observe services --template '{ .spec.clusterIP }' -- register_dns.sh # Observe changes to services filtered by a label selector oc observe services -l regist-dns=true --template '{ .spec.clusterIP }' -- register_dns.sh", "Partially update a node using a strategic merge patch, specifying the patch as JSON oc patch node k8s-node-1 -p '{\"spec\":{\"unschedulable\":true}}' # Partially update a node using a strategic merge patch, specifying the patch as YAML oc patch node k8s-node-1 -p USD'spec:\\n unschedulable: true' # Partially update a node identified by the type and name specified in \"node.json\" using strategic merge patch oc patch -f node.json -p '{\"spec\":{\"unschedulable\":true}}' # Update a container's image; spec.containers[*].name is required because it's a merge key oc patch pod valid-pod -p '{\"spec\":{\"containers\":[{\"name\":\"kubernetes-serve-hostname\",\"image\":\"new image\"}]}}' # Update a container's image using a JSON patch with positional arrays oc patch pod valid-pod --type='json' -p='[{\"op\": \"replace\", \"path\": \"/spec/containers/0/image\", \"value\":\"new image\"}]' # Update a deployment's replicas through the 'scale' subresource using a merge patch oc patch deployment nginx-deployment --subresource='scale' --type='merge' -p '{\"spec\":{\"replicas\":2}}'", "List all available plugins oc plugin list", "Add the 'view' role to user1 for the current project oc policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc policy add-role-to-user edit -z 
serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc policy scc-subject-review -f myresourcewithsa.yaml", "Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in the pod oc port-forward pod/mypod 5000 6000 # Listen on ports 5000 and 6000 locally, forwarding data to/from ports 5000 and 6000 in a pod selected by the deployment oc port-forward deployment/mydeployment 5000 6000 # Listen on port 8443 locally, forwarding to the targetPort of the service's port named \"https\" in a pod selected by the service oc port-forward service/myservice 8443:https # Listen on port 8888 locally, forwarding to 5000 in the pod oc port-forward pod/mypod 8888:5000 # Listen on port 8888 on all addresses, forwarding to 5000 in the pod oc port-forward --address 0.0.0.0 pod/mypod 8888:5000 # Listen on port 8888 on localhost and selected IP, forwarding to 5000 in the pod oc port-forward --address localhost,10.19.21.23 pod/mypod 8888:5000 # Listen on a random port locally, forwarding to 5000 in the pod oc port-forward pod/mypod :5000", "Convert the template.json file into a resource list and pass to create oc process -f template.json | oc create -f - # Process a file locally instead of contacting the server oc process -f template.json --local -o yaml # Process template while passing a user-defined label oc process -f template.json -l name=mytemplate # Convert a stored template into a resource list oc process foo # Convert a stored template into a resource list by setting/overriding parameter values oc process foo PARM1=VALUE1 PARM2=VALUE2 # Convert a template stored in different namespace into a resource list oc process openshift//foo # Convert template.json into a resource list cat template.json | oc process -f -", "Switch to the 'myapp' project oc project myapp # Display the project currently in use oc project", "List all projects oc projects", "To proxy all of the Kubernetes API and nothing else oc proxy --api-prefix=/ # To proxy only part of the Kubernetes API and also some static files # You can get pods info with 'curl localhost:8001/api/v1/pods' oc proxy --www=/my/files --www-prefix=/static/ --api-prefix=/api/ # To proxy the entire Kubernetes API at a different root # You can get pods info with 'curl localhost:8001/custom/api/v1/pods' oc proxy --api-prefix=/custom/ # Run a proxy to the Kubernetes API server on port 8011, serving static 
content from ./local/www/ oc proxy --port=8011 --www=./local/www/ # Run a proxy to the Kubernetes API server on an arbitrary local port # The chosen port for the server will be output to stdout oc proxy --port=0 # Run a proxy to the Kubernetes API server, changing the API prefix to k8s-api # This makes e.g. the pods API available at localhost:8001/k8s-api/v1/pods/ oc proxy --api-prefix=/k8s-api", "Log in to the integrated registry oc registry login # Log in to different registry using BASIC auth credentials oc registry login --registry quay.io/myregistry --auth-basic=USER:PASS", "Replace a pod using the data in pod.json oc replace -f ./pod.json # Replace a pod based on the JSON passed into stdin cat pod.json | oc replace -f - # Update a single-container pod's image version (tag) to v4 oc get pod mypod -o yaml | sed 's/\\(image: myimage\\):.*USD/\\1:v4/' | oc replace -f - # Force replace, delete and then re-create the resource oc replace --force -f ./pod.json", "Perform a rollback to the last successfully completed deployment for a deployment config oc rollback frontend # See what a rollback to version 3 will look like, but do not perform the rollback oc rollback frontend --to-version=3 --dry-run # Perform a rollback to a specific deployment oc rollback frontend-2 # Perform the rollback manually by piping the JSON of the new config back to oc oc rollback frontend -o json | oc replace dc/frontend -f - # Print the updated deployment configuration in JSON format instead of performing the rollback oc rollback frontend -o json", "Cancel the in-progress deployment based on 'nginx' oc rollout cancel dc/nginx", "View the rollout history of a deployment oc rollout history dc/nginx # View the details of deployment revision 3 oc rollout history dc/nginx --revision=3", "Start a new rollout based on the latest images defined in the image change triggers oc rollout latest dc/nginx # Print the rolled out deployment config oc rollout latest dc/nginx -o json", "Mark the nginx deployment as paused. Any current state of # the deployment will continue its function, new updates to the deployment will not # have an effect as long as the deployment is paused oc rollout pause dc/nginx", "Restart a deployment oc rollout restart deployment/nginx # Restart a daemon set oc rollout restart daemonset/abc # Restart deployments with the app=nginx label oc rollout restart deployment --selector=app=nginx", "Resume an already paused deployment oc rollout resume dc/nginx", "Retry the latest failed deployment based on 'frontend' # The deployer pod and any hook pods are deleted for the latest failed deployment oc rollout retry dc/frontend", "Watch the status of the latest rollout oc rollout status dc/nginx", "Roll back to the previous deployment oc rollout undo dc/nginx # Roll back to deployment revision 3. 
The replication controller for that version must exist oc rollout undo dc/nginx --to-revision=3", "Open a shell session on the first container in pod 'foo' oc rsh foo # Open a shell session on the first container in pod 'foo' and namespace 'bar' # (Note that oc client specific arguments must come before the resource name and its arguments) oc rsh -n bar foo # Run the command 'cat /etc/resolv.conf' inside pod 'foo' oc rsh foo cat /etc/resolv.conf # See the configuration of your internal registry oc rsh dc/docker-registry cat config.yml # Open a shell session on the container named 'index' inside a pod of your job oc rsh -c index job/scheduled", "Synchronize a local directory with a pod directory oc rsync ./local/dir/ POD:/remote/dir # Synchronize a pod directory with a local directory oc rsync POD:/remote/dir/ ./local/dir", "Start a nginx pod oc run nginx --image=nginx # Start a hazelcast pod and let the container expose port 5701 oc run hazelcast --image=hazelcast/hazelcast --port=5701 # Start a hazelcast pod and set environment variables \"DNS_DOMAIN=cluster\" and \"POD_NAMESPACE=default\" in the container oc run hazelcast --image=hazelcast/hazelcast --env=\"DNS_DOMAIN=cluster\" --env=\"POD_NAMESPACE=default\" # Start a hazelcast pod and set labels \"app=hazelcast\" and \"env=prod\" in the container oc run hazelcast --image=hazelcast/hazelcast --labels=\"app=hazelcast,env=prod\" # Dry run; print the corresponding API objects without creating them oc run nginx --image=nginx --dry-run=client # Start a nginx pod, but overload the spec with a partial set of values parsed from JSON oc run nginx --image=nginx --overrides='{ \"apiVersion\": \"v1\", \"spec\": { ... } }' # Start a busybox pod and keep it in the foreground, don't restart it if it exits oc run -i -t busybox --image=busybox --restart=Never # Start the nginx pod using the default command, but use custom arguments (arg1 .. argN) for that command oc run nginx --image=nginx -- <arg1> <arg2> ... <argN> # Start the nginx pod using a different command and custom arguments oc run nginx --image=nginx --command -- <cmd> <arg1> ... 
<argN>", "Scale a replica set named 'foo' to 3 oc scale --replicas=3 rs/foo # Scale a resource identified by type and name specified in \"foo.yaml\" to 3 oc scale --replicas=3 -f foo.yaml # If the deployment named mysql's current size is 2, scale mysql to 3 oc scale --current-replicas=2 --replicas=3 deployment/mysql # Scale multiple replication controllers oc scale --replicas=5 rc/example1 rc/example2 rc/example3 # Scale stateful set named 'web' to 3 oc scale --replicas=3 statefulset/web", "Add an image pull secret to a service account to automatically use it for pulling pod images oc secrets link serviceaccount-name pull-secret --for=pull # Add an image pull secret to a service account to automatically use it for both pulling and pushing build images oc secrets link builder builder-image-secret --for=pull,mount", "Unlink a secret currently associated with a service account oc secrets unlink serviceaccount-name secret-name another-secret-name", "Clear post-commit hook on a build config oc set build-hook bc/mybuild --post-commit --remove # Set the post-commit hook to execute a test suite using a new entrypoint oc set build-hook bc/mybuild --post-commit --command -- /bin/bash -c /var/lib/test-image.sh # Set the post-commit hook to execute a shell script oc set build-hook bc/mybuild --post-commit --script=\"/var/lib/test-image.sh param1 param2 && /var/lib/done.sh\"", "Clear the push secret on a build config oc set build-secret --push --remove bc/mybuild # Set the pull secret on a build config oc set build-secret --pull bc/mybuild mysecret # Set the push and pull secret on a build config oc set build-secret --push --pull bc/mybuild mysecret # Set the source secret on a set of build configs matching a selector oc set build-secret --source -l app=myapp gitsecret", "Set the 'password' key of a secret oc set data secret/foo password=this_is_secret # Remove the 'password' key from a secret oc set data secret/foo password- # Update the 'haproxy.conf' key of a config map from a file on disk oc set data configmap/bar --from-file=../haproxy.conf # Update a secret with the contents of a directory, one key per file oc set data secret/foo --from-file=secret-dir", "Clear pre and post hooks on a deployment config oc set deployment-hook dc/myapp --remove --pre --post # Set the pre deployment hook to execute a db migration command for an application # using the data volume from the application oc set deployment-hook dc/myapp --pre --volumes=data -- /var/lib/migrate-db.sh # Set a mid deployment hook along with additional environment variables oc set deployment-hook dc/myapp --mid --volumes=data -e VAR1=value1 -e VAR2=value2 -- /var/lib/prepare-deploy.sh", "Update deployment config 'myapp' with a new environment variable oc set env dc/myapp STORAGE_DIR=/local # List the environment variables defined on a build config 'sample-build' oc set env bc/sample-build --list # List the environment variables defined on all pods oc set env pods --all --list # Output modified build config in YAML oc set env bc/sample-build STORAGE_DIR=/data -o yaml # Update all containers in all replication controllers in the project to have ENV=prod oc set env rc --all ENV=prod # Import environment from a secret oc set env --from=secret/mysecret dc/myapp # Import environment from a config map with a prefix oc set env --from=configmap/myconfigmap --prefix=MYSQL_ dc/myapp # Remove the environment variable ENV from container 'c1' in all deployment configs oc set env dc --all --containers=\"c1\" ENV- # Remove the environment variable ENV from 
a deployment config definition on disk and # update the deployment config on the server oc set env -f dc.json ENV- # Set some of the local shell environment into a deployment config on the server oc set env | grep RAILS_ | oc env -e - dc/myapp", "Set a deployment config's nginx container image to 'nginx:1.9.1', and its busybox container image to 'busybox'. oc set image dc/nginx busybox=busybox nginx=nginx:1.9.1 # Set a deployment config's app container image to the image referenced by the imagestream tag 'openshift/ruby:2.3'. oc set image dc/myapp app=openshift/ruby:2.3 --source=imagestreamtag # Update all deployments' and rc's nginx container's image to 'nginx:1.9.1' oc set image deployments,rc nginx=nginx:1.9.1 --all # Update image of all containers of daemonset abc to 'nginx:1.9.1' oc set image daemonset abc *=nginx:1.9.1 # Print result (in YAML format) of updating nginx container image from local file, without hitting the server oc set image -f path/to/file.yaml nginx=nginx:1.9.1 --local -o yaml", "Print all of the image streams and whether they resolve local names oc set image-lookup # Use local name lookup on image stream mysql oc set image-lookup mysql # Force a deployment to use local name lookup oc set image-lookup deploy/mysql # Show the current status of the deployment lookup oc set image-lookup deploy/mysql --list # Disable local name lookup on image stream mysql oc set image-lookup mysql --enabled=false # Set local name lookup on all image streams oc set image-lookup --all", "Clear both readiness and liveness probes off all containers oc set probe dc/myapp --remove --readiness --liveness # Set an exec action as a liveness probe to run 'echo ok' oc set probe dc/myapp --liveness -- echo ok # Set a readiness probe to try to open a TCP socket on 3306 oc set probe rc/mysql --readiness --open-tcp=3306 # Set an HTTP startup probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --startup --get-url=http://:8080/healthz # Set an HTTP readiness probe for port 8080 and path /healthz over HTTP on the pod IP oc set probe dc/webapp --readiness --get-url=http://:8080/healthz # Set an HTTP readiness probe over HTTPS on 127.0.0.1 for a hostNetwork pod oc set probe dc/router --readiness --get-url=https://127.0.0.1:1936/stats # Set only the initial-delay-seconds field on all deployments oc set probe dc --all --readiness --initial-delay-seconds=30", "Set a deployments nginx container CPU limits to \"200m and memory to 512Mi\" oc set resources deployment nginx -c=nginx --limits=cpu=200m,memory=512Mi # Set the resource request and limits for all containers in nginx oc set resources deployment nginx --limits=cpu=200m,memory=512Mi --requests=cpu=100m,memory=256Mi # Remove the resource requests for resources on containers in nginx oc set resources deployment nginx --limits=cpu=0,memory=0 --requests=cpu=0,memory=0 # Print the result (in YAML format) of updating nginx container limits locally, without hitting the server oc set resources -f path/to/file.yaml --limits=cpu=200m,memory=512Mi --local -o yaml", "Print the backends on the route 'web' oc set route-backends web # Set two backend services on route 'web' with 2/3rds of traffic going to 'a' oc set route-backends web a=2 b=1 # Increase the traffic percentage going to b by 10%% relative to a oc set route-backends web --adjust b=+10%% # Set traffic percentage going to b to 10%% of the traffic going to a oc set route-backends web --adjust b=10%% # Set weight of b to 10 oc set route-backends web --adjust b=10 # Set the 
weight to all backends to zero oc set route-backends web --zero", "Set the labels and selector before creating a deployment/service pair. oc create service clusterip my-svc --clusterip=\"None\" -o yaml --dry-run | oc set selector --local -f - 'environment=qa' -o yaml | oc create -f - oc create deployment my-dep -o yaml --dry-run | oc label --local -f - environment=qa -o yaml | oc create -f -", "Set deployment nginx-deployment's service account to serviceaccount1 oc set serviceaccount deployment nginx-deployment serviceaccount1 # Print the result (in YAML format) of updated nginx deployment with service account from a local file, without hitting the API server oc set sa -f nginx-deployment.yaml serviceaccount1 --local --dry-run -o yaml", "Update a cluster role binding for serviceaccount1 oc set subject clusterrolebinding admin --serviceaccount=namespace:serviceaccount1 # Update a role binding for user1, user2, and group1 oc set subject rolebinding admin --user=user1 --user=user2 --group=group1 # Print the result (in YAML format) of updating role binding subjects locally, without hitting the server oc create rolebinding admin --role=admin --user=admin -o yaml --dry-run | oc set subject --local -f - --user=foo -o yaml", "Print the triggers on the deployment config 'myapp' oc set triggers dc/myapp # Set all triggers to manual oc set triggers dc/myapp --manual # Enable all automatic triggers oc set triggers dc/myapp --auto # Reset the GitHub webhook on a build to a new, generated secret oc set triggers bc/webapp --from-github oc set triggers bc/webapp --from-webhook # Remove all triggers oc set triggers bc/webapp --remove-all # Stop triggering on config change oc set triggers dc/myapp --from-config --remove # Add an image trigger to a build config oc set triggers bc/webapp --from-image=namespace1/image:latest # Add an image trigger to a stateful set on the main container oc set triggers statefulset/db --from-image=namespace1/image:latest -c main", "List volumes defined on all deployment configs in the current project oc set volume dc --all # Add a new empty dir volume to deployment config (dc) 'myapp' mounted under # /var/lib/myapp oc set volume dc/myapp --add --mount-path=/var/lib/myapp # Use an existing persistent volume claim (PVC) to overwrite an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-name=pvc1 --overwrite # Remove volume 'v1' from deployment config 'myapp' oc set volume dc/myapp --remove --name=v1 # Create a new persistent volume claim that overwrites an existing volume 'v1' oc set volume dc/myapp --add --name=v1 -t pvc --claim-size=1G --overwrite # Change the mount point for volume 'v1' to /data oc set volume dc/myapp --add --name=v1 -m /data --overwrite # Modify the deployment config by removing volume mount \"v1\" from container \"c1\" # (and by removing the volume \"v1\" if no other containers have volume mounts that reference it) oc set volume dc/myapp --remove --name=v1 --containers=c1 # Add new volume based on a more complex volume source (AWS EBS, GCE PD, # Ceph, Gluster, NFS, ISCSI, ...) 
oc set volume dc/myapp --add -m /data --source=<json-string>", "Starts build from build config \"hello-world\" oc start-build hello-world # Starts build from a previous build \"hello-world-1\" oc start-build --from-build=hello-world-1 # Use the contents of a directory as build input oc start-build hello-world --from-dir=src/ # Send the contents of a Git repository to the server from tag 'v2' oc start-build hello-world --from-repo=../hello-world --commit=v2 # Start a new build for build config \"hello-world\" and watch the logs until the build # completes or fails oc start-build hello-world --follow # Start a new build for build config \"hello-world\" and wait until the build completes. It # exits with a non-zero return code if the build fails oc start-build hello-world --wait", "See an overview of the current project oc status # Export the overview of the current project in an svg file oc status -o dot | dot -T svg -o project.svg # See an overview of the current project including details for any identified issues oc status --suggest", "Tag the current image for the image stream 'openshift/ruby' and tag '2.0' into the image stream 'yourproject/ruby with tag 'tip' oc tag openshift/ruby:2.0 yourproject/ruby:tip # Tag a specific image oc tag openshift/ruby@sha256:6b646fa6bf5e5e4c7fa41056c27910e679c03ebe7f93e361e6515a9da7e258cc yourproject/ruby:tip # Tag an external container image oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip # Tag an external container image and request pullthrough for it oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --reference-policy=local # Tag an external container image and include the full manifest list oc tag --source=docker openshift/origin-control-plane:latest yourproject/ruby:tip --import-mode=PreserveOriginal # Remove the specified spec tag from an image stream oc tag openshift/origin-control-plane:latest -d", "Print the OpenShift client, kube-apiserver, and openshift-apiserver version information for the current context oc version # Print the OpenShift client, kube-apiserver, and openshift-apiserver version numbers for the current context in json format oc version --output json # Print the OpenShift client version information for the current context oc version --client", "Wait for the pod \"busybox1\" to contain the status condition of type \"Ready\" oc wait --for=condition=Ready pod/busybox1 # The default value of status condition is true; you can wait for other targets after an equal delimiter (compared after Unicode simple case folding, which is a more general form of case-insensitivity) oc wait --for=condition=Ready=false pod/busybox1 # Wait for the pod \"busybox1\" to contain the status phase to be \"Running\" oc wait --for=jsonpath='{.status.phase}'=Running pod/busybox1 # Wait for the service \"loadbalancer\" to have ingress. 
oc wait --for=jsonpath='{.status.loadBalancer.ingress}' service/loadbalancer # Wait for the pod \"busybox1\" to be deleted, with a timeout of 60s, after having issued the \"delete\" command oc delete pod/busybox1 oc wait --for=delete pod/busybox1 --timeout=60s", "Display the currently authenticated user oc whoami", "Build the dependency tree for the 'latest' tag in <image-stream> oc adm build-chain <image-stream> # Build the dependency tree for the 'v2' tag in dot format and visualize it via the dot utility oc adm build-chain <image-stream>:v2 -o dot | dot -T svg -o deps.svg # Build the dependency tree across all namespaces for the specified image stream tag found in the 'test' namespace oc adm build-chain <image-stream> -n test --all", "Mirror an operator-registry image and its contents to a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com # Mirror an operator-registry image and its contents to a particular namespace in a registry oc adm catalog mirror quay.io/my/image:latest myregistry.com/my-namespace # Mirror to an airgapped registry by first mirroring to files oc adm catalog mirror quay.io/my/image:latest file:///local/index oc adm catalog mirror file:///local/index/my/image:latest my-airgapped-registry.com # Configure a cluster to use a mirrored registry oc apply -f manifests/imageDigestMirrorSet.yaml # Edit the mirroring mappings and mirror with \"oc image mirror\" manually oc adm catalog mirror --manifests-only quay.io/my/image:latest myregistry.com oc image mirror -f manifests/mapping.txt # Delete all ImageDigestMirrorSets generated by oc adm catalog mirror oc delete imagedigestmirrorset -l operators.openshift.org/catalog=true", "Approve CSR 'csr-sqgzp' oc adm certificate approve csr-sqgzp", "Deny CSR 'csr-sqgzp' oc adm certificate deny csr-sqgzp", "Mark node \"foo\" as unschedulable oc adm cordon foo", "Output a bootstrap project template in YAML format to stdout oc adm create-bootstrap-project-template -o yaml", "Output a template for the error page to stdout oc adm create-error-template", "Output a template for the login page to stdout oc adm create-login-template", "Output a template for the provider selection page to stdout oc adm create-provider-selection-template", "Drain node \"foo\", even if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set on it oc adm drain foo --force # As above, but abort if there are pods not managed by a replication controller, replica set, job, daemon set, or stateful set, and use a grace period of 15 minutes oc adm drain foo --grace-period=900", "Add user1 and user2 to my-group oc adm groups add-users my-group user1 user2", "Add a group with no users oc adm groups new my-group # Add a group with two users oc adm groups new my-group user1 user2 # Add a group with one user and shorter output oc adm groups new my-group user1 -o name", "Prune all orphaned groups oc adm groups prune --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm groups prune --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm groups prune --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm groups prune groups/group_name groups/other_name 
--sync-config=/path/to/ldap-sync-config.yaml --confirm", "Remove user1 and user2 from my-group oc adm groups remove-users my-group user1 user2", "Sync all groups with an LDAP server oc adm groups sync --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync all groups except the ones from the blacklist file with an LDAP server oc adm groups sync --blacklist=/path/to/blacklist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific groups specified in an allowlist file with an LDAP server oc adm groups sync --whitelist=/path/to/allowlist.txt --sync-config=/path/to/sync-config.yaml --confirm # Sync all OpenShift groups that have been synced previously with an LDAP server oc adm groups sync --type=openshift --sync-config=/path/to/ldap-sync-config.yaml --confirm # Sync specific OpenShift groups if they have been synced previously with an LDAP server oc adm groups sync groups/group1 groups/group2 groups/group3 --sync-config=/path/to/sync-config.yaml --confirm", "Collect debugging data for the \"openshift-apiserver\" clusteroperator oc adm inspect clusteroperator/openshift-apiserver # Collect debugging data for the \"openshift-apiserver\" and \"kube-apiserver\" clusteroperators oc adm inspect clusteroperator/openshift-apiserver clusteroperator/kube-apiserver # Collect debugging data for all clusteroperators oc adm inspect clusteroperator # Collect debugging data for all clusteroperators and clusterversions oc adm inspect clusteroperators,clusterversions", "Update the imagecontentsourcepolicy.yaml file to a new imagedigestmirrorset file under the mydir directory oc adm migrate icsp imagecontentsourcepolicy.yaml --dest-dir mydir", "Perform a dry-run of updating all objects oc adm migrate template-instances # To actually perform the update, the confirm flag must be appended oc adm migrate template-instances --confirm", "Gather information using the default plug-in image and command, writing into ./must-gather.local.<rand> oc adm must-gather # Gather information with a specific local folder to copy to oc adm must-gather --dest-dir=/local/directory # Gather audit information oc adm must-gather -- /usr/bin/gather_audit_logs # Gather information using multiple plug-in images oc adm must-gather --image=quay.io/kubevirt/must-gather --image=quay.io/openshift/origin-must-gather # Gather information using a specific image stream plug-in oc adm must-gather --image-stream=openshift/must-gather:latest # Gather information using a specific image, command, and pod directory oc adm must-gather --image=my/image:tag --source-dir=/pod/directory -- myspecial-command.sh", "Create a new project using a node selector oc adm new-project myproject --node-selector='type=user-node,region=east'", "Show kubelet logs from all masters oc adm node-logs --role master -u kubelet # See what logs are available in masters in /var/log oc adm node-logs --role master --path=/ # Display cron log file from all masters oc adm node-logs --role master --path=cron", "Watch platform certificates. 
oc adm ocp-certificates monitor-certificates", "Remove only CA certificates created before a certain date from all trust bundles oc adm ocp-certificates remove-old-trust configmaps -A --all --created-before 2023-06-05T14:44:06Z", "Regenerate the MCO certs without modifying user-data secrets oc adm certificates regenerate-machine-config-server-serving-cert --update-ignition=false # Update the user-data secrets to use new MCS certs oc adm certificates update-ignition-ca-bundle-for-machine-config-server", "Provide isolation for project p1 oc adm pod-network isolate-projects <p1> # Allow all projects with label name=top-secret to have their own isolated project network oc adm pod-network isolate-projects --selector='name=top-secret'", "Allow project p2 to use project p1 network oc adm pod-network join-projects --to=<p1> <p2> # Allow all projects with label name=top-secret to use project p1 network oc adm pod-network join-projects --to=<p1> --selector='name=top-secret'", "Allow project p1 to access all pods in the cluster and vice versa oc adm pod-network make-projects-global <p1> # Allow all projects with label name=share to access all pods in the cluster and vice versa oc adm pod-network make-projects-global --selector='name=share'", "Add the 'view' role to user1 for the current project oc adm policy add-role-to-user view user1 # Add the 'edit' role to serviceaccount1 for the current project oc adm policy add-role-to-user edit -z serviceaccount1", "Add the 'restricted' security context constraint to group1 and group2 oc adm policy add-scc-to-group restricted group1 group2", "Add the 'restricted' security context constraint to user1 and user2 oc adm policy add-scc-to-user restricted user1 user2 # Add the 'privileged' security context constraint to serviceaccount1 in the current namespace oc adm policy add-scc-to-user privileged -z serviceaccount1", "Check whether service accounts sa1 and sa2 can admit a pod with a template pod spec specified in my_resource.yaml # Service Account specified in myresource.yaml file is ignored oc adm policy scc-review -z sa1,sa2 -f my_resource.yaml # Check whether service accounts system:serviceaccount:bob:default can admit a pod with a template pod spec specified in my_resource.yaml oc adm policy scc-review -z system:serviceaccount:bob:default -f my_resource.yaml # Check whether the service account specified in my_resource_with_sa.yaml can admit the pod oc adm policy scc-review -f my_resource_with_sa.yaml # Check whether the default service account can admit the pod; default is taken since no service account is defined in myresource_with_no_sa.yaml oc adm policy scc-review -f myresource_with_no_sa.yaml", "Check whether user bob can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -f myresource.yaml # Check whether user bob who belongs to projectAdmin group can create a pod specified in myresource.yaml oc adm policy scc-subject-review -u bob -g projectAdmin -f myresource.yaml # Check whether a service account specified in the pod template spec in myresourcewithsa.yaml can create the pod oc adm policy scc-subject-review -f myresourcewithsa.yaml", "Dry run deleting older completed and failed builds and also including # all builds whose associated build config no longer exists oc adm prune builds --orphans # To actually perform the prune operation, the confirm flag must be appended oc adm prune builds --orphans --confirm", "Dry run deleting all but the last complete deployment for every deployment config oc adm prune deployments 
--keep-complete=1 # To actually perform the prune operation, the confirm flag must be appended oc adm prune deployments --keep-complete=1 --confirm", "Prune all orphaned groups oc adm prune groups --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups except the ones from the denylist file oc adm prune groups --blacklist=/path/to/denylist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in an allowlist file oc adm prune groups --whitelist=/path/to/allowlist.txt --sync-config=/path/to/ldap-sync-config.yaml --confirm # Prune all orphaned groups from a list of specific groups specified in a list oc adm prune groups groups/group_name groups/other_name --sync-config=/path/to/ldap-sync-config.yaml --confirm", "See what the prune command would delete if only images and their referrers were more than an hour old # and obsoleted by 3 newer revisions under the same tag were considered oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm # See what the prune command would delete if we are interested in removing images # exceeding currently set limit ranges ('openshift.io/Image') oc adm prune images --prune-over-size-limit # To actually perform the prune operation, the confirm flag must be appended oc adm prune images --prune-over-size-limit --confirm # Force the insecure HTTP protocol with the particular registry host name oc adm prune images --registry-url=http://registry.example.org --confirm # Force a secure connection with a custom certificate authority to the particular registry host name oc adm prune images --registry-url=registry.example.org --certificate-authority=/path/to/custom/ca.crt --confirm", "Reboot all MachineConfigPools oc adm reboot-machine-config-pool mcp/worker mcp/master # Reboot all MachineConfigPools that inherit from worker. This include all custom MachineConfigPools and infra. 
oc adm reboot-machine-config-pool mcp/worker # Reboot masters oc adm reboot-machine-config-pool mcp/master", "Use git to check out the source code for the current cluster release to DIR oc adm release extract --git=DIR # Extract cloud credential requests for AWS oc adm release extract --credentials-requests --cloud=aws # Use git to check out the source code for the current cluster release to DIR from linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release extract --git=DIR quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Show information about the cluster's current release oc adm release info # Show the source code that comprises a release oc adm release info 4.11.2 --commit-urls # Show the source code difference between two releases oc adm release info 4.11.0 4.11.2 --commits # Show where the images referenced by the release are located oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --pullspecs # Show information about linux/s390x image # Note: Wildcard filter is not supported; pass a single os/arch to extract oc adm release info quay.io/openshift-release-dev/ocp-release:4.11.2 --filter-by-os=linux/s390x", "Perform a dry run showing what would be mirrored, including the mirror objects oc adm release mirror 4.11.0 --to myregistry.local/openshift/release --release-image-signature-to-dir /tmp/releases --dry-run # Mirror a release into the current directory oc adm release mirror 4.11.0 --to file://openshift/release --release-image-signature-to-dir /tmp/releases # Mirror a release to another directory in the default location oc adm release mirror 4.11.0 --to-dir /tmp/releases # Upload a release from the current directory to another server oc adm release mirror --from file://openshift/release --to myregistry.com/openshift/release --release-image-signature-to-dir /tmp/releases # Mirror the 4.11.0 release to repository registry.example.com and apply signatures to connected cluster oc adm release mirror --from=quay.io/openshift-release-dev/ocp-release:4.11.0-x86_64 --to=registry.example.com/your/repository --apply-release-image-signature", "Create a release from the latest origin images and push to a DockerHub repository oc adm release new --from-image-stream=4.11 -n origin --to-image docker.io/mycompany/myrepo:latest # Create a new release with updated metadata from a previous release oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 --name 4.11.1 --previous 4.11.0 --metadata ... 
--to-image docker.io/mycompany/myrepo:latest # Create a new release and override a single image oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11 cli=docker.io/mycompany/cli:latest --to-image docker.io/mycompany/myrepo:latest # Run a verification pass to ensure the release can be reproduced oc adm release new --from-release registry.ci.openshift.org/origin/release:v4.11", "Restart all the nodes, 10% at a time oc adm restart-kubelet nodes --all --directive=RemoveKubeletKubeconfig # Restart all the nodes, 20 nodes at a time oc adm restart-kubelet nodes --all --parallelism=20 --directive=RemoveKubeletKubeconfig # Restart all the nodes, 15% at a time oc adm restart-kubelet nodes --all --parallelism=15% --directive=RemoveKubeletKubeconfig # Restart all the masters at the same time oc adm restart-kubelet nodes -l node-role.kubernetes.io/master --parallelism=100% --directive=RemoveKubeletKubeconfig", "Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule' # If a taint with that key and effect already exists, its value is replaced as specified oc adm taint nodes foo dedicated=special-user:NoSchedule # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists oc adm taint nodes foo dedicated:NoSchedule- # Remove from node 'foo' all the taints with key 'dedicated' oc adm taint nodes foo dedicated- # Add a taint with key 'dedicated' on nodes having label myLabel=X oc adm taint node -l myLabel=X dedicated=foo:PreferNoSchedule # Add to node 'foo' a taint with key 'bar' and no value oc adm taint nodes foo bar:NoSchedule", "Show usage statistics for images oc adm top images", "Show usage statistics for image streams oc adm top imagestreams", "Show metrics for all nodes oc adm top node # Show metrics for a given node oc adm top node NODE_NAME", "Show metrics for all pods in the default namespace oc adm top pod # Show metrics for all pods in the given namespace oc adm top pod --namespace=NAMESPACE # Show metrics for a given pod and its containers oc adm top pod POD_NAME --containers # Show metrics for the pods defined by label name=myLabel oc adm top pod -l name=myLabel", "Mark node \"foo\" as schedulable oc adm uncordon foo", "View the update status and available cluster updates oc adm upgrade # Update to the latest version oc adm upgrade --to-latest=true", "Verify the image signature and identity using the local GPG keychain oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 # Verify the image signature and identity using the local GPG keychain and save the status oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --save # Verify the image signature and identity via exposed registry route oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --expected-identity=registry.local:5000/foo/bar:v1 --registry-url=docker-registry.foo.com # Remove all signature verifications from the image oc adm verify-image-signature sha256:c841e9b64e4579bd56c794bdd7c36e1c257110fd2404bebbb8b613e4935228c4 --remove-all", "Wait for all nodes to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/worker mcp/master' oc adm wait-for-node-reboot nodes --all # Wait for masters to complete a requested reboot from 'oc adm reboot-machine-config-pool mcp/master' 
oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master # Wait for masters to complete a specific reboot oc adm wait-for-node-reboot nodes -l node-role.kubernetes.io/master --reboot-number=4", "Wait for all clusteroperators to become stable oc adm wait-for-stable-cluster # Consider operators to be stable if they report as such for 5 minutes straight oc adm wait-for-stable-cluster --minimum-stable-period 5m" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/cli_tools/openshift-cli-oc
Chapter 15. Virtualization
Chapter 15. Virtualization KVM processor performance improvement Virtual CPU timeslice sharing Virtual CPU timeslice sharing is a performance enhancing feature at the Linux scheduler level, where an idle virtual CPU can hand the remainder of its timeslice to another virtual CPU before yielding the CPU. This feature addresses an inherent lock holder preemption issue that exists in SMP systems and can affect performance of virtual CPUs. This feature provides stable performance in multi-processor guests. This feature is supported on both Intel and AMD processors, and is called Pause Loop Exiting (PLE) on Intel processors and Pause Filter on AMD processors. KVM network performance improvements KVM network performance is a critical requirement for virtualization and cloud-based products and solutions. Red Hat Enterprise Linux 6.2 provides a number of network performance optimizations to improve the KVM network para-virtualized driver performance in various setups. Improved small message KVM performance Red Hat Enterprise Linux 6.2 improves the KVM small message performance to satisfy a variety of networking workloads that generate small messages (< 4K). Wire speed requirement in KVM network drivers Virtualization and cloud products that run networking workloads need to run at wire speed. Up until Red Hat Enterprise Linux 6.1, the only way to reach wire speed on a 10 GB Ethernet NIC with lower CPU utilization was to use PCI device assignment (passthrough), which limits other features like memory overcommit and guest migration. The macvtap / vhost zero-copy capabilities allow the user to use those features when high performance is required. This feature improves performance for any Red Hat Enterprise Linux 6.x guest in the VEPA use case. This feature is introduced as a Technology Preview. UDP checksum optimization for KVM network drivers UDP checksum optimization eliminates the need for the guest to validate the checksum if it has been validated by host NICs. This feature speeds up UDP external-to-guest traffic on 10 GB Ethernet cards with Red Hat Enterprise Linux 6.2 guests and hosts. The UDP checksum optimization is implemented in the virtio-net driver. Improved I/O path performance when host slower than guest The Red Hat Enterprise Linux 6.2 KVM network driver has improved I/O path performance, with reduced virtual machine exits and interrupts, resulting in faster data delivery. This improvement enables you to run a faster guest on a slower host, without incurring any performance penalties. This enhancement is achieved by an enhanced virtio ring structure, and event index support in virtio and vhost-net . KVM Systems Management and usability improvements System monitoring via SNMP This feature provides KVM support for a stable technology that is already used in data centers with bare-metal systems. SNMP is the standard for monitoring and is extremely well understood as well as computationally efficient. System monitoring via SNMP in Red Hat Enterprise Linux 6.2 allows the KVM hosts to send SNMP traps on events so that hypervisor events can be communicated to the user via the standard SNMP protocol. This feature is provided through the addition of a new package: libvirt-snmp . This feature is introduced as a Technology Preview. Improved guest debugging capabilities Users who virtualize their data centers need a way of debugging when a guest OS becomes unresponsive and a crash dump has to be initiated. 
There are two methods heavily used with physical systems: Triggering a non-maskable interrupt (NMI) in the guest Sending SysRq sequences to the guest While these capabilities are provided directly with the KVM console, a number of users use KVM through the libvirt API and virsh , where these two features were missing. Red Hat Enterprise Linux 6.2 improves guest debugging capabilities across the KVM stack, thus allowing a user to trigger NMIs in guests and send SysRq key sequences to guests. Improved virtual machine boot up access Users who virtualize their data centers need to track the guest boot up process and display the entire BIOS and kernel boot up message from the start. The absence of this feature prevents users from making interactive use of the virsh console prior to boot up. A new package, sgabios , has been added to Red Hat Enterprise Linux 6.2 to provide this capability, along with some additions to qemu-kvm . Multi-processor (NUMA) Tuning Improvements Red Hat Enterprise Linux 6.2 adds tuning improvements to the libvirt API stack, resulting in improved out-of-the-box performance when performing SPECvirt measurements. Red Hat Enterprise Linux 6.2 is now able to pin the memory associated with a NUMA node when a virtual machine is created. USB enhancements The USB 2.0 emulation has been implemented for qemu-kvm . This is currently available only when using QEMU directly; libvirt support is planned for a future release. Remote Wakeup support has been added for the USB host controller. With the cooperation of the guest OS, it allows the frequent 1000 Hz polling to be stopped and the device to be put to sleep. This dramatically improves the power utilization and the CPU consumption of virtual machines with a USB mouse emulation (or a tablet) - one of the common devices that every virtual machine has. Xen improvements Memory ballooning Memory ballooning is now supported by Red Hat Enterprise Linux 6 paravirtualized Xen guests. Domain memory limit The memory limit for x86_64 domU PV guests has been increased to 128 GB: CONFIG_XEN_MAX_DOMAIN_MEMORY=128 . Time accounting The xen_sched_clock implementation (which returns the number of unstolen nanoseconds) has been replaced by the xen_clocksource_read implementation. Virtualization Documentation The Red Hat Enterprise Linux Virtualization Guide has been divided into several specific guides: Red Hat Enterprise Linux Virtualization Getting Started Guide Red Hat Enterprise Linux Virtualization Administration Guide Red Hat Enterprise Linux Virtualization Host Configuration and Guest Installation Guide spice-protocol The package spice-protocol has been upgraded to version 0.8.1, providing the following new features: Support for volume change Support for async guest I/O writes and interrupts Support for suspend (S3) related guest I/O writes Support for an interrupt indicating a guest bug Linux Containers Linux containers provide a flexible approach to application runtime containment on bare-metal systems without the need to fully virtualize the workload. Red Hat Enterprise Linux 6.2 provides application-level containers to separate and control the application resource usage policies via cgroups and namespaces. This release introduces basic management of the container life cycle by allowing creation, editing and deletion of containers via the libvirt API and the virt-manager GUI. Linux Containers are a Technology Preview. 
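As a rough illustration of the guest debugging capabilities described earlier in this chapter, the NMI and SysRq features are exposed through virsh commands similar to the following. This is a sketch only; the guest name rhel6-guest is an assumption, not a value from this document:

virsh inject-nmi rhel6-guest
virsh send-key rhel6-guest KEY_LEFTALT KEY_SYSRQ KEY_H

The first command injects a non-maskable interrupt into the guest; the second sends the Alt+SysRq+H key sequence, which asks the guest kernel to print its SysRq help text.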
Red Hat Enterprise Virtualization Hypervisor RPM multi-installable In order to allow side-by-side installs of the rhev-hypervisor package, configure Yum to make rhev-hypervisor an install-only package by editing the /etc/yum.conf file and adding the installonlypkgs option: This option needs to also include the default list of installonly packages which can be found in the yum.conf man page ( man yum.conf 5 ) in the installonlypkgs option section.
[ "[main] installonlypkgs=rhev-hypervisor" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/virtualization
6.2 Technical Notes
6.2 Technical Notes Red Hat Enterprise Linux 6 Detailed notes on the changes implemented in Red Hat Enterprise Linux 6.2 Edition 2 Red Hat Engineering Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/index
Managing Red Hat Process Automation Manager and KIE Server settings
Managing Red Hat Process Automation Manager and KIE Server settings Red Hat Process Automation Manager 7.13
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/index
Chapter 3. Preparing your system to deploy Red Hat Quay
Chapter 3. Preparing your system to deploy Red Hat Quay For a proof of concept Red Hat Quay deployment, you must configure port mapping, a database, and Redis prior to deploying the registry. Use the following procedures to prepare your system to deploy Red Hat Quay. 3.1. Configuring port mapping for Red Hat Quay You can use port mappings to expose ports on the host and then use these ports in combination with the host IP address or host name to navigate to the Red Hat Quay endpoint. Procedure Enter the following command to obtain your static IP address for your host system: $ ip a Example output --- link/ether 6c:6a:77:eb:09:f1 brd ff:ff:ff:ff:ff:ff inet 192.168.1.132/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp82s0 --- Add the IP address and a local hostname, for example, quay-server.example.com to your /etc/hosts file that will be used to reach the Red Hat Quay endpoint. You can confirm that the IP address and hostname have been added to the /etc/hosts file by entering the following command: $ cat /etc/hosts Example output 192.168.1.138 quay-server.example.com 3.2. Configuring the database Red Hat Quay requires a database for storing metadata. PostgreSQL is used throughout this document. For this deployment, a directory on the local file system to persist database data is used. Use the following procedure to set up a PostgreSQL database. Procedure In the installation folder, denoted here by the $QUAY variable, create a directory for the database data by entering the following command: $ mkdir -p $QUAY/postgres-quay Set the appropriate permissions by entering the following command: $ setfacl -m u:26:-wx $QUAY/postgres-quay Start the Postgres container, specifying the username, password, and database name and port, with the volume definition for database data: Ensure that the Postgres pg_trgm module is installed by running the following command: $ sudo podman exec -it postgresql-quay /bin/bash -c 'echo "CREATE EXTENSION IF NOT EXISTS pg_trgm" | psql -d quay -U postgres' Note The pg_trgm module is required for the Quay container. 3.3. Configuring Redis Redis is a key-value store that is used by Red Hat Quay for live builder logs. Use the following procedure to deploy the Redis container for the Red Hat Quay proof of concept. Procedure Start the Redis container, specifying the port and password, by entering the following command:
[ "ip a", "--- link/ether 6c:6a:77:eb:09:f1 brd ff:ff:ff:ff:ff:ff inet 192.168.1.132/24 brd 192.168.1.255 scope global dynamic noprefixroute wlp82s0 ---", "cat /etc/hosts", "192.168.1.138 quay-server.example.com", "mkdir -p $QUAY/postgres-quay", "setfacl -m u:26:-wx $QUAY/postgres-quay", "sudo podman run -d --rm --name postgresql-quay -e POSTGRESQL_USER=quayuser -e POSTGRESQL_PASSWORD=quaypass -e POSTGRESQL_DATABASE=quay -e POSTGRESQL_ADMIN_PASSWORD=adminpass -p 5432:5432 -v $QUAY/postgres-quay:/var/lib/pgsql/data:Z registry.redhat.io/rhel8/postgresql-13", "sudo podman exec -it postgresql-quay /bin/bash -c 'echo \"CREATE EXTENSION IF NOT EXISTS pg_trgm\" | psql -d quay -U postgres'", "sudo podman run -d --rm --name redis -p 6379:6379 -e REDIS_PASSWORD=strongpassword registry.redhat.io/rhel8/redis-6:1-110" ]
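Before continuing to the registry deployment, a quick sanity check of both containers can be useful. The following is a sketch only; it reuses the container names and credentials from the commands above and assumes the psql and redis-cli clients are present in the respective images:

$ sudo podman exec -it postgresql-quay psql -d quay -U postgres -c '\dx'
$ sudo podman exec -it redis redis-cli -a strongpassword ping

The first command lists the installed PostgreSQL extensions, where pg_trgm should appear; the second should return PONG if Redis is reachable with the configured password.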
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/proof_of_concept_-_deploying_red_hat_quay/preparing-system-deploy-quay
16.2. XML Representation of a Floating Disk
16.2. XML Representation of a Floating Disk Example 16.1. An XML representation of a disk device
[ "<disk id=\"ed7feafe-9aaf-458c-809a-ed789cdbd5b4\" href=\"/ovirt-engine/api/disks/ed7feafe-9aaf-458c-809a-ed789cdbd5b4\"> <link rel=\"statistics\" href=\"/ovirt-engine/api/disks/ed7feafe-9aaf-458c-809a-ed789cdbd5b4/statistics\"/> <storage_domains> <storage_domain id=\"fabe0451-701f-4235-8f7e-e20e458819ed\"/> </storage_domains> <size>10737418240</size> <type>system</type> <status> <state>ok</state> </status> <interface>virtio</interface> <format>raw</format> <bootable>true</bootable> <shareable>true</shareable> <lunStorage> <storage> <logical_unit id=\"lun1\"> </logical_unit> </storage> </lunStorage> </disk>" ]
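A representation like this is typically retrieved with a GET request against the disk's href shown above. The following is a sketch only; the Manager host name (rhvm.example.com), the admin@internal user and password, and the CA certificate path are assumptions rather than values from this example:

curl -X GET -H 'Accept: application/xml' -u admin@internal:password --cacert ca.crt https://rhvm.example.com/ovirt-engine/api/disks/ed7feafe-9aaf-458c-809a-ed789cdbd5b4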
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/xml_representation_of_a_floating_disk
Chapter 5. Configuring Streams for Apache Kafka
Chapter 5. Configuring Streams for Apache Kafka Use the Kafka configuration properties files to configure Streams for Apache Kafka. The properties file is in Java format, with each property on a separate line in the following format: Lines starting with # or ! are treated as comments and are ignored by Streams for Apache Kafka components. Values can be split into multiple lines by using \ directly before the newline/carriage return. After saving the changes in the properties file, you need to restart the Kafka node. In a multi-node environment, repeat the process on each node in the cluster. 5.1. Using standard Kafka configuration properties Use standard Kafka configuration properties to configure Kafka components. The properties provide options to control and tune the configuration of the following Kafka components: Brokers Topics Producer, consumer, and management clients Kafka Connect Kafka Streams Broker and client parameters include options to configure authorization, authentication and encryption. For further information on Kafka configuration properties and how to use the properties to tune your deployment, see the following guides: Kafka configuration properties Kafka configuration tuning 5.2. Loading configuration values from environment variables Use the Environment Variables Configuration Provider plugin to load configuration data from environment variables. You can use the Environment Variables Configuration Provider, for example, to load certificates or JAAS configuration from environment variables. You can use the provider to load configuration data for all Kafka components, including producers and consumers. Use the provider, for example, to provide the credentials for Kafka Connect connector configuration. Prerequisites Streams for Apache Kafka is installed on each host , and the configuration files are available. The Environment Variables Configuration Provider JAR file. The JAR file is available from the Streams for Apache Kafka archive . Procedure Add the Environment Variables Configuration Provider JAR file to the Kafka libs directory. Initialize the Environment Variables Configuration Provider in the configuration properties file of the Kafka component. For example, to initialize the provider for Kafka, add the configuration to the server.properties file. Configuration to enable the Environment Variables Configuration Provider config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider Add configuration to the properties file to load data from environment variables. Configuration to load data from an environment variable option=USD{env: <MY_ENV_VAR_NAME> } Use capitalized or upper-case environment variable naming conventions, such as MY_ENV_VAR_NAME . Save the changes. Restart the Kafka component. For information on restarting brokers in a multi-node cluster, see Section 3.7, "Performing a graceful rolling restart of Kafka brokers" . 5.3. Configuring Kafka Kafka uses properties files to store static configuration. The recommended location for the configuration files is ./config/kraft/ . The configuration files have to be readable by the Kafka user. Streams for Apache Kafka ships example configuration files that highlight various basic and advanced features of the product. 
They can be found under config/kraft/ in the Streams for Apache Kafka installation directory as follows: (default) config/kraft/server.properties for nodes running in combined mode config/kraft/broker.properties for nodes running as brokers config/kraft/controller.properties for nodes running as controllers This chapter explains the most important configuration options. 5.3.1. Listeners Listeners are used to connect to Kafka brokers. Each Kafka broker can be configured to use multiple listeners. Each listener requires a different configuration so it can listen on a different port or network interface. To configure listeners, edit the listeners property in the Kafka configuration properties file. Add listeners to the listeners property as a comma-separated list. Configure each property as follows: If <hostname> is empty, Kafka uses the java.net.InetAddress.getCanonicalHostName() class as the hostname. Example configuration for multiple listeners listeners=internal-1://:9092,internal-2://:9093,replication://:9094 When a Kafka client wants to connect to a Kafka cluster, it first connects to the bootstrap server , which is one of the cluster nodes. The bootstrap server provides the client with a list of all the brokers in the cluster, and the client connects to each one individually. The list of brokers is based on the configured listeners . Advertised listeners Optionally, you can use the advertised.listeners property to provide the client with a different set of listener addresses than those given in the listeners property. This is useful if additional network infrastructure, such as a proxy, is between the client and the broker, or an external DNS name is being used instead of an IP address. The advertised.listeners property is formatted in the same way as the listeners property. Example configuration for advertised listeners listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235 Note The names of the advertised listeners must match those listed in the listeners property. Inter-broker listeners Inter-broker listeners are used for communication between Kafka brokers. Inter-broker communication is required for: Coordinating workloads between different brokers Replicating messages between partitions stored on different brokers The inter-broker listener can be assigned to a port of your choice. When multiple listeners are configured, you can define the name of the inter-broker listener in the inter.broker.listener.name property of your broker configuration. Here, the inter-broker listener is named as REPLICATION : listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION Controller listeners Controller configuration is used to connect and communicate with the controller that coordinates the cluster and manages the metadata used to track the status of brokers and partitions. By default, communication between the controllers and brokers uses a dedicated controller listener. Controllers are responsible for coordinating administrative tasks, such as partition leadership changes, so one or more of these listeners is required. Specify listeners to use for controllers using the controller.listener.names property. You can specify a dynamic quorum of controllers using the controller.quorum.bootstrap.servers property. 
The quorum enables a leader-follower structure for administrative tasks, with the leader actively managing operations and followers as hot standbys, ensuring metadata consistency in memory and facilitating failover. listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.bootstrap.servers=localhost:9090 The format for the controller quorum is <hostname>:<port> . 5.3.2. Data logs Apache Kafka stores all records it receives from producers in logs. The logs contain the actual data, in the form of records, that Kafka needs to deliver. Note that these records differ from application log files, which detail the broker's activities. Log directories You can configure log directories using the log.dirs property in the server configuration properties file to store logs in one or multiple log directories. It should be set to /var/lib/kafka directory created during installation: Data log configuration For performance reasons, you can configure log.dirs to multiple directories and place each of them on a different physical device to improve disk I/O performance. For example: Configuration for multiple directories 5.3.3. Metadata log Controllers use a metadata log stored as a single-partition topic ( __cluster_metadata ) on every node. It records the state of the cluster, storing information on brokers, replicas, topics, and partitions, including the state of in-sync replicas and partition leadership. Metadata log directory You can configure the directory for storing the metadata log using the metadata.log.dir property. By default, if this property is not set, Kafka uses the log.dirs property to determine the storage directory for both data logs and metadata logs. The metadata log is placed in the first directory specified for log.dirs . Isolating metadata operations from data operations can improve manageability and potentially lead to performance gains. To set a specific directory for the metadata log, include the metadata.log.dir property in the server configuration properties file. For example: Metadata log configuration Note Kafka tools are available for inspecting and debugging the metadata log. For more information, see the Apache Kafka documentation . 5.3.4. Node ID Node ID is a unique identifier for each node (broker or controller) in the cluster. You can assign an integer greater than or equal to 0 as node ID. The node ID is used to identify the nodes after restarts or crashes and it is therefore important that the ID is stable and does not change over time. The node ID is configured in the Kafka configuration properties file: 5.4. Transitioning to separate broker and controller roles This procedure describes how to transition to using nodes with separate roles. If your Kafka cluster is using nodes with dual controller and broker roles, you can transition to using nodes with separate roles. To do this, add new controllers, scale down the controllers on the dual-role nodes, and then switch the dual-role nodes to broker-only. In this example, we update three dual-role nodes. Prerequisites Streams for Apache Kafka (minimum 2.9) is installed on each host , and the configuration files are available. The controller quorum is configured for dynamic scaling using the controller.quorum.bootstrap.servers property. Cruise Control is installed. A backup of the cluster is recommended. Procedure Add a quorum of three new controller-only nodes. Integrate the controllers into the controller quorum by updating the controller.quorum.bootstrap.servers property. 
For more information, see Section 14.2, "Adding new controllers" . Using the kafka-metadata-quorum.sh tool, remove the dual-role controllers from the controller quorum. For more information, see Section 14.3, "Removing controllers" . For each dual-role node, and one at a time: Stop the dual-role node: ./bin/kafka-server-stop.sh Configure the dual-role node to serve as a broker node only by switching process.roles=broker, controller in the node configuration to process.roles=broker . Example broker configuration node.id=1 process.roles=broker log.dirs=/var/lib/kafka listeners=PLAINTEXT://0.0.0.0:9092 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090, localhost:9091, localhost:9092 inter.broker.listener.name=PLAINTEXT num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 #... Restart the broker node that was previously serving a dual role: ./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties 5.5. Transitioning to dual-role nodes This procedure describes how to transition from using separate nodes with broker-only and controller-only roles to using dual-role nodes. If your Kafka cluster is using dedicated controller and broker nodes, you can transition to using single nodes with both roles. To do this, add dual-role configuration to the nodes, then rebalance the cluster to move partition replicas to the nodes that previously served as controllers only. Note A dual-role configuration is suitable for development or testing. In a typical production environment, use dedicated broker and controller nodes. Prerequisites Streams for Apache Kafka (minimum 2.9) is installed on each host , and the configuration files are available. The controller quorum is configured for dynamic scaling using the controller.quorum.bootstrap.servers property. Cruise Control is installed. A backup of the cluster is recommended. Procedure For each controller node, and one at a time: Stop the controller node: ./bin/kafka-server-stop.sh Configure the controller-only node to serve as a dual-role node by adding broker-specific configuration. At a minimum, do the following: Switch process.roles=controller to process.roles=broker, controller . Add or update the broker log directory using log.dirs . Add a listener for the broker to handle client requests. In this example, PLAINTEXT://:9092 is added. Update mappings between listener names and security protocols using listener.security.protocol.map . Configure a listener for inter-broker communication using inter.broker.listener.name . Example dual-role configuration process.roles=broker, controller node.id=1 log.dirs=/var/lib/kafka metadata.log.dir=/var/lib/kafka-metadata listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 inter.broker.listener.name=PLAINTEXT num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 # ... Restart the node that is now operating in a dual role: ./bin/kafka-server-start.sh -daemon ./config/kraft/dual-role.properties Use the Cruise Control remove_broker endpoint to reassign partition replicas from broker-only nodes to the nodes that now serve as dual-role nodes. The reassignment can take some time depending on the number of topics and partitions in the cluster. 
For more information, see Section 15.7, "Generating optimization proposals" . Unregister the broker nodes: ./bin/kafka-cluster.sh unregister \ --bootstrap-server <broker_host>:<port> \ --id <node_id_number> For more information, see Section 14.4, "Unregistering nodes after scale-down operations" . Stop the broker nodes: ./bin/kafka-server-stop.sh
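The Cruise Control step above can also be driven from the command line. The following is a minimal sketch rather than the documented procedure: the host, port, and broker IDs are placeholders, and it assumes that the remove_broker and user_tasks endpoints of the Cruise Control REST API are reachable from the host where you run it.

# Dry run first to review the generated reassignment proposal (the broker IDs are illustrative).
curl -X POST "http://<cruise_control_host>:<port>/kafkacruisecontrol/remove_broker?brokerid=3,4,5&dryrun=true"
# Execute the reassignment once the proposal looks reasonable.
curl -X POST "http://<cruise_control_host>:<port>/kafkacruisecontrol/remove_broker?brokerid=3,4,5&dryrun=false"
# Poll the task list and wait for the rebalance to finish before unregistering and stopping the brokers.
curl "http://<cruise_control_host>:<port>/kafkacruisecontrol/user_tasks"

Waiting for the reassignment task to complete before running kafka-cluster.sh unregister avoids removing brokers that still hold in-flight replicas.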
[ "<option> = <value>", "This is a comment", "sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username=\"bob\" password=\"bobs-password\";", "config.providers.env.class=org.apache.kafka.common.config.provider.EnvVarConfigProvider", "option=USD{env: <MY_ENV_VAR_NAME> }", "<listener_name>://<hostname>:<port>", "listeners=internal-1://:9092,internal-2://:9093,replication://:9094", "listeners=internal-1://:9092,internal-2://:9093 advertised.listeners=internal-1://my-broker-1.my-domain.com:1234,internal-2://my-broker-1.my-domain.com:1235", "listeners=REPLICATION://0.0.0.0:9091 inter.broker.listener.name=REPLICATION", "listeners=CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER controller.quorum.bootstrap.servers=localhost:9090", "log.dirs=/var/lib/kafka", "log.dirs=/var/lib/kafka1,/var/lib/kafka2,/var/lib/kafka3", "log.dirs=/var/lib/kafka metadata.log.dir=/var/lib/kafka-metadata", "node.id=1", "./bin/kafka-server-stop.sh", "node.id=1 process.roles=broker log.dirs=/var/lib/kafka listeners=PLAINTEXT://0.0.0.0:9092 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090, localhost:9091, localhost:9092 inter.broker.listener.name=PLAINTEXT num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2 #", "./bin/kafka-server-start.sh -daemon ./config/kraft/server.properties", "./bin/kafka-server-stop.sh", "process.roles=broker, controller node.id=1 log.dirs=/var/lib/kafka metadata.log.dir=/var/lib/kafka-metadata listeners=PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9090 controller.listener.names=CONTROLLER listener.security.protocol.map=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT controller.quorum.bootstrap.servers=localhost:9090 inter.broker.listener.name=PLAINTEXT num.partitions=1 auto.create.topics.enable=false default.replication.factor=3 min.insync.replicas=2", "./bin/kafka-server-start.sh -daemon ./config/kraft/dual-role.properties", "./bin/kafka-cluster.sh unregister --bootstrap-server <broker_host>:<port> --id <node_id_number>", "./bin/kafka-server-stop.sh" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/assembly-configuring-amq-streams-str
Chapter 21. Red Hat Quay garbage collection
Chapter 21. Red Hat Quay garbage collection Red Hat Quay includes automatic and continuous image garbage collection. Garbage collection ensures efficient use of resources for active objects by removing objects that occupy sizeable amounts of disk space, such as dangling or untagged images, repositories, and blobs, including layers and manifests. Garbage collection performed by Red Hat Quay can reduce downtime in your organization's environment. 21.1. Red Hat Quay garbage collection in practice Currently, all garbage collection happens discreetly, and there are no commands to manually run garbage collection. Red Hat Quay provides metrics that track the status of the different garbage collection workers. For namespace and repository garbage collection, the progress is tracked based on the size of their respective queues. Namespace and repository garbage collection workers require a global lock to work. As a result, and for performance reasons, only one worker runs at a time. Note Red Hat Quay shares blobs between namespaces and repositories in order to conserve disk space. For example, if the same image is pushed 10 times, only one copy of that image will be stored. It is possible that tags can share their layers with different images already stored somewhere in Red Hat Quay. In that case, blobs will stay in storage, because deleting shared blobs would make other images unusable. Blob expiration is independent of the time machine. If you push a tag to Red Hat Quay and the time machine is set to 0 seconds, and then you delete a tag immediately, garbage collection deletes the tag and everything related to that tag, but will not delete the blob storage until the blob expiration time is reached. Garbage collecting tagged images works differently than garbage collection on namespaces or repositories. Rather than having a queue of items to work with, the garbage collection workers for tagged images actively search for a repository with inactive or expired tags to clean up. Each instance of garbage collection workers will grab a repository lock, which results in one worker per repository. Note In Red Hat Quay, inactive or expired tags are manifests without tags because the last tag was deleted or it expired. The manifest stores information about how the image is composed and stored in the database for each individual tag. When a tag is deleted and the allotted time from Time Machine has been met, Red Hat Quay garbage collects the blobs that are not connected to any other manifests in the registry. If a particular blob is connected to a manifest, then it is preserved in storage and only its connection to the manifest that is being deleted is removed. Expired images will disappear after the allotted time, but are still stored in Red Hat Quay. The time at which an image is completely deleted, or collected, depends on the Time Machine setting of your organization. The default time for garbage collection is 14 days unless otherwise specified. Until that time, tags can still point to expired or deleted images. For each type of garbage collection, Red Hat Quay provides metrics for the number of rows per table deleted by each garbage collection worker. The following image shows an example of how Red Hat Quay monitors garbage collection with the same metrics: 21.1.1. Measuring storage reclamation Red Hat Quay does not have a way to track how much space is freed up by garbage collection. Currently, the best indicator is to check how many blobs have been deleted in the provided metrics.
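One practical way to do this is to watch the garbage collection counters exposed over the registry's Prometheus endpoint before and after a deletion. The commands below are a sketch only: the metrics port (9091) and the use of curl and grep are assumptions about your deployment, and the counter name comes from the metrics listed later in this chapter.

# Snapshot the deleted-blob counter, delete a tag, wait out the Time Machine window, then compare.
curl -s http://<quay_host>:9091/metrics | grep quay_gc_storage_blobs_deleted_total
# Run the same command again after garbage collection has run; the difference approximates how many blobs were reclaimed.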
Note The UploadedBlob table in the Red Hat Quay metrics tracks the various blobs that are associated with a repository. When a blob is uploaded, it will not be garbage collected before the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter. This is to avoid prematurely deleting blobs that are part of an ongoing push. For example, if garbage collection is set to run often, and a tag is deleted in the span of less than one hour, then it is possible that the associated blobs will not get cleaned up immediately. Instead, and assuming that the time designated by the PUSH_TEMP_TAG_EXPIRATION_SEC parameter has passed, the associated blobs will be removed the next time garbage collection is triggered to run by another expired tag on the same repository. 21.2. Garbage collection configuration fields The following configuration fields are available to customize what is garbage collected, and the frequency at which garbage collection occurs: Name Description Schema FEATURE_GARBAGE_COLLECTION Whether garbage collection is enabled for image tags. Defaults to true . Boolean FEATURE_NAMESPACE_GARBAGE_COLLECTION Whether garbage collection is enabled for namespaces. Defaults to true . Boolean FEATURE_REPOSITORY_GARBAGE_COLLECTION Whether garbage collection is enabled for repositories. Defaults to true . Boolean GARBAGE_COLLECTION_FREQUENCY The frequency, in seconds, at which the garbage collection worker runs. Affects only garbage collection workers. Defaults to 30 seconds. String PUSH_TEMP_TAG_EXPIRATION_SEC The number of seconds that blobs will not be garbage collected after being uploaded. This feature prevents garbage collection from cleaning up blobs that are not referenced yet, but still used as part of an ongoing push. String TAG_EXPIRATION_OPTIONS List of valid tag expiration values. String DEFAULT_TAG_EXPIRATION Tag expiration time for time machine. String CLEAN_BLOB_UPLOAD_FOLDER Automatically cleans stale blobs left over from an S3 multipart upload. By default, blob files older than two days are cleaned up every hour. Defaults to true . Boolean 21.3. Disabling garbage collection The garbage collection features for image tags, namespaces, and repositories are stored in the config.yaml file. These features default to true . In rare cases, you might want to disable garbage collection, for example, to control when garbage collection is performed. You can disable garbage collection by setting the GARBAGE_COLLECTION features to false . When disabled, dangling or untagged images, repositories, namespaces, layers, and manifests are not removed. This might increase the downtime of your environment. Note There is no command to manually run garbage collection. Instead, you would disable, and then re-enable, the garbage collection feature. 21.4. Garbage collection and quota management Red Hat Quay introduced quota management in 3.7. With quota management, users have the ability to report storage consumption and to contain registry growth by establishing configured storage quota limits. As of Red Hat Quay 3.7, garbage collection reclaims memory that was allocated to images, repositories, and blobs after deletion. Because the garbage collection feature reclaims memory after deletion, there is a discrepancy between what is stored in an environment's disk space and what quota management is reporting as the total consumption. There is currently no workaround for this issue. 21.5.
Garbage collection in practice Use the following procedure to check your Red Hat Quay logs to ensure that garbage collection is working. Procedure Enter the following command to ensure that garbage collection is properly working: USD sudo podman logs <container_id> Example output: gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job "GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], run at: 2022-11-14 18:47:22 UTC)" executed successfully Delete an image tag. Enter the following command to ensure that the tag was deleted: USD podman logs quay-app Example output: gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] "DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0" 204 0 "http://quay-server.example.com/repository/quayadmin/busybox?tab=tags" "Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0" 21.6. Red Hat Quay garbage collection metrics The following metrics show how many resources have been removed by garbage collection. These metrics show how many times the garbage collection workers have run and how many namespaces, repositories, and blobs were removed. Metric name Description quay_gc_iterations_total Number of iterations by the GCWorker quay_gc_namespaces_purged_total Number of namespaces purged by the NamespaceGCWorker quay_gc_repos_purged_total Number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker quay_gc_storage_blobs_deleted_total Number of storage blobs deleted Sample metrics output # TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189714e+09 ... # HELP quay_gc_iterations_total number of iterations by the GCWorker # TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189433e+09 ... # HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker # TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 .... # TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.631782319018925e+09 ... # HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker # TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ... # TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 1.6317823190189059e+09 ... 
# HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted # TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host="example-registry-quay-app-6df87f7b66-9tfn6",instance="",job="quay",pid="208",process_name="secscan:application"} 0 ...
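For reference, the garbage collection fields described in Section 21.2 live in Quay's config.yaml. The snippet below is an illustrative sketch, not a recommended configuration: the values mirror the documented defaults where they are stated, while the PUSH_TEMP_TAG_EXPIRATION_SEC and DEFAULT_TAG_EXPIRATION values are assumptions to replace with your own policy.

# Write the illustrative settings to a scratch file, then merge them into config.yaml by hand.
cat > gc-settings.yaml <<'EOF'
FEATURE_GARBAGE_COLLECTION: true
FEATURE_NAMESPACE_GARBAGE_COLLECTION: true
FEATURE_REPOSITORY_GARBAGE_COLLECTION: true
GARBAGE_COLLECTION_FREQUENCY: "30"
PUSH_TEMP_TAG_EXPIRATION_SEC: "3600"
DEFAULT_TAG_EXPIRATION: 2w
CLEAN_BLOB_UPLOAD_FOLDER: true
EOF

Setting the three FEATURE_*_GARBAGE_COLLECTION flags to false is how garbage collection is disabled, as described in Section 21.3; restart Quay after changing the configuration.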
[ "sudo podman logs <container_id>", "gcworker stdout | 2022-11-14 18:46:52,458 [63] [INFO] [apscheduler.executors.default] Job \"GarbageCollectionWorker._garbage_collection_repos (trigger: interval[0:00:30], next run at: 2022-11-14 18:47:22 UTC)\" executed successfully", "podman logs quay-app", "gunicorn-web stdout | 2022-11-14 19:23:44,574 [233] [INFO] [gunicorn.access] 192.168.0.38 - - [14/Nov/2022:19:23:44 +0000] \"DELETE /api/v1/repository/quayadmin/busybox/tag/test HTTP/1.0\" 204 0 \"http://quay-server.example.com/repository/quayadmin/busybox?tab=tags\" \"Mozilla/5.0 (X11; Linux x86_64; rv:102.0) Gecko/20100101 Firefox/102.0\"", "TYPE quay_gc_iterations_created gauge quay_gc_iterations_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189714e+09 HELP quay_gc_iterations_total number of iterations by the GCWorker TYPE quay_gc_iterations_total counter quay_gc_iterations_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_namespaces_purged_created gauge quay_gc_namespaces_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189433e+09 HELP quay_gc_namespaces_purged_total number of namespaces purged by the NamespaceGCWorker TYPE quay_gc_namespaces_purged_total counter quay_gc_namespaces_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 . TYPE quay_gc_repos_purged_created gauge quay_gc_repos_purged_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.631782319018925e+09 HELP quay_gc_repos_purged_total number of repositories purged by the RepositoryGCWorker or NamespaceGCWorker TYPE quay_gc_repos_purged_total counter quay_gc_repos_purged_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0 TYPE quay_gc_storage_blobs_deleted_created gauge quay_gc_storage_blobs_deleted_created{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 1.6317823190189059e+09 HELP quay_gc_storage_blobs_deleted_total number of storage blobs deleted TYPE quay_gc_storage_blobs_deleted_total counter quay_gc_storage_blobs_deleted_total{host=\"example-registry-quay-app-6df87f7b66-9tfn6\",instance=\"\",job=\"quay\",pid=\"208\",process_name=\"secscan:application\"} 0" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/garbage-collection
Index
Index , Upgrading From Red Hat Enterprise Linux 6 High Availability Add-On , Pacemaker Overview , Pacemaker Architecture Components , Pacemaker Configuration and Management Tools , Upgrading From Red Hat Enterprise Linux 6 High Availability Add-On C cluster fencing, Fencing Overview quorum, Quorum Overview F fencing, Fencing Overview H High Availability Add-On difference between Release 6 and 7, Overview of Differences Between Releases Q quorum, Quorum Overview
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/ix01
8.4. Kernel Same-page Merging (KSM)
8.4. Kernel Same-page Merging (KSM) Kernel same-page Merging (KSM), used by the KVM hypervisor, allows KVM guests to share identical memory pages. These shared pages are usually common libraries or other identical, high-use data. KSM allows for greater guest density of identical or similar guest operating systems by avoiding memory duplication. The concept of shared memory is common in modern operating systems. For example, when a program is first started, it shares all of its memory with the parent program. When either the child or parent program tries to modify this memory, the kernel allocates a new memory region, copies the original contents and allows the program to modify this new region. This is known as copy on write. KSM is a Linux feature which uses this concept in reverse. KSM enables the kernel to examine two or more already running programs and compare their memory. If any memory regions or pages are identical, KSM reduces multiple identical memory pages to a single page. This page is then marked copy on write. If the contents of the page are modified by a guest virtual machine, a new page is created for that guest. This is useful for virtualization with KVM. When a guest virtual machine is started, it only inherits the memory from the host qemu-kvm process. Once the guest is running, the contents of the guest operating system image can be shared when guests are running the same operating system or applications. KSM allows KVM to request that these identical guest memory regions be shared. KSM provides enhanced memory speed and utilization. With KSM, common process data is stored in cache or in main memory. This reduces cache misses for the KVM guests, which can improve performance for some applications and operating systems. Secondly, sharing memory reduces the overall memory usage of guests, which allows for higher densities and greater utilization of resources. Note In Red Hat Enterprise Linux 6.7 and later, KSM is NUMA aware. This allows it to take NUMA locality into account while coalescing pages, thus preventing performance drops related to pages being moved to a remote node. Red Hat recommends avoiding cross-node memory merging when KSM is in use. If KSM is in use, change the /sys/kernel/mm/ksm/merge_across_nodes tunable to 0 to avoid merging pages across NUMA nodes. This can be done with the virsh node-memory-tune --shm-merge-across-nodes 0 command. Kernel memory accounting statistics can eventually contradict each other after large amounts of cross-node merging. As such, numad can become confused after the KSM daemon merges large amounts of memory. If your system has a large amount of free memory, you may achieve higher performance by turning off and disabling the KSM daemon. Refer to Chapter 9, NUMA for more information on NUMA. Important Ensure the swap size is sufficient for the committed RAM even without taking KSM into account. KSM reduces the RAM usage of identical or similar guests. Overcommitting guests with KSM without sufficient swap space may be possible, but is not recommended because guest virtual machine memory use can result in pages becoming unshared. Red Hat Enterprise Linux uses two separate methods for controlling KSM: The ksm service starts and stops the KSM kernel thread. The ksmtuned service controls and tunes the ksm service, dynamically managing same-page merging. ksmtuned starts the ksm service and stops the ksm service if memory sharing is not necessary.
When new guests are created or destroyed, ksmtuned must be instructed with the retune parameter to run. Both of these services are controlled with the standard service management tools. Note KSM is off by default on Red Hat Enterprise Linux 6.7. 8.4.1. The KSM Service The ksm service is included in the qemu-kvm package. When the ksm service is not started, Kernel same-page merging (KSM) shares only 2000 pages. This default value provides limited memory-saving benefits. When the ksm service is started, KSM will share up to half of the host system's main memory. Start the ksm service to enable KSM to share more memory. The ksm service can be added to the default startup sequence. Make the ksm service persistent with the systemctl command.
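A quick way to see whether KSM is actually merging pages, and to apply the NUMA recommendation above, is to read the KSM counters under sysfs. This is a sketch: the paths are the standard KSM tunables, but verify them on your kernel, and note that the kernel refuses to change merge_across_nodes while merged pages exist.

# Inspect how much merging KSM has done so far.
for f in run pages_shared pages_sharing pages_unshared full_scans; do
    printf '%-16s %s\n' "$f" "$(cat /sys/kernel/mm/ksm/$f)"
done
# On NUMA hosts, disable cross-node merging (equivalent to the virsh node-memory-tune call above).
echo 0 > /sys/kernel/mm/ksm/merge_across_nodes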
[ "systemctl start ksm Starting ksm: [ OK ]", "systemctl enable ksm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/chap-ksm
User guide
User guide Red Hat OpenShift Dev Spaces 3.16 Using Red Hat OpenShift Dev Spaces 3.16 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/user_guide/index
Chapter 1. Installing a cluster on any platform
Chapter 1. Installing a cluster on any platform In OpenShift Container Platform version 4.12, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 1.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 8 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 1.3.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 1.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 1.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 1.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. 
See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 1.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 1.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 1.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 1.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 1.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 
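The port matrix above maps directly onto host firewall rules. The following firewalld sketch is illustrative only: it assumes firewalld is in use (for example, on RHEL compute machines or an intermediate firewall host; RHCOS nodes do not normally need this step), it covers only a subset of the listed ports, and you should adapt it to your own security policy.

# All-machine-to-all-machine ports from Table 1.3 (subset; add ICMP and IPsec rules if you use them).
firewall-cmd --permanent --add-port=1936/tcp --add-port=9000-9999/tcp --add-port=10250-10259/tcp --add-port=10256/tcp
firewall-cmd --permanent --add-port=4789/udp --add-port=6081/udp --add-port=9000-9999/udp --add-port=123/udp
firewall-cmd --permanent --add-port=30000-32767/tcp --add-port=30000-32767/udp
# Control plane machines additionally need the Kubernetes API and etcd ports.
firewall-cmd --permanent --add-port=6443/tcp --add-port=2379-2380/tcp
firewall-cmd --reload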
Additional resources Configuring chrony time service 1.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. 
Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 
5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application ingress load balancing infrastructure. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 1.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. 
A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
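The first preparation step below configures DHCP reservations for the cluster nodes. As a concrete illustration, here is a hedged sketch for ISC dhcpd: the MAC address is a placeholder, the IP address and names reuse the examples from this section, and your DHCP server may use a different syntax entirely.

# One reservation per node: persistent address, DNS server, and hostname (illustrative values).
cat >> /etc/dhcp/dhcpd.conf <<'EOF'
option domain-name-servers 192.168.1.5;
host control-plane0 {
  hardware ethernet 52:54:00:00:00:01;   # placeholder MAC for the node's NIC
  fixed-address 192.168.1.97;            # matches the sample zone file above
  option host-name "control-plane0.ocp4.example.com";
}
EOF
systemctl restart dhcpd

Repeat the host block for the bootstrap machine and for each control plane and compute node.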
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 1.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH.
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 1.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. 
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. 
For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 1.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. 
In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . 
This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. 
coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 1.11.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
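If you want to check all three node types at once, you can combine the digest and availability checks from the preceding steps into a few one-line commands. The following sketch reuses the placeholders from this procedure ( <installation_directory> and <HTTP_server> ); the added -f flag makes curl return a failure status on HTTP errors such as 404 instead of succeeding silently:
# Print the SHA512 digests that are passed to the --ignition-hash option later in this procedure
USD sha512sum <installation_directory>/bootstrap.ign <installation_directory>/master.ign <installation_directory>/worker.ign
# Confirm that each uploaded file is reachable on the HTTP server
USD curl -kfsS -o /dev/null http://<HTTP_server>/bootstrap.ign && echo bootstrap.ign OK
USD curl -kfsS -o /dev/null http://<HTTP_server>/master.ign && echo master.ign OK
USD curl -kfsS -o /dev/null http://<HTTP_server>/worker.ign && echo worker.ign OK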
Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. 
Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. 
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE build with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP Server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. 
Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.3. Advanced RHCOS installation configuration A key benefit for manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/sda Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system. 
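For example, a live-shell session that sets a static IPv4 address with nmcli before installing might look like the following sketch. The connection name eno1 , the addresses, and the worker Ignition URL are illustrative values; run nmcli connection show first to find the actual connection name on your hardware:
# Assign a static IPv4 configuration to the assumed connection eno1 and reapply it
USD sudo nmcli connection modify eno1 ipv4.method manual ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
USD sudo nmcli connection up eno1
# Install to disk and copy the live networking configuration into the installed system
USD sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda
Because --copy-network copies only the keyfiles under /etc/NetworkManager/system-connections , any static hostname must be configured separately on the installed system.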
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 1.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. 
You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/sda The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/sda This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 1.11.3.4.1. 
Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=name[:network_interfaces][:options] name is the bonding device name ( bond0 ), network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Configuring VLANs on bonded interfaces Optional: You can configure VLANs on bonded interfaces by using the vlan= parameter. Use the following example to configure a bonded interface with a VLAN and to use DHCP: ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Use the following example to configure the bonded interface with a VLAN and to use a static IP address: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0 Using network teaming Optional: You can use network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . 
Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 1.11.3.4.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 1.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Install RHCOS to the target device. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i , --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. --network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>... Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. 
--post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o , --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 1.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. 
The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
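If the wait-for bootstrap-complete command times out or reports errors, you can inspect the bootstrap progress directly before continuing. The following is a minimal troubleshooting sketch only; it assumes SSH access as the core user with the key you provided in the install configuration, and it reuses the example bootstrap hostname and installation directory placeholder from this document, so substitute the values for your environment:
ssh core@bootstrap.ocp4.example.com journalctl -b -f -u release-image.service -u bootkube.service
./openshift-install gather bootstrap --dir <installation_directory>
The journalctl command follows the bootstrap services that render and apply the control plane manifests, and the gather command collects bootstrap and control plane logs into a local archive that you can review or attach to a support case.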
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 1.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Configure the Operators that are not available. 1.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 1.15.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 1.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 1.15.3.1. 
Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 1.15.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). 
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 1.18. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage .
[ "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF", "USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF", "global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s", "dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1", "api.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>", "api-int.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>", "random.apps.ocp4.example.com. 
604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>", "console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5", "dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>", "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96", "dig +noall +answer @<nameserver_ip> -x 192.168.1.5", "5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2", "dig +noall +answer @<nameserver_ip> -x 192.168.1.96", "96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "compute: - name: worker platform: {} replicas: 0", "./openshift-install create manifests --dir <installation_directory> 1", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". 
├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "sha512sum <installation_directory>/bootstrap.ign", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep '\\.iso[^.]'", "\"location\": \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",", "sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2", "sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "curl -k http://<HTTP_server>/bootstrap.ign 1", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa", "openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'", "\"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.12/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"", "DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda 
coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3", "kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot", "menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }", "Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied", "sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/sda", "openshift-install create manifests --dir <installation_directory>", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir <installation_directory> 1", ". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/sda", "coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/sda", "coreos.inst.save_partlabel=data*", "coreos.inst.save_partindex=5-", "coreos.inst.save_partindex=6", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=::10.10.10.254::::", "rd.route=20.20.20.0/24:20.20.20.254:enp2s0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none", "ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0", "ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0", "nameserver=1.1.1.1 nameserver=8.8.8.8", "bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp", "bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none", "ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0", "team=team0:em1,em2 ip=team0:dhcp", "./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2", "INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 
INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resources found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge 
--patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "watch -n5 oc get clusteroperators", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize", "oc get pods --all-namespaces", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m", "oc logs <pod_name> -n <namespace> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_any_platform/installing-platform-agnostic
4.4.3.2. The sudo Command
4.4.3.2. The sudo Command The sudo command offers another approach to giving users administrative access. When trusted users precede an administrative command with sudo , they are prompted for their own password. Then, once authenticated and assuming that the command is permitted, the administrative command is executed as if by the root user. The basic format of the sudo command is as follows: In the above example, <command> would be replaced by a command normally reserved for the root user, such as mount . Important Users of the sudo command should take extra care to log out before walking away from their machines, since users who have recently authenticated with sudo can run it again without being asked for a password for a five-minute period. This setting can be altered via the configuration file, /etc/sudoers . The sudo command allows for a high degree of flexibility. For instance, only users listed in the /etc/sudoers configuration file are allowed to use the sudo command and the command is executed in the user's shell, not a root shell. This means the root shell can be completely disabled, as shown in Section 4.4.1, "Allowing Root Access" . The sudo command also provides a comprehensive audit trail. Each successful authentication is logged to the file /var/log/messages and the command issued along with the issuer's user name is logged to the file /var/log/secure . Another advantage of the sudo command is that an administrator can allow different users access to specific commands based on their needs. Administrators wanting to edit the sudo configuration file, /etc/sudoers , should use the visudo command. To give someone full administrative privileges, type visudo and add a line similar to the following in the user privilege specification section: This example states that the user, juan , can use sudo from any host and execute any command. The example below illustrates the granularity possible when configuring sudo : This example states that any member of the users group can issue the command /sbin/shutdown -h now as long as it is issued from the console. The man page for sudoers has a detailed listing of options for this file.
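As a further illustration of that granularity, an administrator can delegate only a specific set of commands. The following lines are a sketch only — the SERVICES alias name, the operator1 user, and the chosen command paths are hypothetical and not part of the examples above — and would be added with the visudo command:
Cmnd_Alias SERVICES = /sbin/service, /sbin/chkconfig
operator1 ALL = SERVICES
With these lines in place, operator1 can run only the service and chkconfig commands through sudo , and each use is still logged to /var/log/secure as described above.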
[ "sudo <command>", "juan ALL=(ALL) ALL", "%users localhost=/sbin/shutdown -h now" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s3-wstation-privileges-limitroot-sudo
3.4. Virtualized Hardware Devices
3.4. Virtualized Hardware Devices Virtualization on Red Hat Enterprise Linux 7 allows virtual machines to use the host's physical hardware as three distinct types of devices: Virtualized and emulated devices Paravirtualized devices Physically shared devices These hardware devices all appear as being physically attached to the virtual machine but the device drivers work in different ways. 3.4.1. Virtualized and Emulated Devices KVM implements many core devices for virtual machines as software. These emulated hardware devices are crucial for virtualizing operating systems. Emulated devices are virtual devices which exist entirely in software. In addition, KVM provides emulated drivers. These form a translation layer between the virtual machine and the Linux kernel (which manages the source device). The device level instructions are completely translated by the KVM hypervisor. Any device of the same type (storage, network, keyboard, or mouse) that is recognized by the Linux kernel can be used as the backing source device for the emulated drivers. Virtual CPUs (vCPUs) On Red Hat Enterprise Linux 7.2 and above, the host system can have up to 240 virtual CPUs (vCPUs) that can be presented to guests for use, regardless of the number of host CPUs. This is up from 160 in Red Hat Enterprise Linux 7.0. Emulated system components The following core system components are emulated to provide basic system functions: Intel i440FX host PCI bridge PIIX3 PCI to ISA bridge PS/2 mouse and keyboard EvTouch USB graphics tablet PCI UHCI USB controller and a virtualized USB hub Emulated serial ports EHCI controller, virtualized USB storage and a USB mouse USB 3.0 xHCI host controller (Technology Preview in Red Hat Enterprise Linux 7.3) Emulated storage drivers Storage devices and storage pools can use emulated drivers to attach storage devices to virtual machines. The guest uses an emulated storage driver to access the storage pool. Note that like all virtual devices, the storage drivers are not storage devices. The drivers are used to attach a backing storage device, file or storage pool volume to a virtual machine. The backing storage device can be any supported type of storage device, file, or storage pool volume. The emulated IDE driver KVM provides two emulated PCI IDE interfaces. An emulated IDE driver can be used to attach any combination of up to four virtualized IDE hard disks or virtualized IDE CD-ROM drives to each virtual machine. The emulated IDE driver is also used for virtualized CD-ROM and DVD-ROM drives. The emulated floppy disk drive driver The emulated floppy disk drive driver is used for creating virtualized floppy drives. Emulated sound devices An emulated (Intel) HDA sound device, intel-hda , is supported in the following guest operating systems: Red Hat Enterprise Linux 7, for the AMD64 and Intel 64 architecture Red Hat Enterprise Linux 4, 5, and 6, for the 32-bit AMD and Intel architecture and the AMD64 and Intel 64 architecture Note The following emulated sound device is also available, but is not recommended due to compatibility issues with certain guest operating systems: ac97 , an emulated Intel 82801AA AC97 Audio compatible sound card Emulated graphics cards The following emulated graphics cards are provided. 
A Cirrus CLGD 5446 PCI VGA card A standard VGA graphics card with Bochs VESA extensions (hardware level, including all non-standard modes) Guests can connect to these devices with the Simple Protocol for Independent Computing Environments (SPICE) protocol or with the Virtual Network Computing (VNC) system. Emulated network devices The following two emulated network devices are provided: The e1000 device emulates an Intel E1000 network adapter (Intel 82540EM, 82573L, 82544GC). The rtl8139 device emulates a Realtek 8139 network adapter. Emulated watchdog devices A watchdog can be used to automatically reboot a virtual machine when the machine becomes overloaded or unresponsive. Red Hat Enterprise Linux 7 provides the following emulated watchdog devices: i6300esb , an emulated Intel 6300 ESB PCI watchdog device. It is supported in guest operating system Red Hat Enterprise Linux versions 6.0 and above, and is the recommended device to use. ib700 , an emulated iBase 700 ISA watchdog device. The ib700 watchdog device is only supported in guests using Red Hat Enterprise Linux 6.2 and above. Both watchdog devices are supported on 32-bit and 64-bit AMD and Intel architectures for guest operating systems Red Hat Enterprise Linux 6.2 and above. 3.4.2. Paravirtualized Devices Paravirtualization provides a fast and efficient means of communication for guests to use devices on the host machine. KVM provides paravirtualized devices to virtual machines using the virtio API as a layer between the hypervisor and guest. Some paravirtualized devices decrease I/O latency and increase I/O throughput to near bare-metal levels, while other paravirtualized devices add functionality to virtual machines that is not otherwise available. It is recommended to use paravirtualized devices instead of emulated devices for virtual machines running I/O intensive applications. All virtio devices have two parts: the host device and the guest driver. Paravirtualized device drivers allow the guest operating system access to physical devices on the host system. To use this device, the paravirtualized device drivers must be installed on the guest operating system. By default, the paravirtualized device drivers are included in Red Hat Enterprise Linux 4.7 and later, Red Hat Enterprise Linux 5.4 and later, and Red Hat Enterprise Linux 6.0 and later. Note For more information on using the paravirtualized devices and drivers, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . The paravirtualized network device (virtio-net) The paravirtualized network device is a virtual network device that provides network access to virtual machines with increased I/O performance and lower latency. The paravirtualized block device (virtio-blk) The paravirtualized block device is a high-performance virtual storage device that provides storage to virtual machines with increased I/O performance and lower latency. The paravirtualized block device is supported by the hypervisor and is attached to the virtual machine (except for floppy disk drives, which must be emulated). The paravirtualized controller device (virtio-scsi) The paravirtualized SCSI controller device provides a more flexible and scalable alternative to virtio-blk. A virtio-scsi guest is capable of inheriting the feature set of the target device, and can handle hundreds of devices compared to virtio-blk, which can only handle 28 devices. 
virtio-scsi is fully supported for the following guest operating systems: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 6.4 and above The paravirtualized clock Guests using the Time Stamp Counter (TSC) as a clock source may suffer timing issues. KVM works around hosts that do not have a constant Time Stamp Counter by providing guests with a paravirtualized clock. Additionally, the paravirtualized clock assists with time adjustments needed after a guest runs the sleep (S3) or suspend to RAM operations. The paravirtualized serial device (virtio-serial) The paravirtualized serial device is a bytestream-oriented, character stream device, and provides a simple communication interface between the host's user space and the guest's user space. The balloon device (virtio-balloon) The balloon device can designate part of a virtual machine's RAM as not being used (a process known as inflating the balloon), so that the memory can be freed for the host (or for other virtual machines on that host) to use. When the virtual machine needs the memory again, the balloon can be deflated and the host can distribute the RAM back to the virtual machine. The paravirtualized random number generator (virtio-rng) The paravirtualized random number generator enables virtual machines to collect entropy, or randomness, directly from the host to use for encrypted data and security. Virtual machines can often be starved of entropy because typical inputs (such as hardware usage) are unavailable. Sourcing entropy can be time-consuming. virtio-rng makes this process faster by injecting entropy directly into guest virtual machines from the host. The paravirtualized graphics card (QXL) The paravirtualized graphics card works with the QXL driver to provide an efficient way to display a virtual machine's graphics from a remote host. The QXL driver is required to use SPICE. 3.4.3. Physical Host Devices Certain hardware platforms enable virtual machines to directly access various hardware devices and components. This process in virtualization is known as device assignment , or also as passthrough . VFIO device assignment Virtual Function I/O (VFIO) is a new kernel driver in Red Hat Enterprise Linux 7 that provides virtual machines with high performance access to physical hardware. VFIO attaches PCI devices on the host system directly to virtual machines, providing guests with exclusive access to PCI devices for a range of tasks. This enables PCI devices to appear and behave as if they were physically attached to the guest virtual machine. VFIO improves on PCI device assignment architecture by moving device assignment out of the KVM hypervisor, and enforcing device isolation at the kernel level. VFIO offers better security and is compatible with secure boot. It is the default device assignment mechanism in Red Hat Enterprise Linux 7. VFIO increases the number of assigned devices to 32 in Red Hat Enterprise Linux 7, up from a maximum 8 devices in Red Hat Enterprise Linux 6. VFIO also supports assignment of NVIDIA GPUs. Note For more information on VFIO device assignment, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . USB, PCI, and SCSI passthrough The KVM hypervisor supports attaching USB, PCI, and SCSI devices on the host system to virtual machines. USB, PCI, and SCSI device assignment makes it possible for the devices to appear and behave as if they were physically attached to the virtual machine. Thus, it provides guests with exclusive access to these devices for a variety of tasks. 
Note For more information on USB, PCI, and SCSI passthrough, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . SR-IOV SR-IOV (Single Root I/O Virtualization) is a PCI Express (PCI-e) standard that extends a single physical PCI function to share its PCI resources as separate virtual functions (VFs). Each function can be used by a different virtual machine via PCI device assignment. An SR-IOV-capable PCI-e device provides a Single Root function (for example, a single Ethernet port) and presents multiple, separate virtual devices as unique PCI device functions. Each virtual device may have its own unique PCI configuration space, memory-mapped registers, and individual MSI-based interrupts. Note For more information on SR-IOV, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . NPIV N_Port ID Virtualization (NPIV) is a functionality available with some Fibre Channel devices. NPIV shares a single physical N_Port as multiple N_Port IDs. NPIV provides similar functionality for Fibre Channel Host Bus Adapters (HBAs) that SR-IOV provides for PCIe interfaces. With NPIV, virtual machines can be provided with a virtual Fibre Channel initiator to Storage Area Networks (SANs). NPIV can provide high density virtualized environments with enterprise-level storage solutions. For more information on NPIV, see the vHBA-based storage pools using SCSI devices . 3.4.4. Guest CPU Models CPU models define which host CPU features are exposed to the guest operating system. KVM and libvirt contain definitions for a number of processor models, allowing users to enable CPU features that are available only in newer CPU models. The set of CPU features that can be exposed to guests depends on support in the host CPU, the kernel, and KVM code. To ensure safe migration of virtual machines between hosts with different sets of CPU features, KVM does not expose all features of the host CPU to guest operating system by default. Instead, CPU features are exposed based on the selected CPU model. If a virtual machine has a given CPU feature enabled, it cannot be migrated to a host that does not support exposing that feature to guests. Note For more information on guest CPU models, see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide .
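To see how these device classes surface on a RHEL 7 KVM host, the libvirt command line can be queried directly. The sketch below is illustrative only (the guest name rhel7-guest is an assumption); it lists the virtio devices in a guest definition, the guest CPU models libvirt knows about, and the host PCI devices that are candidates for VFIO assignment:

virsh dumpxml rhel7-guest | grep -E "virtio|watchdog"   # emulated and paravirtualized devices in the guest XML
virsh cpu-models x86_64                                 # guest CPU models available on this host
virsh nodedev-list --cap pci | head                     # host PCI devices that could be assigned through VFIO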
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-products-virtualized-hardware-devices
Chapter 7. Containerized Services
Chapter 7. Containerized Services The director installs the core OpenStack Platform services as containers on the overcloud. This section provides some background information on how containerized services work. 7.1. Containerized Service Architecture The director installs the core OpenStack Platform services as containers on the overcloud. The templates for the containerized services are located in the /usr/share/openstack-tripleo-heat-templates/deployment/ directory. All nodes using containerized services must enable the OS::TripleO::Services::Podman service. When you create a roles_data.yaml file for your custom roles configuration, include the OS::TripleO::Services::Podman service along with the base composable services. For example, the IronicConductor role uses the following role definition: 7.2. Containerized Service Parameters Each containerized service template contains an outputs section that defines a data set passed to the director's OpenStack Orchestration (Heat) service. In addition to the standard composable service parameters (see Section 6.2.5, "Examining Role Parameters" ), the template contains a set of parameters specific to the container configuration. puppet_config Data to pass to Puppet when configuring the service. In the initial overcloud deployment steps, the director creates a set of containers used to configure the service before the actual containerized service runs. This parameter includes the following sub-parameters: config_volume - The mounted volume that stores the configuration. puppet_tags - Tags to pass to Puppet during configuration. These tags are used in OpenStack Platform to restrict the Puppet run to a particular service's configuration resource. For example, the OpenStack Identity (keystone) containerized service uses the keystone_config tag to ensure that only the keystone_config Puppet resource runs on the configuration container. step_config - The configuration data passed to Puppet. This is usually inherited from the referenced composable service. config_image - The container image used to configure the service. kolla_config A set of container-specific data that defines configuration file locations, directory permissions, and the command to run on the container to launch the service. docker_config Tasks to run on the service's configuration container. All tasks are grouped into the following steps to help the director perform a staged deployment: Step 1 - Load balancer configuration Step 2 - Core services (Database, Redis) Step 3 - Initial configuration of OpenStack Platform service Step 4 - General OpenStack Platform services configuration Step 5 - Service activation host_prep_tasks Preparation tasks for the bare metal node to accommodate the containerized service. 7.3. Preparing container images The overcloud configuration requires initial registry configuration to determine where to obtain images and how to store them. Complete the following steps to generate and customize an environment file that you can use to prepare your container images. Procedure Log in to your undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means the director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. The director uses this registry as the container image source. 
To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. 7.4. Container image preparation parameters The default file for preparing your containers ( containers-prepare-parameter.yaml ) contains the ContainerImagePrepare heat parameter. This parameter defines a list of strategies for preparing a set of images: Each strategy accepts a set of sub-parameters that defines which images to use and what to do with the images. The following table contains information about the sub-parameters you can use with each ContainerImagePrepare strategy: Parameter Description excludes List of regular expressions to exclude image names from a strategy. includes List of regular expressions to include in a strategy. At least one image name must match an existing image. All excludes are ignored if includes is specified. modify_append_tag String to append to the tag for the destination image. For example, if you pull an image with the tag 14.0-89 and set the modify_append_tag to -hotfix , the director tags the final image as 14.0-89-hotfix . modify_only_with_labels A dictionary of image labels that filter the images that you want to modify. If an image matches the labels defined, the director includes the image in the modification process. modify_role String of ansible role names to run during upload but before pushing the image to the destination registry. modify_vars Dictionary of variables to pass to modify_role . push_destination Defines the namespace of the registry that you want to push images to during the upload process. If set to true , the push_destination is set to the undercloud registry namespace using the hostname, which is the recommended method. If set to false , the push to a local registry does not occur and nodes pull images directly from the source. If set to a custom value, director pushes images to an external local registry. If you choose to pull container images directly from the Red Hat Container Catalog, do not set this parameter to false in production environments or else all overcloud nodes will simultaneously pull the images from the Red Hat Container Catalog over your external connection, which can cause bandwidth issues. If the push_destination parameter is set to false or is not defined and the remote registry requires authentication, set the ContainerImageRegistryLogin parameter to true and include the credentials with the ContainerImageRegistryCredentials parameter. pull_source The source registry from where to pull the original container images. set A dictionary of key: value definitions that define where to obtain the initial images. tag_from_label Use the value of specified container image labels to discover and pull the versioned tag for every image. Director inspects each container image tagged with the value that you set for tag , then uses the container image labels to construct a new tag, which director pulls from the registry. For example, if you set tag_from_label: {version}-{release} , director uses the version and release labels to construct a new tag. 
For one container, version might be set to 13.0 and release might be set to 34 , which results in the tag 13.0-34 . The set parameter accepts a set of key: value definitions: Key Description ceph_image The name of the Ceph Storage container image. ceph_namespace The namespace of the Ceph Storage container image. ceph_tag The tag of the Ceph Storage container image. name_prefix A prefix for each OpenStack service image. name_suffix A suffix for each OpenStack service image. namespace The namespace for each OpenStack service image. neutron_driver The driver to use to determine which OpenStack Networking (neutron) container to use. Use a null value to set to the standard neutron-server container. Set to ovn to use OVN-based containers. tag Sets the specific tag for all images from the source. If you use this option without specifying a tag_from_label value, director pulls all container images that use this tag. However, if you use this option in combination with the tag_from_label value, director uses the tag as a source image to identify a specific version tag based on labels. Keep this key set to the default value, which is the Red Hat OpenStack Platform version number. Important The Red Hat Container Registry uses a specific version format to tag all Red Hat OpenStack Platform container images. This version format is {version}-{release} , which each container image stores as labels in the container metadata. This version format helps facilitate updates from one {release} to the next. For this reason, you must always use the tag_from_label: {version}-{release} parameter with the ContainerImagePrepare heat parameter. Do not use tag on its own to pull container images. For example, using tag by itself causes problems when performing updates because director requires a change in tag to update a container image. Important The container images use multi-stream tags based on Red Hat OpenStack Platform version. This means there is no longer a latest tag. The ContainerImageRegistryCredentials parameter maps a container registry to a username and password to authenticate to that registry. If a container registry requires a username and password, you can use ContainerImageRegistryCredentials to include credentials with the following syntax: In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. For more information, see "Red Hat Container Registry Authentication" . The ContainerImageRegistryLogin parameter is used to control the registry login on the systems being deployed. This must be set to true if push_destination is set to false or not used. 7.5. Layering image preparation entries The value of the ContainerImagePrepare parameter is a YAML list. This means that you can specify multiple entries. The following example demonstrates two entries where director uses the latest version of all images except for the nova-api image, which uses the version tagged with 16.0-44 : The includes and excludes parameters use regular expressions to control image filtering for each entry. The images that match the includes strategy take precedence over excludes matches. The image name must match the includes or excludes regular expression value to be considered a match. 7.6. Modifying images during preparation It is possible to modify images during image preparation, and then immediately deploy with modified images. 
Scenarios for modifying images include: As part of a continuous integration pipeline where images are modified with the changes being tested before deployment. As part of a development workflow where local changes must be deployed for testing and development. When changes must be deployed but are not available through an image build pipeline. For example, adding proprietary add-ons or emergency fixes. To modify an image during preparation, invoke an Ansible role on each image that you want to modify. The role takes a source image, makes the requested changes, and tags the result. The prepare command can push the image to the destination registry and set the heat parameters to refer to the modified image. The Ansible role tripleo-modify-image conforms with the required role interface and provides the behaviour necessary for the modify use cases. Control the modification with the modify-specific keys in the ContainerImagePrepare parameter: modify_role specifies the Ansible role to invoke for each image to modify. modify_append_tag appends a string to the end of the source image tag. This makes it obvious that the resulting image has been modified. Use this parameter to skip modification if the push_destination registry already contains the modified image. Change modify_append_tag whenever you modify the image. modify_vars is a dictionary of Ansible variables to pass to the role. To select a use case that the tripleo-modify-image role handles, set the tasks_from variable to the required file in that role. While developing and testing the ContainerImagePrepare entries that modify images, run the image prepare command without any additional options to confirm that the image is modified as you expect: 7.7. Updating existing packages on container images The following example ContainerImagePrepare entry updates all packages on the images using the dnf repository configuration on the undercloud host: 7.8. Installing additional RPM files to container images You can install a directory of RPM files in your container images. This is useful for installing hotfixes, local package builds, or any package that is not available through a package repository. For example, the following ContainerImagePrepare entry installs some hotfix packages only on the nova-compute image: 7.9. Modifying container images with a custom Dockerfile For maximum flexibility, you can specify a directory containing a Dockerfile to make the required changes. When you invoke the tripleo-modify-image role, the role generates a Dockerfile.modified file that changes the FROM directive and adds extra LABEL directives. The following example runs the custom Dockerfile on the nova-compute image: The following example shows the /home/stack/nova-custom/Dockerfile file. After you run any USER root directives, you must switch back to the original image default user:
[ "- name: IronicConductor description: | Ironic Conductor node role networks: InternalApi: subnet: internal_api_subnet Storage: subnet: storage_subnet HostnameFormatDefault: '%stackname%-ironic-%index%' ServicesDefault: - OS::TripleO::Services::Aide - OS::TripleO::Services::AuditD - OS::TripleO::Services::BootParams - OS::TripleO::Services::CACerts - OS::TripleO::Services::CertmongerUser - OS::TripleO::Services::Collectd - OS::TripleO::Services::Docker - OS::TripleO::Services::Fluentd - OS::TripleO::Services::IpaClient - OS::TripleO::Services::Ipsec - OS::TripleO::Services::IronicConductor - OS::TripleO::Services::IronicPxe - OS::TripleO::Services::Kernel - OS::TripleO::Services::LoginDefs - OS::TripleO::Services::MetricsQdr - OS::TripleO::Services::MySQLClient - OS::TripleO::Services::ContainersLogrotateCrond - OS::TripleO::Services::Podman - OS::TripleO::Services::Rhsm - OS::TripleO::Services::SensuClient - OS::TripleO::Services::Snmp - OS::TripleO::Services::Timesync - OS::TripleO::Services::Timezone - OS::TripleO::Services::TripleoFirewall - OS::TripleO::Services::TripleoPackages - OS::TripleO::Services::Tuned", "openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml", "parameter_defaults: ContainerImagePrepare: - (strategy one) - (strategy two) - (strategy three)", "ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password", "ContainerImagePrepare: - set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password ContainerImageRegistryLogin: true", "ContainerImagePrepare: - tag_from_label: \"{version}-{release}\" push_destination: true excludes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 name_prefix: openstack- name_suffix: '' tag: 16.0 - push_destination: true includes: - nova-api set: namespace: registry.redhat.io/rhosp-rhel8 tag: 16.0-44", "sudo openstack tripleo container image prepare -e ~/containers-prepare-parameter.yaml", "ContainerImagePrepare: - push_destination: true modify_role: tripleo-modify-image modify_append_tag: \"-updated\" modify_vars: tasks_from: yum_update.yml compare_host_packages: true yum_repos_dir_path: /etc/yum.repos.d", "ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: rpm_install.yml rpms_path: /home/stack/nova-hotfix-pkgs", "ContainerImagePrepare: - push_destination: true includes: - nova-compute modify_role: tripleo-modify-image modify_append_tag: \"-hotfix\" modify_vars: tasks_from: modify_image.yml modify_dir_path: /home/stack/nova-custom", "FROM registry.redhat.io/rhosp-rhel8/openstack-nova-compute:latest USER \"root\" COPY customize.sh /tmp/ RUN /tmp/customize.sh USER \"nova\"" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/advanced_overcloud_customization/sect-containerized_services
Installing OpenShift Container Platform with the Assisted Installer
Installing OpenShift Container Platform with the Assisted Installer Assisted Installer for OpenShift Container Platform 2025 User Guide Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_openshift_container_platform_with_the_assisted_installer/index
3.5. Repairing an XFS File System
3.5. Repairing an XFS File System To repair an XFS file system, use xfs_repair : The xfs_repair utility is highly scalable and is designed to repair even very large file systems with many inodes efficiently. Unlike other Linux file systems, xfs_repair does not run at boot time, even when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, the log is simply replayed at mount time, ensuring a consistent file system. Warning The xfs_repair utility cannot repair an XFS file system with a dirty log. To clear the log, mount and unmount the XFS file system. If the log is corrupt and cannot be replayed, use the -L option ("force log zeroing") to clear the log, that is, xfs_repair -L /dev/device . Be aware that this may result in further corruption or data loss. For more information about repairing an XFS file system, see man xfs_repair .
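A typical repair session, shown here as a hedged sketch in which the device /dev/sdb1 and mount point /mnt/data are placeholders, follows the sequence described above:

umount /mnt/data                               # xfs_repair must run against an unmounted file system
xfs_repair /dev/sdb1                           # normal repair run
# If xfs_repair refuses to run because of a dirty log, mount and unmount once so the log is replayed:
mount /dev/sdb1 /mnt/data && umount /mnt/data
xfs_repair /dev/sdb1
# Last resort only; zeroing a corrupt log can cause further corruption or data loss:
# xfs_repair -L /dev/sdb1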
[ "xfs_repair /dev/device" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/xfsrepair
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1]
Chapter 13. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object Required spec 13.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration 13.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description repositoryDigestMirrors array repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. repositoryDigestMirrors[] object RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. 13.1.2. .spec.repositoryDigestMirrors Description repositoryDigestMirrors allows images referenced by image digests in pods to be pulled from alternative mirrored repository locations. The image pull specification provided to the pod will be compared to the source locations described in RepositoryDigestMirrors and the image may be pulled down from any of the mirrors in the list instead of the specified repository allowing administrators to choose a potentially faster mirror. 
Only image pull specifications that have an image digest will have this behavior applied to them - tags will continue to be pulled from the specified repository in the pull spec. Each "source" repository is treated independently; configurations for different "source" repositories don't interact. When multiple policies are defined for the same "source" repository, the sets of defined mirrors will be merged together, preserving the relative order of the mirrors, if possible. For example, if policy A has mirrors a, b, c and policy B has mirrors c, d, e , the mirrors will be used in the order a, b, c, d, e . If the orders of mirror entries conflict (e.g. a, b vs. b, a ) the configuration is not rejected but the resulting order is unspecified. Type array 13.1.3. .spec.repositoryDigestMirrors[] Description RepositoryDigestMirrors holds cluster-wide information about how to handle mirros in the registries config. Note: the mirrors only work when pulling the images that are referenced by their digests. Type object Required source Property Type Description mirrors array (string) mirrors is one or more repositories that may also contain the same images. The order of mirrors in this list is treated as the user's desired priority, while source is by default considered lower priority than all mirrors. Other cluster configuration, including (but not limited to) other repositoryDigestMirrors objects, may impact the exact order mirrors are contacted in, or some mirrors may be contacted in parallel, so this should be considered a preference rather than a guarantee of ordering. source string source is the repository that users refer to, e.g. in image pull specifications. 13.2. API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies DELETE : delete collection of ImageContentSourcePolicy GET : list objects of kind ImageContentSourcePolicy POST : create an ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} DELETE : delete an ImageContentSourcePolicy GET : read the specified ImageContentSourcePolicy PATCH : partially update the specified ImageContentSourcePolicy PUT : replace the specified ImageContentSourcePolicy /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status GET : read status of the specified ImageContentSourcePolicy PATCH : partially update status of the specified ImageContentSourcePolicy PUT : replace status of the specified ImageContentSourcePolicy 13.2.1. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies HTTP method DELETE Description delete collection of ImageContentSourcePolicy Table 13.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ImageContentSourcePolicy Table 13.2. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicyList schema 401 - Unauthorized Empty HTTP method POST Description create an ImageContentSourcePolicy Table 13.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.4. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.5. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 202 - Accepted ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.2. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name} Table 13.6. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method DELETE Description delete an ImageContentSourcePolicy Table 13.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 13.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ImageContentSourcePolicy Table 13.9. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ImageContentSourcePolicy Table 13.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.11. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ImageContentSourcePolicy Table 13.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.13. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.14. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty 13.2.3. /apis/operator.openshift.io/v1alpha1/imagecontentsourcepolicies/{name}/status Table 13.15. Global path parameters Parameter Type Description name string name of the ImageContentSourcePolicy HTTP method GET Description read status of the specified ImageContentSourcePolicy Table 13.16. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ImageContentSourcePolicy Table 13.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.18. 
HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ImageContentSourcePolicy Table 13.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 13.20. Body parameters Parameter Type Description body ImageContentSourcePolicy schema Table 13.21. HTTP responses HTTP code Reponse body 200 - OK ImageContentSourcePolicy schema 201 - Created ImageContentSourcePolicy schema 401 - Unauthorized Empty
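To illustrate the spec fields documented above, the following hedged sketch creates a minimal ImageContentSourcePolicy; the policy name and both registry host names are invented for the example, while the apiVersion, kind, and field names come from the schema above:

cat <<'EOF' | oc apply -f -
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: example-digest-mirror
spec:
  repositoryDigestMirrors:
  - source: registry.example.com/team/app
    mirrors:
    - mirror.example.com/team/app
EOF
oc get imagecontentsourcepolicy example-digest-mirror -o yaml   # verify the created policy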
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/operator_apis/imagecontentsourcepolicy-operator-openshift-io-v1alpha1
Chapter 45. keypair
Chapter 45. keypair This chapter describes the commands under the keypair command. 45.1. keypair create Create new public or private key for server ssh access Usage: Table 45.1. Positional arguments Value Summary <name> New public or private key name Table 45.2. Command arguments Value Summary -h, --help Show this help message and exit --public-key <file> Filename for public key to add. if not used, creates a private key. --private-key <file> Filename for private key to save. if not used, print private key in console. --type <type> Keypair type. can be ssh or x509. (supported by api versions 2.2 - 2.latest ) --user <user> The owner of the keypair. (admin only) (name or id). Requires ``--os-compute-api-version`` 2.10 or greater. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 45.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 45.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 45.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 45.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 45.2. keypair delete Delete public or private key(s) Usage: Table 45.7. Positional arguments Value Summary <key> Name of key(s) to delete (name only) Table 45.8. Command arguments Value Summary -h, --help Show this help message and exit --user <user> The owner of the keypair. (admin only) (name or id). Requires ``--os-compute-api-version`` 2.10 or greater. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. 45.3. keypair list List key fingerprints Usage: Table 45.9. Command arguments Value Summary -h, --help Show this help message and exit --user <user> Show keypairs for another user (admin only) (name or ID). Requires ``--os-compute-api-version`` 2.10 or greater. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. --project <project> Show keypairs for all users associated with project (admin only) (name or ID). Requires ``--os-compute- api-version`` 2.10 or greater. --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --marker MARKER The last keypair id of the page --limit LIMIT Maximum number of keypairs to display Table 45.10. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 45.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 45.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 45.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 45.4. keypair show Display key details Usage: Table 45.14. Positional arguments Value Summary <key> Public or private key to display (name only) Table 45.15. Command arguments Value Summary -h, --help Show this help message and exit --public-key Show only bare public key paired with the generated key --user <user> The owner of the keypair. (admin only) (name or id). Requires ``--os-compute-api-version`` 2.10 or greater. --user-domain <user-domain> Domain the user belongs to (name or id). this can be used in case collisions between user names exist. Table 45.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 45.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 45.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 45.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
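Putting the options above together, a minimal create, list, show, and delete flow might look like the following hedged sketch; the key name demo-key and the private key path are illustrative:

openstack keypair create --private-key ~/.ssh/demo-key.pem demo-key
chmod 600 ~/.ssh/demo-key.pem                      # the saved private key should be readable only by its owner
openstack keypair list -f value -c Name -c Fingerprint
openstack keypair show demo-key
openstack keypair delete demo-key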
[ "openstack keypair create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public-key <file> | --private-key <file>] [--type <type>] [--user <user>] [--user-domain <user-domain>] <name>", "openstack keypair delete [-h] [--user <user>] [--user-domain <user-domain>] <key> [<key> ...]", "openstack keypair list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--user <user>] [--user-domain <user-domain>] [--project <project>] [--project-domain <project-domain>] [--marker MARKER] [--limit LIMIT]", "openstack keypair show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--public-key] [--user <user>] [--user-domain <user-domain>] <key>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/keypair
17.4. Actions
17.4. Actions 17.4.1. Export Template Action Note The export storage domain is deprecated. Storage data domains can be unattached from a data center and imported to another data center in the same environment, or in a different environment. Virtual machines, floating virtual disk images, and templates can then be uploaded from the imported storage domain to the attached data center. See the Importing Existing Storage Domains section in the Red Hat Virtualization Administration Guide for information on importing storage domains. The templates collection contains an export action. The export action exports a template to an Export storage domain. A destination storage domain is specified with a storage_domain reference. The export action reports a failed action if a virtual machine template of the same name exists in the destination domain. Set the exclusive parameter to true to change this behavior and overwrite any existing virtual machine template. Example 17.7. Action to export a template to an export storage domain
[ "POST /ovirt-engine/api/templates/00000000-0000-0000-0000-000000000000/export HTTP/1.1 Accept: application/xml Content-type: application/xml <action> <storage_domain id=\"00000000-0000-0000-0000-000000000000\"/> <exclusive>true<exclusive/> </action>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/version_3_rest_api_guide/sect-actions4
Chapter 7. Upgrading a Red Hat Ceph Storage cluster
Chapter 7. Upgrading a Red Hat Ceph Storage cluster As a storage administrator, you can upgrade a Red Hat Ceph Storage cluster to a new major version or to a new minor version or to just apply asynchronous updates to the current version. The rolling_update.yml Ansible playbook performs upgrades for bare-metal or containerized deployments of Red Hat Ceph Storage. Ansible upgrades the Ceph nodes in the following order: Monitor nodes MGR nodes OSD nodes MDS nodes Ceph Object Gateway nodes All other Ceph client nodes Note Starting with Red Hat Ceph Storage 3.1, new Ansible playbooks were added to optimize storage for performance when using Object Gateway and high speed NVMe based SSDs (and SATA SSDs). The playbooks do this by placing journals and bucket indexes together on SSDs; this increases performance compared to having all journals on one device. These playbooks are designed to be used when installing Ceph. Existing OSDs continue to work and need no extra steps during an upgrade. There is no way to upgrade a Ceph cluster while simultaneously reconfiguring OSDs to optimize storage in this way. To use different devices for journals or bucket indexes requires reprovisioning OSDs. For more information see Using NVMe with LVM optimally in Ceph Object Gateway for Production Guide . Important When upgrading a Red Hat Ceph Storage cluster from a supported version to version 4.2z2, the upgrade completes with the storage cluster in a HEALTH_WARN state stating that monitors are allowing insecure global_id reclaim. This is due to a patched CVE, the details of which are available in the CVE-2021-20288 . This issue is fixed by CVE for Red Hat Ceph Storage 4.2z2. Recommendations to mute health warnings: Identify clients that are not updated by checking the ceph health detail output for the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert. Upgrade all clients to Red Hat Ceph Storage 4.2z2 release. After validating all clients have been updated and the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert is no longer present for a client, set auth_allow_insecure_global_id_reclaim to false . When this option is set to false , then an unpatched client cannot reconnect to the storage cluster after an intermittent network disruption breaking its connection to a monitor, or be able to renew its authentication ticket when it times out, which is 72 hours by default. Syntax Ensure that no clients are listed with the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert. Important The rolling_update.yml playbook includes the serial variable that adjusts the number of nodes to be updated simultaneously. Red Hat strongly recommends to use the default value ( 1 ), which ensures that Ansible will upgrade cluster nodes one by one. Important If the upgrade fails at any point, check the cluster status with the ceph status command to understand the upgrade failure reason. If you are not sure of the failure reason and how to resolve , please contact Red hat Support for assistance. Warning If upgrading a multisite setup from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, heed the following recommendations or else replication may break. Set rgw_multisite: false in all.yml before running rolling_update.yml . Do not re-enable rgw_multisite after upgrade. Use it only if you need to add new gateways after upgrade. Only upgrade a Red Hat Ceph Storage 3 cluster at version 3.3z5 or higher to Red Hat Ceph Storage 4. If you cannot update to 3.3z5 or a higher, disable synchronization between sites before upgrading the clusters. 
To disable synchronization, set rgw_run_sync_thread = false and restart the RADOS Gateway daemon. Upgrade the primary cluster first. Upgrade to Red Hat Ceph Storage 4.1 or later. To see the package versions that correlate to 3.3z5 see What are the Red Hat Ceph Storage releases and corresponding Ceph package versions? For instructions on how to disable synchronization, see How to disable RGW Multisite sync temporarily? Warning When using Ceph Object Gateway and upgrading from Red Hat Ceph Storage 3.x to Red Hat Ceph Storage 4.x, the front end is automatically changed from CivetWeb to Beast, which is the new default. For more information, see Configuration in the Object Gateway Configuration and Administration Guide . Warning If using RADOS Gateway, Ansible will switch the front end from CivetWeb to Beast. In the process of this the RGW instance names are changed from rgw. HOSTNAME to rgw. HOSTNAME .rgw0. Due to the name change Ansible does not update the existing RGW configuration in ceph.conf and instead appends a default configuration, leaving intact the old CivetWeb based RGW setup, however it is not used. Custom RGW configuration changes would then be lost, which could cause an RGW service interruption. To avoid this, before upgrade, add the existing RGW configuration in the ceph_conf_overrides section of all.yml , but change the RGW instance names by appending .rgw0 , then restart the RGW service. This will preserve non-default RGW configuration changes after upgrade. For information on ceph_conf_overrides , see Overriding Ceph Default Settings . 7.1. Supported Red Hat Ceph Storage upgrade scenarios Red Hat supports the following upgrade scenarios. Read the tables for bare-metal , and containerized to understand what pre-upgrade state your cluster must be in to move to certain post-upgrade states. Use ceph-ansible to perform bare-metal and containerized upgrades where the bare-metal or host operating system does not change major versions. Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 is not supported with ceph-ansible . To upgrade the bare-metal operating system from Red Hat Enterprise Linux 7.9 to Red Hat Enterprise Linux 8.4 as a part of upgrading Red Hat Ceph Storage, see the Manually upgrading a Red Hat Ceph Storage cluster and operating system section in the Red Hat Ceph Storage Installation Guide . Note To upgrade your cluster to Red Hat Ceph Storage 4, Red Hat recommends your cluster to be on the latest version of the Red Hat Ceph Storage 3. To know the latest version of Red Hat Ceph Storage, see the What are the Red Hat Ceph Storage releases? Knowledgebase article for more information. Table 7.1. Supported upgrade scenarios for Bare-metal deployments Pre-upgrade state Post-upgrade state Red Hat Enterprise Linux version Red Hat Ceph Storage version Red Hat Enterprise Linux version Red Hat Ceph Storage version 7.6 3.3 7.9 4.2 7.6 3.3 8.4 4.2 7.7 3.3 7.9 4.2 7.7 4.0 7.9 4.2 7.8 3.3 7.9 4.2 7.8 3.3 8.4 4.2 7.9 3.3 8.4 4.2 8.1 4.0 8.4 4.2 8.2 4.1 8.4 4.2 8.2 4.1 8.4 4.2 8.3 4.1 8.4 4.2 Table 7.2. 
7.1. Supported Red Hat Ceph Storage upgrade scenarios Red Hat supports the following upgrade scenarios. Read the tables for bare-metal and containerized deployments to understand what pre-upgrade state your cluster must be in to move to certain post-upgrade states. Use ceph-ansible to perform bare-metal and containerized upgrades where the bare-metal or host operating system does not change major versions. Upgrading from Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8 is not supported with ceph-ansible . To upgrade the bare-metal operating system from Red Hat Enterprise Linux 7.9 to Red Hat Enterprise Linux 8.4 as a part of upgrading Red Hat Ceph Storage, see the Manually upgrading a Red Hat Ceph Storage cluster and operating system section in the Red Hat Ceph Storage Installation Guide . Note To upgrade your cluster to Red Hat Ceph Storage 4, Red Hat recommends that your cluster be on the latest version of Red Hat Ceph Storage 3. To find the latest version of Red Hat Ceph Storage, see the What are the Red Hat Ceph Storage releases? Knowledgebase article for more information.
Table 7.1. Supported upgrade scenarios for bare-metal deployments. Each row lists the pre-upgrade state and the post-upgrade state as Red Hat Enterprise Linux version / Red Hat Ceph Storage version:
7.6 / 3.3 -> 7.9 / 4.2
7.6 / 3.3 -> 8.4 / 4.2
7.7 / 3.3 -> 7.9 / 4.2
7.7 / 4.0 -> 7.9 / 4.2
7.8 / 3.3 -> 7.9 / 4.2
7.8 / 3.3 -> 8.4 / 4.2
7.9 / 3.3 -> 8.4 / 4.2
8.1 / 4.0 -> 8.4 / 4.2
8.2 / 4.1 -> 8.4 / 4.2
8.2 / 4.1 -> 8.4 / 4.2
8.3 / 4.1 -> 8.4 / 4.2
Table 7.2. Supported upgrade scenarios for containerized deployments. Each row lists the pre-upgrade state and the post-upgrade state as host Red Hat Enterprise Linux version / container Red Hat Enterprise Linux version / Red Hat Ceph Storage version:
7.6 / 7.8 / 3.3 -> 7.9 / 8.4 / 4.2
7.7 / 7.8 / 3.3 -> 7.9 / 8.4 / 4.2
7.7 / 8.1 / 4.0 -> 7.9 / 8.4 / 4.2
7.8 / 7.8 / 3.3 -> 7.9 / 8.4 / 4.2
8.1 / 8.1 / 4.0 -> 8.4 / 8.4 / 4.2
8.2 / 8.2 / 4.1 -> 8.4 / 8.4 / 4.2
8.3 / 8.3 / 4.1 -> 8.4 / 8.4 / 4.2
7.2. Preparing for an upgrade There are a few things to complete before you can start an upgrade of a Red Hat Ceph Storage cluster. These steps apply to both bare-metal and container deployments of a Red Hat Ceph Storage cluster, unless specified for one or the other. Important You can only upgrade to the latest version of Red Hat Ceph Storage 4. For example, if version 4.1 is available, you cannot upgrade from 3 to 4.0; you must go directly to 4.1. Important If using the FileStore object store, after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, you must migrate to BlueStore. Important You cannot use ceph-ansible to upgrade Red Hat Ceph Storage while also upgrading Red Hat Enterprise Linux 7 to Red Hat Enterprise Linux 8. You must stay on Red Hat Enterprise Linux 7. To upgrade the operating system as well, see Manually upgrading a Red Hat Ceph Storage cluster and operating system . Important The option bluefs_buffered_io is set to True by default for Red Hat Ceph Storage 4.2z2 and later versions. This option enables BlueFS to perform buffered reads in some cases, and enables the kernel page cache to act as a secondary cache for reads like RocksDB block reads. For example, if the RocksDB block cache is not large enough to hold all blocks during the OMAP iteration, it may be possible to read them from the page cache instead of the disk. This can dramatically improve performance when osd_memory_target is too small to hold all entries in the block cache. Currently, enabling bluefs_buffered_io and disabling the system-level swap prevents performance degradation. Prerequisites Root-level access to all nodes in the storage cluster. The system clocks on all nodes in the storage cluster are synchronized. If the Monitor nodes are out of sync, the upgrade process might not complete properly. If upgrading from version 3, the version 3 cluster is upgraded to the latest version of Red Hat Ceph Storage 3. Before upgrading to version 4, if the Prometheus node exporter service is running, stop the service: Example Important This is a known issue that will be fixed in an upcoming Red Hat Ceph Storage release. See the Red Hat Knowledgebase article for more details regarding this issue. Note For bare-metal or container Red Hat Ceph Storage cluster nodes that cannot access the internet during an upgrade, follow the procedure provided in the section Registering Red Hat Ceph Storage nodes to the CDN and attaching subscriptions in the Red Hat Ceph Storage Installation Guide . Procedure Log in as the root user on all nodes in the storage cluster. If the Ceph nodes are not connected to the Red Hat Content Delivery Network (CDN), you can use an ISO image to upgrade Red Hat Ceph Storage by updating the local repository with the latest version of Red Hat Ceph Storage. If upgrading Red Hat Ceph Storage from version 3 to version 4, remove an existing Ceph dashboard installation.
On the Ansible administration node, change to the cephmetrics-ansible directory: Run the purge.yml playbook to remove an existing Ceph dashboard installation: If upgrading Red Hat Ceph Storage from version 3 to version 4, enable the Ceph and Ansible repositories on the Ansible administration node: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 On the Ansible administration node, ensure the latest versions of the ansible and ceph-ansible packages are installed. Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Edit the infrastructure-playbooks/rolling_update.yml playbook and change the health_osd_check_retries and health_osd_check_delay values to 50 and 30 respectively: For each OSD node, these values cause Ansible to wait for up to 25 minutes, checking the storage cluster health every 30 seconds before continuing the upgrade process. Note Adjust the health_osd_check_retries option value up or down based on the used storage capacity of the storage cluster. For example, if you are using 218 TB out of 436 TB, that is, about 50% of the storage capacity, then set the health_osd_check_retries option to 50 . If the storage cluster you want to upgrade contains Ceph Block Device images that use the exclusive-lock feature, ensure that all Ceph Block Device users have permissions to blacklist clients: If the storage cluster was originally installed using Cockpit, create a symbolic link in the /usr/share/ceph-ansible directory to the inventory file that Cockpit created at /usr/share/ansible-runner-service/inventory/hosts : Change to the /usr/share/ceph-ansible directory: Create the symbolic link: Otherwise, to upgrade the cluster using ceph-ansible , create a symbolic link in the /usr/share/ceph-ansible directory to the /etc/ansible/hosts inventory file: If the storage cluster was originally installed using Cockpit, copy the Cockpit-generated SSH keys to the Ansible user's ~/.ssh directory: Copy the keys: Replace ANSIBLE_USERNAME with the username for Ansible, usually admin . Example Set the appropriate owner, group, and permissions on the key files: Replace ANSIBLE_USERNAME with the username for Ansible, usually admin . Example Additional Resources See Enabling the Red Hat Ceph Storage repositories for details. For more information about clock synchronization and clock skew, see the Clock Skew section in the Red Hat Ceph Storage Troubleshooting Guide . 7.3. Upgrading the storage cluster using Ansible Using the Ansible deployment tool, you can upgrade a Red Hat Ceph Storage cluster by performing a rolling upgrade. These steps apply to both bare-metal and container deployments, unless otherwise noted. Prerequisites Root-level access to the Ansible administration node. An ansible user account. Procedure Navigate to the /usr/share/ceph-ansible/ directory: Example If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, make backup copies of the group_vars/all.yml , group_vars/osds.yml , and group_vars/clients.yml files: If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, create new copies of the group_vars/all.yml.sample , group_vars/osds.yml.sample , and group_vars/clients.yml.sample files, and rename them to group_vars/all.yml , group_vars/osds.yml , and group_vars/clients.yml respectively. Open and edit them accordingly, basing the changes on your previously backed up copies. Edit the group_vars/osds.yml file. Add and set the following options: Note These are the default values; you can modify the values as per your use case.
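As an illustration of the group_vars/osds.yml step above, the options might look like the following in the file. The values are the defaults quoted in this procedure; the comments describe the commonly understood behavior of these ceph-ansible settings and are an interpretation, not text from this guide.

    # group_vars/osds.yml
    nb_retry_wait_osd_up: 60   # number of times Ansible re-checks that a restarted OSD is up
    delay_wait_osd_up: 10      # seconds to wait between those checks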
If upgrading to a new minor version of Red Hat Ceph Storage 4, verify that the value for grafana_container_image in group_vars/all.yml is the same as in group_vars/all.yml.sample . If it is not the same, edit it so it is. Example Note The image path shown is included in ceph-ansible version 4.0.23-1. Copy the latest site.yml or site-container.yml file from the sample files: For bare-metal deployments: For container deployments: Open the group_vars/all.yml file and edit the following options. Add the fetch_directory option: Replace FULL_DIRECTORY_PATH with a writable location, such as the Ansible user's home directory. If the cluster you want to upgrade contains any Ceph Object Gateway nodes, add the radosgw_interface option: Replace INTERFACE with the interface that the Ceph Object Gateway nodes listen to. If your current setup has SSL certificates configured, edit the following: The default OSD object store is BlueStore. To keep the traditional OSD object store, you must explicitly set the osd_objectstore option to filestore : Note With the osd_objectstore option set to filestore , replacing an OSD will use FileStore instead of BlueStore. Important Starting with Red Hat Ceph Storage 4, FileStore is a deprecated feature. Red Hat recommends migrating the FileStore OSDs to BlueStore OSDs. Starting with Red Hat Ceph Storage 4.1, you must uncomment or set dashboard_admin_password and grafana_admin_password in /usr/share/ceph-ansible/group_vars/all.yml . Set secure passwords for each. Also set custom user names for dashboard_admin_user and grafana_admin_user . For both bare-metal and container deployments: Uncomment the upgrade_ceph_packages option and set it to True : Set the ceph_rhcs_version option to 4 : Note Setting the ceph_rhcs_version option to 4 pulls in the latest version of Red Hat Ceph Storage 4. Add the ceph_docker_registry information to all.yml : Syntax Note If you do not have a Red Hat Registry Service Account, create one using the Registry Service Account webpage . See the Red Hat Container Registry Authentication Knowledgebase article for more details. Note In addition to using a Service Account for the ceph_docker_registry_username and ceph_docker_registry_password parameters, you can also use your Customer Portal credentials, but to ensure security, encrypt the ceph_docker_registry_password parameter. For more information, see Encrypting Ansible password variables with ansible-vault . For container deployments: Change the ceph_docker_image option to point to the Ceph 4 container version: Change the ceph_docker_image_tag option to point to the latest version of rhceph/rhceph-4-rhel8 : If upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, open the Ansible inventory file for editing, /etc/ansible/hosts by default, and add the Ceph dashboard node name or IP address under the [grafana-server] section. If this section does not exist, add it along with the node name or IP address. Switch to or log in as the Ansible user, then run the rolling_update.yml playbook: Important Using the --limit Ansible option with the rolling_update.yml playbook is not supported. A consolidated example of the all.yml and inventory settings described in this procedure follows this step.
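For reference, the following is a consolidated, illustrative recap of the group_vars/all.yml and inventory settings described in this procedure. All values shown (user names, passwords, token, and the node name dashboard01) are placeholders, not recommendations; only the option names come from this guide.

    # /usr/share/ceph-ansible/group_vars/all.yml (excerpt)
    upgrade_ceph_packages: True
    ceph_rhcs_version: 4
    ceph_docker_registry: registry.redhat.io
    ceph_docker_registry_username: myserviceaccount   # placeholder
    ceph_docker_registry_password: mytoken            # placeholder; consider ansible-vault
    ceph_docker_image: rhceph/rhceph-4-rhel8
    ceph_docker_image_tag: latest
    dashboard_admin_user: dashadmin                   # placeholder
    dashboard_admin_password: ChangeMe_1              # placeholder
    grafana_admin_user: grafadmin                     # placeholder
    grafana_admin_password: ChangeMe_2                # placeholder

    # /etc/ansible/hosts (excerpt)
    [grafana-server]
    # placeholder node name
    dashboard01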
As the root user on the RBD mirroring daemon node, upgrade the rbd-mirror package manually: Restart the rbd-mirror daemon: Verify the health status of the storage cluster. For bare-metal deployments, log in to a monitor node as the root user and run the Ceph status command: For container deployments, log in to a Ceph Monitor node as the root user. List all running containers: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Check the health status: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Replace MONITOR_NAME with the name of the Ceph Monitor container found in the previous step. Example Optional: If upgrading from Red Hat Ceph Storage 3.x to Red Hat Ceph Storage 4.x, you might see this health warning: Legacy BlueStore stats reporting detected on 336 OSD(s). This is caused by newer code calculating pool stats differently. You can resolve this by setting the bluestore_fsck_quick_fix_on_mount parameter. Set bluestore_fsck_quick_fix_on_mount to true : Example Set the noout and norebalance flags to prevent data movement while OSDs are down: Example For bare-metal deployment, restart ceph-osd.target on every OSD node of the storage cluster: Example For containerized deployment, restart the individual OSDs one after the other and wait for all the placement groups to be in the active+clean state. Syntax Example When all the OSDs are repaired, unset the noout and norebalance flags: Example Set bluestore_fsck_quick_fix_on_mount to false once all the OSDs are repaired: Example Optional: An alternate method for bare-metal deployment is to stop the OSD service, run the repair function on the OSD using the ceph-bluestore-tool command, and then start the OSD service: Stop the OSD service: Run the repair function on the OSD, specifying its actual OSD ID: Syntax Example Start the OSD service: Once the upgrade finishes, you can migrate the FileStore OSDs to BlueStore OSDs by running the Ansible playbook: Syntax Example Once the migration completes, do the following substeps. Open the group_vars/osds.yml file for editing, and set the osd_objectstore option to bluestore , for example: If you are using the lvm_volumes variable, then change the journal and journal_vg options to db and db_vg respectively, for example: Before After converting to BlueStore If working in an OpenStack environment, update all the cephx users to use the RBD profile for pools. The following commands must be run as the root user: Glance users: Syntax Example Cinder users: Syntax Example OpenStack general users: Syntax Example Important Do these CAPS updates before performing any live client migrations. This allows clients to use the new libraries running in memory, causing the old CAPS settings to drop from cache and applying the new RBD profile settings. Optional: On client nodes, restart any applications that depend on the Ceph client-side libraries. Note If you are upgrading OpenStack Nova compute nodes that have running QEMU or KVM instances or use a dedicated QEMU or KVM client, stop and start the QEMU or KVM instance, because restarting the instance does not work in this case. Additional Resources See Understanding the limit option for more details. See How to migrate the object store from FileStore to BlueStore in the Red Hat Ceph Storage Administration Guide for more details. See the Knowledgebase article After a ceph-upgrade the cluster status reports `Legacy BlueStore stats reporting detected` for additional details. 7.4. Upgrading the storage cluster using the command-line interface You can upgrade from Red Hat Ceph Storage 3.3 to Red Hat Ceph Storage 4 while the storage cluster is running. An important difference between these versions is that Red Hat Ceph Storage 4 uses the msgr2 protocol by default, which uses port 3300 . If it is not open, the cluster will issue a HEALTH_WARN error.
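Before starting the command-line upgrade, you can check whether the monitors are already reachable on the msgr2 port. This pre-check is an illustration and not part of the documented procedure; it assumes the ss utility is available on the monitor node. The firewall-cmd commands are the same ones used later in this procedure.

    # On a Ceph Monitor node: confirm that something is listening on the msgr2 port (3300).
    ss -tlnp | grep ':3300'

    # If the port is not open, add it to the firewall:
    firewall-cmd --add-port=3300/tcp
    firewall-cmd --add-port=3300/tcp --permanent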
Here are the constraints to consider when upgrading the storage cluster: Red Hat Ceph Storage 4 uses the msgr2 protocol by default. Ensure port 3300 is open on the Ceph Monitor nodes. Once you upgrade the ceph-monitor daemons from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, the Red Hat Ceph Storage 3 ceph-osd daemons cannot create new OSDs until you upgrade them to Red Hat Ceph Storage 4. Do not create any pools while the upgrade is in progress. Prerequisites Root-level access to the Ceph Monitor, OSD, and Object Gateway nodes. Procedure Ensure that the cluster has completed at least one full scrub of all PGs while running Red Hat Ceph Storage 3. Failure to do so can cause your monitor daemons to refuse to join the quorum on start, leaving them non-functional. To ensure the cluster has completed at least one full scrub of all PGs, execute the following: To proceed with an upgrade from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, the OSD map must include the recovery_deletes and purged_snapdirs flags. Ensure the cluster is in a healthy and clean state. For nodes running ceph-mon and ceph-manager , execute: Once the Red Hat Ceph Storage 4 package is enabled, execute the following on each of the ceph-mon and ceph-manager nodes: Replace <mon-hostname> and <mgr-hostname> with the hostname of the target host. Before upgrading OSDs, set the noout and nodeep-scrub flags on a Ceph Monitor node to prevent OSDs from rebalancing during the upgrade. On each OSD node, execute: Once the Red Hat Ceph Storage 4 package is enabled, update the OSD node: For each OSD daemon running on the node, execute: Replace <osd-num> with the OSD number to restart. Ensure all OSDs on the node have restarted before proceeding to the next OSD node. If there are any OSDs in the storage cluster deployed with ceph-disk , instruct ceph-volume to start the daemons. Enable the Nautilus-only functionality: Important Failure to execute this step will make it impossible for OSDs to communicate after msgr2 is enabled. After upgrading all OSD nodes, unset the noout and nodeep-scrub flags on a Ceph Monitor node. Switch any existing CRUSH buckets to the latest bucket type, straw2 . Once all the daemons are updated after upgrading from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, run the following steps: Enable the messenger v2 protocol, msgr2 : This instructs all Ceph Monitors that bind to the old default port of 6789 to also bind to the new port of 3300. Verify the status of the monitor: Note Running Nautilus OSDs do not bind to their v2 address automatically. They must be restarted. For each host upgraded from Red Hat Ceph Storage 3 to Red Hat Ceph Storage 4, update the ceph.conf file to either not specify any monitor port or reference both the v2 and v1 addresses and ports; an illustrative mon_host example follows this procedure. Import any configuration options in the ceph.conf file into the storage cluster's configuration database. Example Check the storage cluster's configuration database. Example Optional: After upgrading to Red Hat Ceph Storage 4, create a minimal ceph.conf file for each host: Example On Ceph Object Gateway nodes, execute: Once the Red Hat Ceph Storage 4 package is enabled, update the node and restart the ceph-rgw daemon: Replace <rgw-target> with the rgw target to restart. For the administration node, execute: Ensure the cluster is in a healthy and clean state. Optional: On client nodes, restart any applications that depend on the Ceph client-side libraries. Note If you are upgrading OpenStack Nova compute nodes that have running QEMU or KVM instances or use a dedicated QEMU or KVM client, stop and start the QEMU or KVM instance, because restarting the instance does not work in this case.
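As referenced in the ceph.conf step above, the following is an illustrative mon_host line that lists both the v2 and v1 address of each monitor. The IP addresses and the number of monitors are placeholders; adapt them to your own cluster.

    # /etc/ceph/ceph.conf (excerpt)
    [global]
    mon_host = [v2:192.0.2.11:3300,v1:192.0.2.11:6789],[v2:192.0.2.12:3300,v1:192.0.2.12:6789],[v2:192.0.2.13:3300,v1:192.0.2.13:6789]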
7.5. Manually upgrading the Ceph File System Metadata Server nodes You can manually upgrade the Ceph File System (CephFS) Metadata Server (MDS) software on a Red Hat Ceph Storage cluster running either Red Hat Enterprise Linux 7 or 8. Important Before you upgrade the storage cluster, reduce the number of active MDS ranks to one per file system. This eliminates any possible version conflicts between multiple MDS daemons. In addition, take all standby nodes offline before upgrading. This is because the MDS cluster does not possess built-in versioning or file system flags. Without these features, multiple MDS daemons might communicate using different versions of the MDS software, which could cause assertions or other faults to occur. Prerequisites A running Red Hat Ceph Storage cluster. The nodes are using Red Hat Ceph Storage version 3.3z64 or 4.1. Root-level access to all nodes in the storage cluster. Important The underlying XFS filesystem must be formatted with ftype=1 or with d_type support. Run the command xfs_info /var to ensure that ftype is set to 1 . If the value of ftype is not 1 , attach a new disk or create a volume. On top of this new device, create a new XFS filesystem and mount it on /var/lib/containers . Starting with Red Hat Enterprise Linux 8.0, mkfs.xfs enables ftype=1 by default. Procedure Reduce the number of active MDS ranks to 1: Syntax Example Wait for the cluster to stop all of the MDS ranks. When all of the MDS daemons have stopped, only rank 0 should be active. The rest should be in standby mode. Check the status of the file system: Use systemctl to take all standby MDS daemons offline: Confirm that only one MDS is online, and that it has rank 0 for your file system: If you are upgrading from Red Hat Ceph Storage 3 on RHEL 7, disable the Red Hat Ceph Storage 3 tools repository and enable the Red Hat Ceph Storage 4 tools repository: Update the node and restart the ceph-mds daemon: Follow the same process for the standby daemons. Disable and enable the tools repositories, and then upgrade and restart each standby MDS: When you have finished restarting all of the MDS daemons in standby, restore the value of max_mds for the storage cluster: Syntax Example 7.6. Additional Resources To see the package versions that correlate to 3.3z5, see What are the Red Hat Ceph Storage releases and corresponding Ceph package versions?
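Referring back to the Metadata Server steps in Section 7.5, the following is an illustrative way to confirm that only rank 0 remains active before upgrading. The file system name fs1 matches the example used in this chapter; the grep filters are assumptions for the sketch.

    # Confirm that max_mds was reduced to 1.
    ceph fs get fs1 | grep max_mds

    # Check that a single MDS is active and the rest are standby, then stop the standbys.
    ceph status | grep -A3 mds
    systemctl stop ceph-mds.target    # run on each standby MDS node
    ceph status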
[ "ceph config set mon auth_allow_insecure_global_id_reclaim false", "systemctl stop prometheus-node-exporter.service", "cd /usr/share/cephmetrics-ansible", "ansible-playbook -v purge.yml", "subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms --enable=rhel-7-server-ansible-2.9-rpms", "subscription-manager repos --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms", "yum update ansible ceph-ansible", "dnf update ansible ceph-ansible", "health_osd_check_retries: 50 health_osd_check_delay: 30", "ceph auth caps client. ID mon 'allow r, allow command \"osd blacklist\"' osd ' EXISTING_OSD_USER_CAPS '", "cd /usr/share/ceph-ansible", "ln -s /usr/share/ansible-runner-service/inventory/hosts hosts", "ln -s /etc/ansible/hosts hosts", "cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub cp /usr/share/ansible-runner-service/env/ssh_key /home/ ANSIBLE_USERNAME /.ssh/id_rsa", "cp /usr/share/ansible-runner-service/env/ssh_key.pub /home/admin/.ssh/id_rsa.pub cp /usr/share/ansible-runner-service/env/ssh_key /home/admin/.ssh/id_rsa", "chown ANSIBLE_USERNAME :_ANSIBLE_USERNAME_ /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub chown ANSIBLE_USERNAME :_ANSIBLE_USERNAME_ /home/ ANSIBLE_USERNAME /.ssh/id_rsa chmod 644 /home/ ANSIBLE_USERNAME /.ssh/id_rsa.pub chmod 600 /home/ ANSIBLE_USERNAME /.ssh/id_rsa", "chown admin:admin /home/admin/.ssh/id_rsa.pub chown admin:admin /home/admin/.ssh/id_rsa chmod 644 /home/admin/.ssh/id_rsa.pub chmod 600 /home/admin/.ssh/id_rsa", "cd /usr/share/ceph-ansible/", "cp group_vars/all.yml group_vars/all_old.yml cp group_vars/osds.yml group_vars/osds_old.yml cp group_vars/clients.yml group_vars/clients_old.yml", "cp group_vars/all.yml.sample group_vars/all.yml cp group_vars/osds.yml.sample group_vars/osds.yml cp group_vars/clients.yml.sample group_vars/clients.yml", "nb_retry_wait_osd_up: 60 delay_wait_osd_up: 10", "grafana_container_image: registry.redhat.io/rhceph/rhceph-4-dashboard-rhel8:4", "cp site.yml.sample site.yml", "cp site-container.yml.sample site-container.yml", "fetch_directory: FULL_DIRECTORY_PATH", "radosgw_interface: INTERFACE", "radosgw_frontend_ssl_certificate: /etc/pki/ca-trust/extracted/ CERTIFICATE_NAME radosgw_frontend_port: 443", "osd_objectstore: filestore", "upgrade_ceph_packages: True", "ceph_rhcs_version: 4", "ceph_docker_registry: registry.redhat.io ceph_docker_registry_username: SERVICE_ACCOUNT_USER_NAME ceph_docker_registry_password: TOKEN", "ceph_docker_image: rhceph/rhceph-4-rhel8", "ceph_docker_image_tag: latest", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/rolling_update.yml -i hosts", "yum upgrade rbd-mirror", "systemctl restart ceph-rbd-mirror@ CLIENT_ID", "ceph -s", "docker ps", "podman ps", "docker exec ceph-mon- MONITOR_NAME ceph -s", "podman exec ceph-mon- MONITOR_NAME ceph -s", "podman exec ceph-mon-mon01 ceph -s", "ceph config set osd bluestore_fsck_quick_fix_on_mount true", "ceph osd set noout ceph osd set norebalance", "systemctl restart ceph-osd.target", "systemctl restart ceph-osd@ OSD_ID .service", "systemctl restart [email protected]", "ceph osd unset noout ceph osd unset norebalance", "ceph config set osd bluestore_fsck_quick_fix_on_mount false", "systemctl stop ceph-osd.target", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph- OSDID repair", "ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-2 repair", "systemctl start ceph-osd.target", "ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml 
--limit OSD_NODE_TO_MIGRATE", "[ansible@admin ceph-ansible]USD ansible-playbook infrastructure-playbooks/filestore-to-bluestore.yml --limit osd01", "osd_objectstore: bluestore", "lvm_volumes: - data: /dev/sdb journal: /dev/sdc1 - data: /dev/sdd journal: journal1 journal_vg: journals", "lvm_volumes: - data: /dev/sdb db: /dev/sdc1 - data: /dev/sdd db: journal1 db_vg: journals", "ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool= GLANCE_POOL_NAME '", "ceph auth caps client.glance mon 'profile rbd' osd 'profile rbd pool=images'", "ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool= CINDER_VOLUME_POOL_NAME , profile rbd pool= NOVA_POOL_NAME , profile rbd-read-only pool= GLANCE_POOL_NAME '", "ceph auth caps client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'", "ceph auth caps client.openstack mon 'profile rbd' osd 'profile rbd-read-only pool= CINDER_VOLUME_POOL_NAME , profile rbd pool= NOVA_POOL_NAME , profile rbd-read-only pool= GLANCE_POOL_NAME '", "ceph auth caps client.openstack mon 'profile rbd' osd 'profile rbd-read-only pool=volumes, profile rbd pool=vms, profile rbd-read-only pool=images'", "ceph osd dump | grep ^flags", "ceph health HEALTH_OK", "subscription-manager repos --enable=rhel-7-server-rhceph-4-mon-rpms", "firewall-cmd --add-port=3300/tcp firewall-cmd --add-port=3300/tcp --permanent yum update -y systemctl restart ceph-mon@<mon-hostname> systemctl restart ceph-mgr@<mgr-hostname>", "ceph osd set noout ceph osd set nodeep-scrub", "subscription-manager repos --enable=rhel-7-server-rhceph-4-osd-rpms", "yum update -y", "systemctl restart ceph-osd@<osd-num>", "ceph-volume simple scan ceph-volume simple activate --all", "ceph osd require-osd-release nautilus", "ceph osd unset noout ceph osd unset nodeep-scrub", "ceph osd getcrushmap -o backup-crushmap ceph osd crush set-all-straw-buckets-to-straw2", "ceph mon enable-msgr2", "ceph mon dump", "ceph config assimilate-conf -i /etc/ceph/ceph.conf", "ceph config dump", "ceph config generate-minimal-conf > /etc/ceph/ceph.conf.new mv /etc/ceph/ceph.conf.new /etc/ceph/ceph.conf", "subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms", "yum update -y systemctl restart ceph-rgw@<rgw-target>", "subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms yum update -y", "ceph health HEALTH_OK", "ceph fs set FILE_SYSTEM_NAME max_mds 1", "ceph fs set fs1 max_mds 1", "ceph status", "systemctl stop ceph-mds.target", "ceph status", "subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms", "yum update -y systemctl restart ceph-mds.target", "subscription-manager repos --disable=rhel-7-server-rhceph-3-tools-rpms subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms yum update -y systemctl restart ceph-mds.target", "ceph fs set FILE_SYSTEM_NAME max_mds ORIGINAL_VALUE", "ceph fs set fs1 max_mds 5" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/installation_guide/upgrading-a-red-hat-ceph-storage-cluster
probe::vm.write_shared_copy
probe::vm.write_shared_copy Name probe::vm.write_shared_copy - Page copy for shared page write Synopsis vm.write_shared_copy Values zero boolean indicating whether it is a zero page (can do a clear instead of a copy) name Name of the probe point address The address of the shared write Context The process attempting the write. Description Fires when a write to a shared page requires a page copy. This is always preceded by a vm.write_shared.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-write-shared-copy