title | content | commands | url |
---|---|---|---|
Red Hat Data Grid | Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/using_data_grid_with_spring/red-hat-data-grid |
Chapter 7. Compiler and Tools | Chapter 7. Compiler and Tools The linuxptp package now supports active-backup bonding for clock synchronization With this update, you can now specify a bond interface in the active-backup mode to be used by the ptp4l application. As a result, ptp4l uses the clock of the active interface from the bond as the Precision Time Protocol (PTP) clock and can switch to another interface of the bond in case of a failover. Additionally, the phc2sys utility in the automatic mode (the -a option) can synchronize the system clock to the PTP clock of the active interface when operating as a PTP slave and the PTP clock to the system clock when operating as a PTP master. (BZ#1002657) parted can now resize partitions using the resizepart command The ability to resize disk partitions using the resizepart NUMBER END command is now backported to the parted disk partitioning utility distributed with Red Hat Enterprise Linux 7. See the parted(8) man page for information. Note that this command only resizes partitions, not file systems residing on them. Use file system utilities such as resize2fs to grow or shrink file systems. (BZ# 1423357 ) binutils rebased to version 2.27 The binutils package has been rebased to upstream version 2.27. This version brings the following changes: Support for compressed debug sections Improved handling of orphan sections during linking Support for the LLVM plugin Ability to insert new symbols into an object file with the objcopy utility Support for the IBM POWER9 architecture Support for the ARMv8.1 and ARMv8.2 instruction set extensions Additionally, this update fixes the following bugs: Previously, the binutils package did not contain the standards.info documentation file that describes the GNU Coding Standard. This file has been added and is available through the info command again. Previously, the ld linker on the IBM Power Systems architecture stored intermediate data in the first object file specified by the linker command line. As a consequence, the linker terminated unexpectedly with a segmentation fault if that file was not used in the output and was discarded. The linker has been modified to directly store the data in the output file and skip the intermediate storage in the input file. As a result, linking no longer fails with a segmentation fault in the described situation. (BZ# 1385959 , BZ#1356856, BZ# 1467390 , BZ#1513014) pcp rebased to version 3.12.2 The Performance Co-Pilot (PCP) application has been rebased to version 3.12.2, which includes many enhancements and bug fixes. Collector systems updates: The following Performance Metric Domain Agents (PMDAs) have been updated: perfevent , containers and CGroups, MySQL slave metrics, Linux per-process metrics, and Linux kernel metrics for entropy, slabinfo , IPv6 sockets, and NFSD worker threads. New PMDAs are now available: Prometheus endpoint and HAProxy. Device Mapper statistics now expose an API. Monitor systems updates: The derived metrics language has been extended for all monitors. The pmchart charting utility includes fixes for timezone and display bugs. The pmlogconf configuration utility automatically enables the hotproc metric logging and adds atop metrics. Performance is now more optimized. The pcp-atop monitoring utility recognizes the new --hotproc option. Several bugs have been fixed. The pcp-pidstat and pcp-mpstat monitoring utilities recognize several new output options. 
The pmrep reporting utility now supports Comma-separated Values (CSV) output compatible with the sadf tool. New utilities for exporting PCP metrics to various formats have also been added: pcp2zabbix , pcp2xml , pcp2json , and pcp2elasticsearch . (BZ# 1472153 ) Improved DWARF 5 support in various tools Support for the DWARF debugging format version 5 has been extended in the following tools: The eu-readelf tool from the elfutils package now recognizes all DWARF 5 tags and attributes. The readelf and objdump tools from the binutils package now recognize the DWARF 5 tag DW_AT_exported_symbols and correctly report its presence in debug information sections. (BZ# 1472955 , BZ# 1472969 ) systemtap rebased to version 3.2 The SystemTap utility has been updated to upstream version 3.2. Notable enhancements include: Support for extraction of matched regular expressions has been added. Probe aliases for accepting input from the standard input have been added. Translator diagnostics have been improved. Support for the new statx system call has been added. A new string function strpos() for detecting substring position has been added to the stap language. Additionally, this update fixes the following bugs: Previously, the statistics extractor functions @min() and @max() returned incorrect values. As a consequence, scripts relying on these functions did not work properly. The @min() and @max() functions have been fixed to return the correct maximum and minimum values. As a result, the affected scripts now work as expected. Previously, some kernel tracepoints were inconsistently listed with the stap -L command, even when they could not be probed. SystemTap has been fixed so that the listed and probe-able tracepoint sets match again. The netdev.receive probe has been fixed and can collect data again. The example script nettop.stp affected by the broken netdev.receive probe again works as expected. Note that the kernel version in Red Hat Enterprise Linux does not support extended Berkeley Packet Filter (eBPF), and consequently the related upstream SystemTap features are not available. (BZ# 1473722 , BZ#1490862, BZ# 1506230 , BZ#1485228, BZ#1518462) valgrind rebased to version 3.13.0 The valgrind package has been upgraded to version 3.13.0, which provides a number of bug fixes and enhancements over the previous version. Notable changes are: Valgrind has been extended in several ways to run large programs. The amount of memory usable by Valgrind has been increased to 128 GB. As a consequence, the Memcheck tool supports running applications that allocate up to approximately 60 GB. Additionally, Valgrind can now load executable files up to 1200 MB in size. The tools Memcheck , Helgrind , and Massif can now use a new execution tree (xtree) representation to report heap consumption of the analyzed applications. The symbol demangler has been updated to support the C++11 standard and the Rust programming language. Failures with long blocks of code using AVX2 instructions on the Intel and AMD 64-bit architecture have been fixed. The 64-bit timebase register of the PowerPC architecture is no longer modeled by Valgrind as only 32-bit. Support for the IBM Power Systems architecture has been extended to include the ISA 3.0B specification. An alternative implementation of Load-Linked and Store-Conditional instructions for the 64-bit ARM architecture has been added. The alternative implementation is enabled automatically when required. To enable it manually, use the --sim-hints=fallback-llsc option. 
(BZ# 1473725 , BZ#1508148) ncat rebased to version 7.50 The ncat utility, which is provided by the nmap-ncat package, has been rebased to upstream version 7.50. This provides a number of bug fixes and new features over the previous version. Notable changes include: Support for SOCKS5 authentication has been added. The -z option for quickly checking the status of a port has been added. The --no-shutdown option now also works in connect mode, not only in listen mode. (BZ#1460249) rsync rebased to version 3.1.2 The rsync packages have been upgraded to upstream version 3.1.2, which provides a number of bug fixes and enhancements over the previous version. This update introduces the following output changes: The default output format of numbers has been changed to 3-digit groups, for example, 1,234,567 . The output of the --progress option has been changed; the following strings have been shortened: xfer to xfr , and to-check to to-chk . Notable enhancements in this version include: I/O handling has been improved, which results in faster data transfers. New --info and --debug options have been added for more fine-grained output. The ability to synchronize nanosecond modified times has been added. New options, --usermap , --groupmap , and --chown , have been added for manipulating file ownership during the copy operation. A new --preallocate option has been added. (BZ# 1432899 ) tcpdump can now analyze virtio traffic The tcpdump utility now supports the virtio-vsock communication device. This makes it possible for tcpdump to filter and analyze virtio communication between a hypervisor and a guest virtual machine. (BZ# 1464390 ) Vim now supports C++11 syntax highlighting Syntax highlighting for C++ in the Vim text editor has been enhanced to support the C++11 standard. (BZ#1267826) Vim now supports the blowfish2 encryption method Support for the blowfish2 encryption method has been added to the Vim text editor. This method provides stronger encryption than blowfish . To set the blowfish2 encryption method, use the :setlocal cm=blowfish2 command. Note that files encrypted with blowfish2 are compatible between Red Hat Enterprise Linux 7 and Red Hat Enterprise Linux 6. (BZ#1319760) The IO::Socket::SSL Perl module now uses the system-wide CA certificate store by default Previously, if a TLS application based on the IO::Socket::SSL Perl module did not provide an explicit path to a certificate authority (CA) certificate, no authority was known, and the peer's identity could not be verified. With this update, the module uses the system-wide CA certificate store by default. However, it is possible to disable any certificate store by passing the undef value to the SSL_ca_file option of the IO::Socket::SSL->new() constructor. (BZ# 1402588 ) perl-DateTime-TimeZone rebased to version 1.70 The perl-DateTime-TimeZone package has been upgraded to upstream version 1.70, which provides a number of bug fixes and enhancements over the previous version. Notably: With this update, it is possible to install Bugzilla version 5, which requires a more recent version of perl-DateTime-TimeZone than the system provided previously. The Olson time zone database has been updated to version 2017b. Previously, applications written in the Perl language that use the DateTime::TimeZone module mishandled time zones that changed their specifications since version 2013h due to the outdated database. Using a local time zone from a tainted time zone identifier has been fixed. 
(BZ# 1241818 , BZ# 1101251 ) system-config-kdump now supports selecting either automated or manual kdump memory settings when fadump is used This update adds fadump memory reservation support into the system-config-kdump packages. As a result, users can now select either Automated kdump memory settings or Manual settings when Firmware assisted dump is selected. (BZ#1384943) conman rebased to version 0.2.8 The conman packages have been upgraded to upstream version 0.2.8, which provides a number of bug fixes and enhancements over the previous version. Notable changes include: Scalability has been improved. Coverity Scan and Clang warnings have been fixed to improve stability. An arbitrary limit on the number of Intelligent Platform Management Interface (IPMI) Serial Over LAN (SOL) consoles has been removed. The default value of the loopback setting has been changed to ON in the conman.conf file. (BZ# 1435840 ) Support for the TFTP windowsize option has been implemented With this update, support for the windowsize option according to RFC 7440 has been implemented in the Trivial File Transfer Protocol (TFTP) server and client. When the windowsize option is used, data blocks are sent in batches, which significantly improves throughput. (BZ# 1328827 ) curl now supports disabling GSSAPI with SOCKS5 New --socks5-basic and --socks5-gssapi options for the curl utility and a corresponding option CURLOPT_SOCKS5_AUTH for the libcurl library have been introduced to control the authentication methods for SOCKS5 proxies. (BZ#1409208) The rsync utility now copies files with their original nanosecond part of the time stamp Previously, the rsync utility ignored the nanosecond part of the time stamp of files. As a consequence, the nanosecond time stamp of newly created files was always zero. With this update, the rsync utility recognizes the nanosecond part. As a result, the newly copied files keep their original nanosecond time stamp on systems that support it. (BZ# 1393543 ) tcpdump rebased to version 4.9.2 The tcpdump package has been upgraded to upstream version 4.9.2, which provides a number of bug fixes (for almost 100 CVEs) and enhancements over the previous version. Notable changes include: A segmentation fault with OpenSSL 1.1 has been fixed and OpenSSL usage has been improved. Buffer overflow vulnerabilities have been fixed. Infinite loop vulnerabilities have been fixed. Many buffer over-read vulnerabilities have been fixed. (BZ#1490842) OProfile support for the Intel Xeon processor family extended OProfile has been extended to support the Intel Xeon Phi Processor x200 and x205 Product Families. (BZ#1465354) Support for Intel Xeon v4 uncore performance events in libpfm , pcp , and papi This update adds support for Intel Xeon v4 uncore performance events to the libpfm performance monitoring library, the pcp tool, and the papi interface. (BZ# 1474999 ) Memory copying performance improved on IBM POWER architectures Previously, the memcpy() function from the GNU C Library ( glibc ) used unaligned vector load and store instructions on 64-bit IBM POWER systems. Consequently, when memcpy() was used to access device memory on POWER9 systems, performance would suffer. The memcpy() function has been enhanced to use aligned memory access instructions, to provide better performance for applications regardless of the memory involved on POWER9, without affecting the performance on earlier generations of the POWER architecture. 
(BZ#1498925) TAI clock macro available Previously, the kernel provided the CLOCK_TAI clock, but the CLOCK_TAI macro to access it was missing in the glibc header file time.h . The macro definition has been added to the header file. As a result, applications can now access the CLOCK_TAI kernel clock. (BZ# 1448822 ) Support for selective use of 4 KiB page tables on IBM Z This update adds the option --s390-pgste to the ld linker from the binutils package to mark applications for the IBM Z architecture that require 4 KiB memory page tables on the lowest level. As a result, use of this feature can be restricted only to applications that need it, allowing optimal use of space by all applications on the system. Note that the qemu backend no longer forces 4 KiB lowest level page tables on all running applications. Make sure to specify the new option if your applications require them. (BZ# 1485398 ) More efficient glibc functions on IBM Z Support for additional instructions of the IBM Z architecture has been added to the glibc library. As a result, programs compiled for this architecture can benefit from the increased performance of the glibc functions. (BZ#1375235) The ld linker no longer incorrectly combines position-dependent and position-independent code Previously, the ld linker combined object files on the IBM Z platform without considering whether they had been built as Position Independent Executable (PIE) code. Because PIE and non-PIE code cannot be combined, it was possible to create executable files that could not run. The linker has been extended to detect mixing of PIE and non-PIE code and produce an error message in this case. As a result, broken executable files can no longer be created this way. (BZ#1406430) python-virtualenv rebased to 15.1.0 The python-virtualenv package has been upgraded to version 15.1.0, which provides a number of bug fixes and enhancements over the previous version. With this update, the following bundled packages have been upgraded: setuptools to version 28.0.0 and pip to version 9.0.1. (BZ#1461154) python-urllib3 supports IP addresses in subjectAltName The python-urllib3 package, a Python HTTP module with connection pooling and file POST abilities, now supports IP addresses in the subjectAltName (SAN) fields. (BZ# 1434114 ) Support for retpolines added to GCC This update adds support for retpolines to GCC. Retpolines are a technique used by the kernel to reduce the overhead of mitigating Spectre Variant 2 attacks described in CVE-2017-5715. (BZ#1535655) Shenandoah garbage collector is now fully supported The low pause time Shenandoah garbage collector for OpenJDK , previously available as a Technology Preview, is now fully supported on the Intel 64, AMD64, and 64-bit ARM architectures. Shenandoah performs concurrent evacuation which allows users to run with large heaps without long pause times. For more information, see https://wiki.openjdk.java.net/display/shenandoah/Main . (BZ#1578075) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new_features_compiler_and_tools |
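As a brief illustration of the parted resizepart workflow described in this chapter's entry, the following is a minimal sketch; the device name /dev/vdb and partition number 2 are assumptions, and the final step applies only to ext file systems, since resizepart changes only the partition table and not the file system on it.

```
# Grow partition 2 on /dev/vdb to the end of the disk (device and partition number are examples).
parted -s /dev/vdb resizepart 2 100%
# resizepart only updates the partition table; grow the file system separately,
# for example with resize2fs for ext2/ext3/ext4:
resize2fs /dev/vdb2
```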
Chapter 12. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] | Chapter 12. VolumeSnapshotClass [snapshot.storage.k8s.io/v1] Description VolumeSnapshotClass specifies parameters that an underlying storage system uses when creating a volume snapshot. A specific VolumeSnapshotClass is used by specifying its name in a VolumeSnapshot object. VolumeSnapshotClasses are non-namespaced Type object Required deletionPolicy driver 12.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deletionPolicy string deletionPolicy determines whether a VolumeSnapshotContent created through the VolumeSnapshotClass should be deleted when its bound VolumeSnapshot is deleted. Supported values are "Retain" and "Delete". "Retain" means that the VolumeSnapshotContent and its physical snapshot on the underlying storage system are kept. "Delete" means that the VolumeSnapshotContent and its physical snapshot on the underlying storage system are deleted. Required. driver string driver is the name of the storage driver that handles this VolumeSnapshotClass. Required. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata parameters object (string) parameters is a key-value map with storage driver specific parameters for creating snapshots. These values are opaque to Kubernetes. 12.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses DELETE : delete collection of VolumeSnapshotClass GET : list objects of kind VolumeSnapshotClass POST : create a VolumeSnapshotClass /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} DELETE : delete a VolumeSnapshotClass GET : read the specified VolumeSnapshotClass PATCH : partially update the specified VolumeSnapshotClass PUT : replace the specified VolumeSnapshotClass 12.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses Table 12.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of VolumeSnapshotClass Table 12.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.3. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshotClass Table 12.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantics of the watch request are as follows: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion" and the bookmark event is sent when the state is synced to a resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset: an Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 12.5. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotClassList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshotClass Table 12.6. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.7. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 12.8. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 202 - Accepted VolumeSnapshotClass schema 401 - Unauthorized Empty 12.2.2. /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses/{name} Table 12.9. Global path parameters Parameter Type Description name string name of the VolumeSnapshotClass Table 12.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a VolumeSnapshotClass Table 12.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 12.12. Body parameters Parameter Type Description body DeleteOptions schema Table 12.13. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshotClass Table 12.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 12.15. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshotClass Table 12.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 12.17. Body parameters Parameter Type Description body Patch schema Table 12.18. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotClass schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshotClass Table 12.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. 
The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 12.20. Body parameters Parameter Type Description body VolumeSnapshotClass schema Table 12.21. HTTP responses HTTP code Response body 200 - OK VolumeSnapshotClass schema 201 - Created VolumeSnapshotClass schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/storage_apis/volumesnapshotclass-snapshot-storage-k8s-io-v1 |
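As a minimal sketch of the VolumeSnapshotClass object documented above: creating the manifest below corresponds to a POST to /apis/snapshot.storage.k8s.io/v1/volumesnapshotclasses and sets only the two required properties plus metadata. The class name and CSI driver name are placeholders; use the driver reported by your storage provisioner.

```
oc create -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: example-snapclass        # placeholder name
driver: csi.example.vendor.com   # placeholder; must match your CSI provisioner
deletionPolicy: Delete           # or Retain, to keep the VolumeSnapshotContent on delete
EOF
```

A VolumeSnapshot can then select this class by setting volumeSnapshotClassName to example-snapclass in its spec.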
Chapter 10. Defining Access Control for IdM Users | Chapter 10. Defining Access Control for IdM Users Access control is a set of security features which defines who can access certain resources, such as machines, services or entries, and what kinds of operations they are allowed to perform. Identity Management provides several access control areas to make it clear what kind of access is being granted and to whom it is granted. As part of this, Identity Management draws a distinction between access controls to resources within the domain and access control to the IdM configuration itself. This chapter details the different internal access control mechanisms that are available for users within IdM to the IdM server and other IdM users. 10.1. Access Controls for IdM Entries Access control defines the rights or permissions users have been granted to perform operations on other users or objects. The Identity Management access control structure is based on standard LDAP access controls. Access within the IdM server is based on the IdM users, stored in the back end Directory Server instance, who are allowed to access other IdM entities, also stored as LDAP entries in the Directory Server instance. An access control instruction (ACI) has three parts: Actor This is the entity who is being granted permission to do something. In LDAP access control models, this is called the bind rule because it defines who the user is and can optionally require other limits on the bind attempt, such as restricting attempts to a certain time of day or a certain machine. Target This defines the entry which the actor is allowed to perform operations on. Operation type The last part determines what kinds of actions the user is allowed to perform. The most common operations are add, delete, write, read, and search. In Identity Management, all users are implicitly granted read and search rights to all entries in the IdM domain, with restrictions only for sensitive attributes like passwords and Kerberos keys. Anonymous users are restricted from seeing security-related configuration, like sudo rules and host-based access control. When any operation is attempted, the first thing that the IdM client does is send user credentials as part of the bind operation. The back end Directory Server checks those user credentials and then checks the user account to see if the user has permission to perform the requested operation. 10.1.1. Access Control Methods in Identity Management To make access control rules simple and clear to implement, Identity Management divides access control definitions into three categories: Self-service rules Self-service rules, which define what operations a user can perform on his own personal entry. The access control type only allows write permissions to attributes within the entry; it does not allow add or delete operations for the entry itself. Delegation rules Delegation rules, which allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. Like self-service rules, this form of access control rule is limited to editing the values of specific attributes; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Role-based access control Role-based access control, which creates special access control groups which are then granted much broader authority over all types of entities in the IdM domain. 
Roles can be granted edit, add, and delete rights, meaning they can be granted complete control over entire entries, not just selected attributes. Some roles are already created and available within Identity Management. Special roles can be created to manage any type of entry in specific ways, such as hosts, automount configuration, netgroups, DNS settings, and IdM configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/server-access-controls |
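A hedged sketch of how the first two categories map onto the ipa command line; the rule names, group names, and attributes below are illustrative examples, not defaults:

```
# Self-service rule: users may edit an attribute of their own entry.
ipa selfservice-add "Users can manage their own mobile number" --permissions=write --attrs=mobile

# Delegation rule: one group may edit selected attributes of another group's members.
ipa delegation-add "Managers can edit employee phones" --group=managers --membergroup=employees --permissions=write --attrs=telephonenumber
```

Role-based access control is configured separately with the ipa role-* commands and their associated privileges and permissions, which grant the broader rights described above.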
Chapter 44. ContainerEnvVar schema reference | Chapter 44. ContainerEnvVar schema reference Used in: ContainerTemplate Property Property type Description name string The environment variable key. value string The environment variable value. valueFrom ContainerEnvVarSource Reference to the secret or config map property to which the environment variable is set. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ContainerEnvVar-reference |
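A minimal sketch of how ContainerEnvVar entries might appear inside a ContainerTemplate, here under the kafkaContainer template of a Kafka resource; the cluster name, variable names, and Secret reference are illustrative, and the rest of the Kafka specification is omitted:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...required Kafka configuration omitted...
    template:
      kafkaContainer:
        env:
          - name: EXAMPLE_ENV_1      # name: the environment variable key
            value: example.value     # value: a literal value
          - name: EXAMPLE_ENV_2
            valueFrom:               # valueFrom: reference a Secret or ConfigMap entry
              secretKeyRef:
                name: my-secret
                key: my-key
```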
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.12 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=dasda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/bootstrap.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.dasd=0.0.3490",
"rd.neednet=1 console=ttysclp0 coreos.inst.install_dev=sda coreos.live.rootfs_url=http://cl1.provide.example.com:8080/assets/rhcos-live-rootfs.s390x.img coreos.inst.ignition_url=http://cl1.provide.example.com:8080/ignition/worker.ign ip=172.18.78.2::172.18.78.1:255.255.255.0:::none nameserver=172.18.78.1 rd.znet=qeth,0.0.bdf0,0.0.bdf1,0.0.bdf2,layer2=1,portno=0 zfcp.allow_lun_scan=0 rd.zfcp=0.0.1987,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763070bc5e3,0x4008400B00000000 rd.zfcp=0.0.1987,0x50050763071bc5e3,0x4008400B00000000 rd.zfcp=0.0.19C7,0x50050763071bc5e3,0x4008400B00000000",
"ipl c",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup,fail_over_mac=1 ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"ip=bond0.100:dhcp bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0.100:none bond=bond0:em1,em2:mode=active-backup vlan=bond0.100:bond0",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional",
"virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-mddf5 20m system:node:master-01.example.com Approved,Issued csr-z5rln 16m system:node:worker-21.example.com Approved,Issued",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: s390x controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: s390x metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_repository>/ocp4/openshift4 source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"prot_virt: Reserving <amount>MB as ultravisor base storage.",
"cat /sys/firmware/uv/prot_virt_host",
"1",
"{ \"ignition\": { \"version\": \"3.0.0\" }, \"storage\": { \"files\": [ { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 }, { \"path\": \"/etc/se-hostkeys/ibm-z-hostkey-<your-hostkey>.crt\", \"contents\": { \"source\": \"data:;base64,<base64 encoded hostkey document>\" }, \"mode\": 420 } ] } } ```",
"base64 <your-hostkey>.crt",
"qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/{source_rhcos_qemu} /var/lib/libvirt/images/{vmname}.qcow2 {size}",
"virt-install --noautoconsole --connect qemu:///system --name {vn_name} --memory {memory} --vcpus {vcpus} --disk {disk} --import --network network={network},mac={mac} --disk path={ign_file},format=raw,readonly=on,serial=ignition,startup_policy=optional",
"virt-install --connect qemu:///system --name {vn_name} --vcpus {vcpus} --memory {memory_mb} --disk {vn_name}.qcow2,size={image_size| default(10,true)} --network network={virt_network_parm} --boot hd --location {media_location},kernel={rhcos_kernel},initrd={rhcos_initrd} --extra-args \"rd.neednet=1 coreos.inst=yes coreos.inst.install_dev=vda coreos.live.rootfs_url={rhcos_liveos} ip={ip}::{default_gateway}:{subnet_mask_length}:{vn_name}:enc1:none:{MTU} nameserver={dns} coreos.inst.ignition_url={rhcos_ign}\" --noautoconsole --wait",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.25.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.12 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.12.0 True False False 19m baremetal 4.12.0 True False False 37m cloud-credential 4.12.0 True False False 40m cluster-autoscaler 4.12.0 True False False 37m config-operator 4.12.0 True False False 38m console 4.12.0 True False False 26m csi-snapshot-controller 4.12.0 True False False 37m dns 4.12.0 True False False 37m etcd 4.12.0 True False False 36m image-registry 4.12.0 True False False 31m ingress 4.12.0 True False False 30m insights 4.12.0 True False False 31m kube-apiserver 4.12.0 True False False 26m kube-controller-manager 4.12.0 True False False 36m kube-scheduler 4.12.0 True False False 36m kube-storage-version-migrator 4.12.0 True False False 37m machine-api 4.12.0 True False False 29m machine-approver 4.12.0 True False False 37m machine-config 4.12.0 True False False 36m marketplace 4.12.0 True False False 37m monitoring 4.12.0 True False False 29m network 4.12.0 True False False 38m node-tuning 4.12.0 True False False 37m openshift-apiserver 4.12.0 True False False 32m openshift-controller-manager 4.12.0 True False False 30m openshift-samples 4.12.0 True False False 32m operator-lifecycle-manager 4.12.0 True False False 37m operator-lifecycle-manager-catalog 4.12.0 True False False 37m operator-lifecycle-manager-packageserver 4.12.0 True False False 32m service-ca 4.12.0 True False False 38m storage 4.12.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installing_on_ibm_z_and_ibm_linuxone/index |
Preface | Preface Providing feedback on Red Hat build of Apache Camel documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you will be prompted to create one. Procedure Click the following link to create a ticket. Enter a brief description of the issue in the Summary. Provide a detailed description of the issue or enhancement in the Description. Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_quarkus_reference/pr01
20.4. Known Issues | 20.4. Known Issues SELinux MLS policy is not supported with kernel version 4.14 SELinux Multi-Level Security (MLS) policy denies unknown classes and permissions, and kernel version 4.14 in the kernel-alt packages recognizes the map permission, which is not defined in any policy. Consequently, every command on a system with an active MLS policy and SELinux in enforcing mode terminates with a Segmentation fault error. Many SELinux denial warnings occur on systems with an active MLS policy and SELinux in permissive mode. The combination of the SELinux MLS policy with kernel version 4.14 is not supported. kdump saves the vmcore only if mpt3sas is blacklisted When the kdump kernel loads the mpt3sas driver, the kdump kernel crashes and fails to save the vmcore on certain POWER9 systems. To work around this problem, blacklist mpt3sas from the kdump kernel environment by appending the module_blacklist=mpt3sas string to the KDUMP_COMMANDLINE_APPEND variable in the /etc/sysconfig/kdump file: Then restart the kdump service to pick up the changes to the configuration file by running the systemctl restart command as the root user: As a result, kdump is now able to save the vmcore on the POWER9 systems. (BZ#1496273) Recovering from an OOM situation fails due to incorrect OOM-killer behavior Recovering from an out-of-memory (OOM) situation does not work correctly on systems with large amounts of memory. The kernel's OOM-killer kills the process using the most memory and frees the memory to be used again. However, sometimes the OOM-killer does not wait long enough before killing a second process. Eventually, the OOM-killer kills all the processes on the system and logs this error: If this happens, the operating system must be rebooted. There is no available workaround. (BZ#1405748) HTM is disabled for guests running on IBM POWER systems The Hardware Transactional Memory (HTM) feature currently prevents migrating guest virtual machines from IBM POWER8 to IBM POWER9 hosts, and has therefore been disabled by default. As a consequence, guest virtual machines running on IBM POWER8 and IBM POWER9 hosts cannot use HTM unless the feature is manually enabled. To do so, change the default pseries-rhel7.5 machine type of these guests to pseries-rhel7.4, as sketched in the illustrative example after the command listing below. Note that guests configured this way cannot be migrated from an IBM POWER8 host to an IBM POWER9 host. (BZ#1525599) Migrating guests with huge pages from IBM POWER8 to IBM POWER9 fails IBM POWER8 hosts can only use 16MB and 16GB huge pages, but these huge-page sizes are not supported on IBM POWER9. As a consequence, migrating a guest from an IBM POWER8 host to an IBM POWER9 host fails if the guest is configured with static huge pages. To work around this problem, disable huge pages on the guest and reboot it prior to migration. (BZ#1538959) modprobe succeeds in loading kernel modules with incorrect parameters When attempting to load a kernel module with an incorrect parameter using the modprobe command, the incorrect parameter is ignored, and the module loads as expected on Red Hat Enterprise Linux 7 for ARM and for IBM Power LE (POWER9). Note that this behavior differs from Red Hat Enterprise Linux on traditional architectures, such as AMD64 and Intel 64, IBM Z, and IBM Power Systems. On those systems, modprobe exits with an error, and the module with an incorrect parameter does not load in the described situation. On all architectures, an error message is recorded in the dmesg output. (BZ#1449439) | [
"KDUMP_COMMANDLINE_APPEND=\"irqpoll maxcpus=1 ... module_blacklist=mpt3sas\"",
"~]# systemctl restart kdump.service",
"Kernel panic - not syncing: Out of memory and no killable processes"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/power9-known-issues |
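The HTM and huge-pages workarounds above require editing the libvirt definition of the affected guest, which the release note describes but does not show. The following is a minimal, illustrative sketch only: the guest name power8-guest is a placeholder, the exact machine type string offered by your libvirt and QEMU build may be pseries-rhel7.4.0 rather than pseries-rhel7.4 (verify the available machine types on the host before editing), and the <memoryBacking> change applies only if the guest was configured with static huge pages in that way.

~]# virsh edit power8-guest

<!-- HTM workaround: pin the POWER8-compatible machine type in the guest <os> element -->
<os>
  <type arch='ppc64le' machine='pseries-rhel7.4.0'>hvm</type>
</os>

<!-- Huge-pages workaround: remove the static huge-page backing, if present,
     then reboot the guest before attempting the migration -->
<memoryBacking>
  <hugepages/>
</memoryBacking>

After saving the edited definition, restart the guest so that the new machine type takes effect; the same change can also be made by dumping the XML with virsh dumpxml, editing it, and redefining the guest with virsh define.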
Chapter 212. LDAP Component | Chapter 212. LDAP Component Available as of Camel version 1.5 The ldap component allows you to perform searches in LDAP servers using filters as the message payload. This component uses standard JNDI ( javax.naming package) to access the server. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ldap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 212.1. URI format ldap:ldapServerBean[?options] The ldapServerBean portion of the URI refers to a DirContext bean in the registry. The LDAP component only supports producer endpoints, which means that an ldap URI cannot appear in the from at the start of a route. You can append query options to the URI in the following format, ?option=value&option=value&... 212.2. Options The LDAP component has no options. The LDAP endpoint is configured using URI syntax: with the following path and query parameters: 212.2.1. Path Parameters (1 parameters): Name Description Default Type dirContextName Required Name of either a javax.naming.directory.DirContext, or java.util.Hashtable, or Map bean to lookup in the registry. If the bean is either a Hashtable or Map then a new javax.naming.directory.DirContext instance is created for each use. If the bean is a javax.naming.directory.DirContext then the bean is used as given. The latter may not be possible in all situations where the javax.naming.directory.DirContext must not be shared, and in those situations it can be better to use java.util.Hashtable or Map instead. String 212.2.2. Query Parameters (5 parameters): Name Description Default Type base (producer) The base DN for searches. ou=system String pageSize (producer) When specified the ldap module uses paging to retrieve all results (most LDAP Servers throw an exception when trying to retrieve more than 1000 entries in one query). To be able to use this a LdapContext (subclass of DirContext) has to be passed in as ldapServerBean (otherwise an exception is thrown) Integer returnedAttributes (producer) Comma-separated list of attributes that should be set in each entry of the result String scope (producer) Specifies how deeply to search the tree of entries, starting at the base DN. subtree String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 212.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.ldap.enabled Enable ldap component true Boolean camel.component.ldap.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 212.4. Result The result is returned in the Out body as a ArrayList<javax.naming.directory.SearchResult> object. 212.5. DirContext The URI, ldap:ldapserver , references a Spring bean with the ID, ldapserver . 
The ldapserver bean may be defined as follows: <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <constructor-arg> <props> <prop key="java.naming.factory.initial">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key="java.naming.provider.url">ldap://localhost:10389</prop> <prop key="java.naming.security.authentication">none</prop> </props> </constructor-arg> </bean> The preceding example declares a regular Sun-based LDAP DirContext that connects anonymously to a locally hosted LDAP server. Note DirContext objects are not required to support concurrency by contract. It is therefore important that the directory context is declared with the setting, scope="prototype", in the bean definition or that the context supports concurrency. In the Spring framework, prototype scoped objects are instantiated each time they are looked up. 212.6. Samples Following on from the Spring configuration above, the code sample below sends an LDAP request to search a group for a member using a filter. The Common Name is then extracted from the response. ProducerTemplate<Exchange> template = exchange.getContext().createProducerTemplate(); Collection<?> results = (Collection<?>) (template.sendBody( "ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system", "(member=uid=huntc,ou=users,ou=system)")); if (results.size() > 0) { // Extract what we need from the device's profile Iterator<?> resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter.next(); Attributes attributes = searchResult.getAttributes(); Attribute deviceCNAttr = attributes.get("cn"); String deviceCN = (String) deviceCNAttr.get(); ... If no specific filter is required - for example, you just need to look up a single entry - specify a wildcard filter expression. For example, if the LDAP entry has a Common Name, use a filter expression like: (cn=*) 212.6.1. Binding using credentials A Camel end user donated the following sample code, which binds to the LDAP server using credentials. Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory"); props.setProperty(Context.PROVIDER_URL, "ldap://localhost:389"); props.setProperty(Context.URL_PKG_PREFIXES, "com.sun.jndi.url"); props.setProperty(Context.REFERRAL, "ignore"); props.setProperty(Context.SECURITY_AUTHENTICATION, "simple"); props.setProperty(Context.SECURITY_PRINCIPAL, "cn=Manager"); props.setProperty(Context.SECURITY_CREDENTIALS, "secret"); SimpleRegistry reg = new SimpleRegistry(); reg.put("myldap", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { public void configure() throws Exception { from("direct:start").to("ldap:myldap?base=ou=test"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint("direct:start"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody("(uid=test)"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getOut().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getOut().getBody()); context.stop(); 212.7. Configuring SSL All that is required is to create a custom socket factory and reference it in the InitialDirContext bean - see the sample below. 
SSL Configuration <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd"> <sslContextParameters xmlns="http://camel.apache.org/schema/blueprint" id="sslContextParameters"> <keyManagers keyPassword="{{keystore.pwd}}"> <keyStore resource="{{keystore.url}}" password="{{keystore.pwd}}"/> </keyManagers> </sslContextParameters> <bean id="customSocketFactory" class="zotix.co.util.CustomSocketFactory"> <argument ref="sslContextParameters" /> </bean> <bean id="ldapserver" class="javax.naming.directory.InitialDirContext" scope="prototype"> <argument> <props> <prop key="java.naming.factory.initial" value="com.sun.jndi.ldap.LdapCtxFactory"/> <prop key="java.naming.provider.url" value="ldaps://lab.zotix.co:636"/> <prop key="java.naming.security.protocol" value="ssl"/> <prop key="java.naming.security.authentication" value="simple" /> <prop key="java.naming.security.principal" value="cn=Manager,dc=example,dc=com"/> <prop key="java.naming.security.credentials" value="passw0rd"/> <prop key="java.naming.ldap.factory.socket" value="zotix.co.util.CustomSocketFactory"/> </props> </argument> </bean> </blueprint> Custom Socket Factory import org.apache.camel.util.jsse.SSLContextParameters; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. 
*/ public CustomSocketFactory() { } /** * Called by Blueprint DI to initialise an instance of SocketFactory * * @param sslContextParameters */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance("SunX509"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance("TLS"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); /* handle exception */ } } /** * Getter for the SocketFactory * * @return */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } } 212.8. See Also Configuring Camel Component Endpoint Getting Started | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ldap</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"ldap:ldapServerBean[?options]",
"ldap:dirContextName",
"<bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <constructor-arg> <props> <prop key=\"java.naming.factory.initial\">com.sun.jndi.ldap.LdapCtxFactory</prop> <prop key=\"java.naming.provider.url\">ldap://localhost:10389</prop> <prop key=\"java.naming.security.authentication\">none</prop> </props> </constructor-arg> </bean>",
"ProducerTemplate<Exchange> template = exchange .getContext().createProducerTemplate(); Collection<?> results = (Collection<?>) (template .sendBody( \"ldap:ldapserver?base=ou=mygroup,ou=groups,ou=system\", \"(member=uid=huntc,ou=users,ou=system)\")); if (results.size() > 0) { // Extract what we need from the device's profile Iterator<?> resultIter = results.iterator(); SearchResult searchResult = (SearchResult) resultIter .next(); Attributes attributes = searchResult .getAttributes(); Attribute deviceCNAttr = attributes.get(\"cn\"); String deviceCN = (String) deviceCNAttr.get();",
"(cn=*)",
"Properties props = new Properties(); props.setProperty(Context.INITIAL_CONTEXT_FACTORY, \"com.sun.jndi.ldap.LdapCtxFactory\"); props.setProperty(Context.PROVIDER_URL, \"ldap://localhost:389\"); props.setProperty(Context.URL_PKG_PREFIXES, \"com.sun.jndi.url\"); props.setProperty(Context.REFERRAL, \"ignore\"); props.setProperty(Context.SECURITY_AUTHENTICATION, \"simple\"); props.setProperty(Context.SECURITY_PRINCIPAL, \"cn=Manager\"); props.setProperty(Context.SECURITY_CREDENTIALS, \"secret\"); SimpleRegistry reg = new SimpleRegistry(); reg.put(\"myldap\", new InitialLdapContext(props, null)); CamelContext context = new DefaultCamelContext(reg); context.addRoutes( new RouteBuilder() { public void configure() throws Exception { from(\"direct:start\").to(\"ldap:myldap?base=ou=test\"); } } ); context.start(); ProducerTemplate template = context.createProducerTemplate(); Endpoint endpoint = context.getEndpoint(\"direct:start\"); Exchange exchange = endpoint.createExchange(); exchange.getIn().setBody(\"(uid=test)\"); Exchange out = template.send(endpoint, exchange); Collection<SearchResult> data = out.getOut().getBody(Collection.class); assert data != null; assert !data.isEmpty(); System.out.println(out.getOut().getBody()); context.stop();",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://www.osgi.org/xmlns/blueprint/v1.0.0 http://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd http://camel.apache.org/schema/blueprint http://camel.apache.org/schema/blueprint/camel-blueprint.xsd\"> <sslContextParameters xmlns=\"http://camel.apache.org/schema/blueprint\" id=\"sslContextParameters\"> <keyManagers keyPassword=\"{{keystore.pwd}}\"> <keyStore resource=\"{{keystore.url}}\" password=\"{{keystore.pwd}}\"/> </keyManagers> </sslContextParameters> <bean id=\"customSocketFactory\" class=\"zotix.co.util.CustomSocketFactory\"> <argument ref=\"sslContextParameters\" /> </bean> <bean id=\"ldapserver\" class=\"javax.naming.directory.InitialDirContext\" scope=\"prototype\"> <argument> <props> <prop key=\"java.naming.factory.initial\" value=\"com.sun.jndi.ldap.LdapCtxFactory\"/> <prop key=\"java.naming.provider.url\" value=\"ldaps://lab.zotix.co:636\"/> <prop key=\"java.naming.security.protocol\" value=\"ssl\"/> <prop key=\"java.naming.security.authentication\" value=\"simple\" /> <prop key=\"java.naming.security.principal\" value=\"cn=Manager,dc=example,dc=com\"/> <prop key=\"java.naming.security.credentials\" value=\"passw0rd\"/> <prop key=\"java.naming.ldap.factory.socket\" value=\"zotix.co.util.CustomSocketFactory\"/> </props> </argument> </bean> </blueprint>",
"import org.apache.camel.util.jsse.SSLContextParameters; import javax.net.SocketFactory; import javax.net.ssl.SSLContext; import javax.net.ssl.SSLSocketFactory; import javax.net.ssl.TrustManagerFactory; import java.io.IOException; import java.net.InetAddress; import java.net.Socket; import java.security.KeyStore; /** * The CustomSocketFactory. Loads the KeyStore and creates an instance of SSLSocketFactory */ public class CustomSocketFactory extends SSLSocketFactory { private static SSLSocketFactory socketFactory; /** * Called by the getDefault() method. */ public CustomSocketFactory() { } /** * Called by Blueprint DI to initialise an instance of SocketFactory * * @param sslContextParameters */ public CustomSocketFactory(SSLContextParameters sslContextParameters) { try { KeyStore keyStore = sslContextParameters.getKeyManagers().getKeyStore().createKeyStore(); TrustManagerFactory tmf = TrustManagerFactory.getInstance(\"SunX509\"); tmf.init(keyStore); SSLContext ctx = SSLContext.getInstance(\"TLS\"); ctx.init(null, tmf.getTrustManagers(), null); socketFactory = ctx.getSocketFactory(); } catch (Exception ex) { ex.printStackTrace(System.err); /* handle exception */ } } /** * Getter for the SocketFactory * * @return */ public static SocketFactory getDefault() { return new CustomSocketFactory(); } @Override public String[] getDefaultCipherSuites() { return socketFactory.getDefaultCipherSuites(); } @Override public String[] getSupportedCipherSuites() { return socketFactory.getSupportedCipherSuites(); } @Override public Socket createSocket(Socket socket, String string, int i, boolean bln) throws IOException { return socketFactory.createSocket(socket, string, i, bln); } @Override public Socket createSocket(String string, int i) throws IOException { return socketFactory.createSocket(string, i); } @Override public Socket createSocket(String string, int i, InetAddress ia, int i1) throws IOException { return socketFactory.createSocket(string, i, ia, i1); } @Override public Socket createSocket(InetAddress ia, int i) throws IOException { return socketFactory.createSocket(ia, i); } @Override public Socket createSocket(InetAddress ia, int i, InetAddress ia1, int i1) throws IOException { return socketFactory.createSocket(ia, i, ia1, i1); } }"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ldap-component |
5.4.3.4. Changing Mirrored Volume Configuration | 5.4.3.4. Changing Mirrored Volume Configuration You can increase or decrease the number of mirrors that a logical volume contains by using the lvconvert command. This allows you to convert a logical volume from a mirrored volume to a linear volume or from a linear volume to a mirrored volume. You can also use this command to reconfigure other mirror parameters of an existing logical volume, such as corelog . When you convert a linear volume to a mirrored volume, you are creating mirror legs for an existing volume. This means that your volume group must contain the devices and space for the mirror legs and for the mirror log. If you lose a leg of a mirror, LVM converts the volume to a linear volume so that you still have access to the volume, without the mirror redundancy. After you replace the leg, you can use the lvconvert command to restore the mirror. This procedure is provided in Section 7.3, "Recovering from LVM Mirror Failure" . The following command converts the linear logical volume vg00/lvol1 to a mirrored logical volume. The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg. The following example adds an additional mirror leg to the existing logical volume vg00/lvol1 . This example shows the configuration of the volume before and after the lvconvert command changed the volume to a volume with three mirror legs.
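The section above also mentions reconfiguring other mirror parameters of an existing volume, such as corelog. The commands below are a hedged sketch of that usage, not taken from this guide: the volume group and logical volume names (vg00/lvol1) are simply reused from the examples above, and the exact log types available depend on your LVM version.
# Switch the existing mirror to an in-memory (core) log, releasing the on-disk log device
lvconvert --mirrorlog core vg00/lvol1
# Switch the mirror back to a persistent on-disk log
lvconvert --mirrorlog disk vg00/lvol1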
"lvconvert -m1 vg00/lvol1",
"lvconvert -m0 vg00/lvol1",
"lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mlog] /dev/sdd1(0) lvconvert -m 2 vg00/lvol1 vg00/lvol1: Converted: 13.0% vg00/lvol1: Converted: 100.0% Logical volume lvol1 converted. lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0),lvol1_mimage_2(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mimage_2] /dev/sdc1(0) [lvol1_mlog] /dev/sdd1(0)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/mirror_reconfigure |
8.3. Evaluating the Tools | 8.3. Evaluating the Tools An assessment can start by using some form of an information gathering tool. When assessing the entire network, map the layout first to find the hosts that are running. Once located, examine each host individually. Focusing on these hosts requires another set of tools. Knowing which tools to use may be the most crucial step in finding vulnerabilities. Just as in any aspect of everyday life, there are many different tools that perform the same job. This concept applies to performing vulnerability assessments as well. There are tools specific to operating systems, applications, and even networks (based on the protocols used). Some tools are free; others are not. Some tools are intuitive and easy to use, while others are cryptic and poorly documented but have features that other tools do not. Finding the right tools may be a daunting task and in the end, experience counts. If possible, set up a test lab and try out as many tools as you can, noting the strengths and weaknesses of each. Review the README file or man page for the tool. Additionally, look to the Internet for more information, such as articles, step-by-step guides, or even mailing lists specific to a tool. The tools discussed below are just a small sampling of the available tools. 8.3.1. Scanning Hosts with Nmap Nmap is a popular tool included in Red Hat Enterprise Linux that can be used to determine the layout of a network. Nmap has been available for many years and is probably the most often used tool when gathering information. An excellent man page is included that provides a detailed description of its options and usage. Administrators can use Nmap on a network to find host systems and open ports on those systems. Nmap is a competent first step in vulnerability assessment. You can map out all the hosts within your network and even pass an option that allows Nmap to attempt to identify the operating system running on a particular host. Nmap is a good foundation for establishing a policy of using secure services and stopping unused services. 8.3.1.1. Using Nmap Nmap can be run from a shell prompt by typing the nmap command followed by the hostname or IP address of the machine to scan. The results of the scan (which could take up to a few minutes, depending on where the host is located) should look similar to the following: Nmap tests the most common network communication ports for listening or waiting services. This knowledge can be helpful to an administrator who wants to close down unnecessary or unused services. For more information about using Nmap, refer to the official homepage at the following URL: http://www.insecure.org/ | [
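The text above notes that Nmap can attempt to identify the operating system running on a host. As a hedged illustration (foo.example.com is the same placeholder host used in the scan example, and OS detection normally requires root privileges), the relevant options look like this:
# Attempt to identify the operating system of the target host (run as root)
nmap -O foo.example.com
# Restrict the scan to a specific port range instead of Nmap's default port list
nmap -p 1-1024 foo.example.com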
"nmap foo.example.com",
"Starting nmap V. 3.50 ( www.insecure.org/nmap/ ) Interesting ports on localhost.localdomain (127.0.0.1): (The 1591 ports scanned but not shown below are in state: closed) Port State Service 22/tcp open ssh 25/tcp open smtp 111/tcp open sunrpc 443/tcp open https 515/tcp open printer 950/tcp open oftep-rpc 6000/tcp open X11 Nmap run completed -- 1 IP address (1 host up) scanned in 71.825 seconds"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s1-vuln-tools |
21.3.6. Adding an LPD/LPR Host or Printer | 21.3.6. Adding an LPD/LPR Host or Printer Follow this procedure to add an LPD/LPR host or printer: Open the New Printer dialog (see Section 21.3.2, "Starting Printer Setup" ). In the list of devices on the left, select Network Printer LPD/LPR Host or Printer . On the right, enter the connection settings: Host The host name of the LPD/LPR printer or host. Optionally, click Probe to find queues on the LPD host. Queue The queue name to be given to the new queue (if the box is left empty, a name based on the device node will be used). Figure 21.7. Adding an LPD/LPR printer Click Forward to continue. Select the printer model. See Section 21.3.8, "Selecting the Printer Model and Finishing" for details. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-printing-LPDLPR-printer |
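If you prefer to script the same task instead of using the printer configuration window described above, a hedged command-line alternative is the CUPS lpadmin utility. The printer name, host, and queue below are placeholders, and the exact options may vary with your CUPS version.
# Create a queue named laserjet that sends jobs to the LPD queue lp on printserver.example.com, and enable it
lpadmin -p laserjet -E -v lpd://printserver.example.com/lp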
Using Ansible to install and manage Identity Management | Using Ansible to install and manage Identity Management Red Hat Enterprise Linux 9 Using Ansible to maintain an IdM environment Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/index |
Chapter 10. Postinstallation storage configuration | Chapter 10. Postinstallation storage configuration After installing OpenShift Container Platform, you can further expand and customize your cluster to your requirements, including storage configuration. By default, containers operate by using the ephemeral storage or transient local storage. The ephemeral storage has a lifetime limitation. To store the data for a long time, you must configure persistent storage. You can configure storage by using one of the following methods: Dynamic provisioning You can dynamically provision storage on-demand by defining and creating storage classes that control different levels of storage, including storage access. Static provisioning You can use Kubernetes persistent volumes to make existing storage available to a cluster. Static provisioning can support various device configurations and mount options. 10.1. Dynamic provisioning Dynamic Provisioning allows you to create storage volumes on-demand, eliminating the need for cluster administrators to pre-provision storage. See Dynamic provisioning . 10.2. Recommended configurable storage technology The following table summarizes the recommended and configurable storage technologies for the given OpenShift Container Platform cluster application. Table 10.1. Recommended and configurable storage technology Storage type Block File Object 1 ReadOnlyMany 2 ReadWriteMany 3 Prometheus is the underlying technology used for metrics. 4 This does not apply to physical disk, VM physical disk, VMDK, loopback over NFS, AWS EBS, and Azure Disk. 5 For metrics, using file storage with the ReadWriteMany (RWX) access mode is unreliable. If you use file storage, do not configure the RWX access mode on any persistent volume claims (PVCs) that are configured for use with metrics. 6 For logging, review the recommended storage solution in Configuring persistent storage for the log store section. Using NFS storage as a persistent volume or through NAS, such as Gluster, can corrupt the data. Hence, NFS is not supported for Elasticsearch storage and LokiStack log store in OpenShift Container Platform Logging. You must use one persistent volume type per log store. 7 Object storage is not consumed through OpenShift Container Platform's PVs or PVCs. Apps must integrate with the object storage REST API. ROX 1 Yes 4 Yes 4 Yes RWX 2 No Yes Yes Registry Configurable Configurable Recommended Scaled registry Not configurable Configurable Recommended Metrics 3 Recommended Configurable 5 Not configurable Elasticsearch Logging Recommended Configurable 6 Not supported 6 Loki Logging Not configurable Not configurable Recommended Apps Recommended Recommended Not configurable 7 Note A scaled registry is an OpenShift image registry where two or more pod replicas are running. 10.2.1. Specific application storage recommendations Important Testing shows issues with using the NFS server on Red Hat Enterprise Linux (RHEL) as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations in the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. 10.2.1.1. 
Registry In a non-scaled/high-availability (HA) OpenShift image registry cluster deployment: The storage technology does not have to support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage followed by block storage. File storage is not recommended for OpenShift image registry cluster deployment with production workloads. 10.2.1.2. Scaled registry In a scaled/HA OpenShift image registry cluster deployment: The storage technology must support RWX access mode. The storage technology must ensure read-after-write consistency. The preferred storage technology is object storage. Red Hat OpenShift Data Foundation (ODF), Amazon Simple Storage Service (Amazon S3), Google Cloud Storage (GCS), Microsoft Azure Blob Storage, and OpenStack Swift are supported. Object storage should be S3 or Swift compliant. For non-cloud platforms, such as vSphere and bare metal installations, the only configurable technology is file storage. Block storage is not configurable. The use of Network File System (NFS) storage with OpenShift Container Platform is supported. However, the use of NFS storage with a scaled registry can cause known issues. For more information, see the Red Hat Knowledgebase solution, Is NFS supported for OpenShift cluster internal components in Production? . 10.2.1.3. Metrics In an OpenShift Container Platform hosted metrics cluster deployment: The preferred storage technology is block storage. Object storage is not configurable. Important It is not recommended to use file storage for a hosted metrics cluster deployment with production workloads. 10.2.1.4. Logging In an OpenShift Container Platform hosted logging cluster deployment: Loki Operator: The preferred storage technology is S3 compatible Object storage. Block storage is not configurable. OpenShift Elasticsearch Operator: The preferred storage technology is block storage. Object storage is not supported. Note As of logging version 5.4.3 the OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat will provide bug fixes and support for this feature during the current release lifecycle, but this feature will no longer receive enhancements and will be removed. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator. 10.2.1.5. Applications Application use cases vary from application to application, as described in the following examples: Storage technologies that support dynamic PV provisioning have low mount time latencies, and are not tied to nodes to support a healthy cluster. Application developers are responsible for knowing and understanding the storage requirements for their application, and how it works with the provided storage to ensure that issues do not occur when an application scales or interacts with the storage layer. 10.2.2. Other specific application storage recommendations Important It is not recommended to use RAID configurations on Write intensive workloads, such as etcd . If you are running etcd with a RAID configuration, you might be at risk of encountering performance issues with your workloads. Red Hat OpenStack Platform (RHOSP) Cinder: RHOSP Cinder tends to be adept in ROX access mode use cases. Databases: Databases (RDBMSs, NoSQL DBs, etc.) tend to perform best with dedicated block storage. The etcd database must have enough storage and adequate performance capacity to enable a large cluster. 
Information about monitoring and benchmarking tools to establish ample storage and a high-performance environment is described in Recommended etcd practices . 10.3. Deploy Red Hat OpenShift Data Foundation Red Hat OpenShift Data Foundation is a provider of agnostic persistent storage for OpenShift Container Platform supporting file, block, and object storage, either in-house or in hybrid clouds. As a Red Hat storage solution, Red Hat OpenShift Data Foundation is completely integrated with OpenShift Container Platform for deployment, management, and monitoring. For more information, see the Red Hat OpenShift Data Foundation documentation . Important OpenShift Data Foundation on top of Red Hat Hyperconverged Infrastructure (RHHI) for Virtualization, which uses hyperconverged nodes that host virtual machines installed with OpenShift Container Platform, is not a supported configuration. For more information about supported platforms, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Guide . If you are looking for Red Hat OpenShift Data Foundation information about... See the following Red Hat OpenShift Data Foundation documentation: What's new, known issues, notable bug fixes, and Technology Previews OpenShift Data Foundation 4.12 Release Notes Supported workloads, layouts, hardware and software requirements, sizing and scaling recommendations Planning your OpenShift Data Foundation 4.12 deployment Instructions on deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster Deploying OpenShift Data Foundation 4.12 in external mode Instructions on deploying OpenShift Data Foundation to local storage on bare metal infrastructure Deploying OpenShift Data Foundation 4.12 using bare metal infrastructure Instructions on deploying OpenShift Data Foundation on Red Hat OpenShift Container Platform VMware vSphere clusters Deploying OpenShift Data Foundation 4.12 on VMware vSphere Instructions on deploying OpenShift Data Foundation using Amazon Web Services for local or cloud storage Deploying OpenShift Data Foundation 4.12 using Amazon Web Services Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Google Cloud clusters Deploying and managing OpenShift Data Foundation 4.12 using Google Cloud Instructions on deploying and managing OpenShift Data Foundation on existing Red Hat OpenShift Container Platform Azure clusters Deploying and managing OpenShift Data Foundation 4.12 using Microsoft Azure Instructions on deploying OpenShift Data Foundation to use local storage on IBM Power(R) infrastructure Deploying OpenShift Data Foundation on IBM Power(R) Instructions on deploying OpenShift Data Foundation to use local storage on IBM Z(R) infrastructure Deploying OpenShift Data Foundation on IBM Z(R) infrastructure Allocating storage to core services and hosted applications in Red Hat OpenShift Data Foundation, including snapshot and clone Managing and allocating resources Managing storage resources across a hybrid cloud or multicloud environment using the Multicloud Object Gateway (NooBaa) Managing hybrid and multicloud resources Safely replacing storage devices for Red Hat OpenShift Data Foundation Replacing devices Safely replacing a node in a Red Hat OpenShift Data Foundation cluster Replacing nodes Scaling operations in Red Hat OpenShift Data Foundation Scaling storage Monitoring a Red Hat OpenShift Data Foundation 4.12 cluster Monitoring Red Hat OpenShift Data Foundation 4.12 Resolve issues 
encountered during operations Troubleshooting OpenShift Data Foundation 4.12 Migrating your OpenShift Container Platform cluster from version 3 to version 4 Migration | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/postinstallation_configuration/post-install-storage-configuration |
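As a hedged complement to the dynamic provisioning overview above, the oc client can be used to inspect and adjust storage classes. The class name standard-csi is an assumption for illustration, and the annotation shown is the standard Kubernetes default-class annotation rather than anything specific to this document.
# List the storage classes available for dynamic provisioning
oc get storageclass
# Mark an assumed class named standard-csi as the cluster default
oc patch storageclass standard-csi -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'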
Chapter 1. Generating an sos report for technical support | Chapter 1. Generating an sos report for technical support With the sos utility, you can collect configuration, diagnostic, and troubleshooting data, and provide those files to Red Hat Technical Support. 1.1. What the sos utility does An sos report is a common starting point for Red Hat technical support engineers when performing analysis of a service request for a RHEL system. The sos utility (also known as sosreport ) provides a standardized way to collect diagnostic information that Red Hat support engineers can reference throughout their investigation of issues reported in support cases. Using the sos utility helps to ensure that you are not repeatedly asked for data output. The sos utility allows you to collect various debugging information from one or more systems, optionally clean sensitive data, and upload it in the form of a report to Red Hat. More specifically, the three sos components do the following: sos report collects debugging information from one system. Note This program was originally named sosreport . Running sosreport still works, as sos report is called instead with the same arguments. sos collect allows you to run and collect individual sos reports from a specified set of nodes. sos clean obfuscates potentially sensitive information such as user names, host names, IP or MAC addresses, or other user-specified data. The information collected in a report contains configuration details, system information, and diagnostic information from a RHEL system, such as: The running kernel version. Loaded kernel modules. System and service configuration files. Diagnostic command output. A list of installed packages. The sos utility writes the data it collects to an archive named sosreport- <host_name> - <support_case_number> - <YYYY-MM-DD> - <unique_random_characters> .tar.xz . The utility stores the archive and its SHA-256 checksum in the /var/tmp/ directory: Additional resources sosreport(1) man page 1.2. Installing the sos package from the command line To use the sos utility, install the sos package. Prerequisites You need root privileges. Procedure Install the sos package. Verification steps Use the rpm utility to verify that the sos package is installed. 1.3. Generating an sos report from the command line Use the sos report command to gather an sos report from a RHEL server. Prerequisites You have installed the sos package. You need root privileges. Procedure Run the sos report command and follow the on-screen instructions. You can add the --upload option to transfer the sos report to Red Hat immediately after generating it. (Optional) If you have already opened a Technical Support case with Red Hat, enter the case number to embed it in the sos report file name, and it will be uploaded to that case if you specified the --upload option. If you do not have a case number, leave this field blank. Entering a case number is optional and does not affect the operation of the sos utility. Take note of the sos report file name displayed at the end of the console output. Note You can use the --batch option to generate an sos report without prompting for interactive input. You can also use the --clean option to obfuscate a just-collected sos report. Verification steps Verify that the sos utility created an archive in /var/tmp/ matching the description from the command output. Additional resources Methods for providing an sos report to Red Hat technical support . 1.4.
Generating and collecting sos reports on multiple systems concurrently You can use the sos utility to trigger the sos report command on multiple systems, wait for the reports to terminate, and collect all generated reports. Prerequisites You know the cluster type or list of nodes to run on. You have installed the sos package on all systems. You have SSH keys for the root account on all the systems, or you can provide the root password via the --password option. Procedure Run the sos collect command and follow the on-screen instructions. Note By default, sos collect tries to identify the type of cluster it runs on to automatically identify the nodes to collect reports from. You can set the cluster or node types manually with the --cluster or --nodes options. You can also use the --master option to point the sos utility at a remote node to determine the cluster type and the node lists. Thus, you do not have to be logged into one of the cluster nodes to collect sos reports; you can do it from your workstation. You can add the --upload option to transfer the sos report to Red Hat immediately after generating it. Any valid sos report option can be further supplied and will be passed to all sos report executions, such as the --batch and --clean options. Verification steps Verify that the sos collect command created an archive in the /var/tmp/ directory matching the description from the command output. Additional resources For examples of using the --batch and --clean options, see Generating an sos report from the command line . 1.5. Cleaning an sos report The sos utility offers a routine to obfuscate potentially sensitive data, such as user names, host names, IP or MAC addresses, or other user-specified keywords. The original sos report or sos collect tarball stays unchanged, and a new *-obfuscated.tar.xz file is generated and intended to be shared with a third party. Note You can append the cleaner functionality to the sos report or sos collect commands with the --clean option: [user@server1 ~]USD sudo sos report --clean Prerequisites You have generated an sos report or an sos collect tarball. (Optional) You have a list of specific keywords beyond the user names, host names, and other data you want to obfuscate. Procedure Run the sos clean command on either an sos report or sos collect tarball and follow the on-screen instructions. You can add the --keywords option to additionally clean a given list of keywords. You can add the --usernames option to obfuscate further sensitive user names. The automatic user name cleaning runs for users with a UID of 1000 and above that are reported through the lastlog file. This option is used for LDAP users that may not appear as an actual login, but may occur in certain log files. Verification steps Verify that the sos clean command created an obfuscated archive and an obfuscation mapping in the /var/tmp/ directory matching the description from the command output. Check the *-private_map file for the obfuscation mapping: Important Keep both the original unobfuscated archive and the *private_map files locally as Red Hat support might refer to the obfuscated terms that you will need to translate to the original values. 1.6. Generating an sos report and securing it with GPG passphrase encryption This procedure describes how to generate an sos report and secure it with symmetric GPG2 encryption based on a passphrase.
You might want to secure the contents of an sos report with a password if, for example, you need to transfer it over a public network to a third party. Note Ensure you have sufficient space when creating an encrypted sos report, as it temporarily uses double the disk space: The sos utility creates an unencrypted sos report. The utility encrypts the sos report as a new file. The utility then removes the unencrypted archive. Prerequisites You have installed the sos package. You need root privileges. Procedure Run the sos report command and specify a passphrase with the --encrypt-pass option. You can add the --upload option to transfer the sos report to Red Hat immediately after generating it. (Optional) If you have already opened a Technical Support case with Red Hat, enter the case number to embed it in the sos report file name, and it will be uploaded to that case if you specified the --upload option. If you do not have a case number, leave this field blank. Entering a case number is optional and does not affect the operation of the sos utility. Take note of the sos report file name displayed at the end of the console output. Verification steps Verify that the sos utility created an archive meeting the following requirements: File name starts with secured . File name ends with a .gpg extension. Located in the /var/tmp/ directory. Verify that you can decrypt the archive with the same passphrase you used to encrypt it. Use the gpg command to decrypt the archive. When prompted, enter the passphrase you used to encrypt the archive. Verify that the gpg utility produced an unencrypted archive with a .tar.gz file extension. Additional resources Methods for providing an sos report to Red Hat technical support . 1.7. Generating an sos report and securing it with GPG encryption based on a keypair This procedure describes how to generate an sos report and secure it with GPG2 encryption based on a keypair from a GPG keyring. You might want to secure the contents of an sos report with this type of encryption if, for example, you want to protect an sos report stored on a server. Note Ensure you have sufficient space when creating an encrypted sos report, as it temporarily uses double the disk space: The sos utility creates an unencrypted sos report. The utility encrypts the sos report as a new file. The utility then removes the unencrypted archive. Prerequisites You have installed the sos package. You need root privileges. You have created a GPG2 key. Procedure Run the sos report command and specify the user name that owns the GPG keyring with the --encrypt-key option. You can add the --upload option to transfer the sos report to Red Hat immediately after generating it. Note The user running the sos report command must be the same user that owns the GPG keyring used to encrypt and decrypt the sos report. If the user uses sudo to run the sos report command, the keyring must also be set up using sudo , or the user must have direct shell access to that account. (Optional) If you have already opened a Technical Support case with Red Hat, enter the case number to embed it in the sos report file name, and it will be uploaded to that case if you specified the --upload option. If you do not have a case number, leave this field blank. Entering a case number is optional and does not affect the operation of the sos utility. Take note of the sos report file name displayed at the end of the console output. 
Verification steps Verify that the sos utility created an archive meeting the following requirements: File name starts with secured . File name ends with a .gpg extension. Located in the /var/tmp/ directory. Verify that you can decrypt the archive with the same key you used to encrypt it. Use the gpg command to decrypt the archive. When prompted, enter the passphrase you used when creating the GPG key. Verify that the gpg utility produced an unencrypted archive with a .tar.gz file extension. Additional resources Methods for providing an sos report to Red Hat technical support . 1.8. Creating a GPG2 key The following procedure describes how to generate a GPG2 key to use with encryption utilities. Prerequisites You need root privileges. Procedure Install and configure the pinentry utility. Create a key-input file used for generating a GPG keypair with your preferred details. For example: (Optional) By default, GPG2 stores its keyring in the ~/.gnupg directory. To use a custom keyring location, set the GNUPGHOME environment variable to a directory that is only accessible by root. Generate a new GPG2 key based on the contents of the key-input file. Enter a passphrase to protect the GPG2 key. You use this passphrase to access the private key for decryption. Confirm the correct passphrase by entering it again. Verify that the new GPG2 key was created successfully. Verification Steps List the GPG keys on the server. Additional resources GNU Privacy Guard 1.9. Generating an sos report from the rescue environment If a Red Hat Enterprise Linux (RHEL) host does not boot properly, you can boot the host into a rescue environment to gather an sos report. Using the rescue environment, you can mount the target system under /mnt/sysimage , access its contents, and run the sos report command. Prerequisites If the host is a bare metal server, you need physical access to the machine. If the host is a virtual machine, you need access to the virtual machine's settings in the hypervisor. A RHEL installation source, such as an ISO image file, an installation DVD, a netboot CD, or a Preboot Execution Environment (PXE) configuration providing a RHEL installation tree. Procedure Boot the host from an installation source. In the boot menu for the installation media, select the Troubleshooting option. In the Troubleshooting menu, select the Rescue a Red Hat Enterprise Linux system option. At the Rescue menu, select 1 and press the Enter key to continue and mount the system under the /mnt/sysimage directory. Press the Enter key to obtain a shell when prompted. Use the chroot command to change the apparent root directory of the rescue session to the /mnt/sysimage directory. Optional: Your network will not be up in the initial Rescue Environment, so make sure you set it up first. For example, if the network requires static IP addresses, and you want to transfer the sos report over the network, configure the network: Identify the Ethernet device you want to use: Assign an IP address to the network interface, and set the default gateway. For example, if you wanted to add the IP address of 192.168.0.1 with a subnet of 255.255.255.0 , which is a CIDR of 24 , to device enp1s0 , enter: Add a nameserver entry to the /etc/resolv.conf file, for example: Run the sos report command and follow the on-screen instructions. You can add the --upload option to transfer the sos report to Red Hat immediately after generating it.
(Optional) If you have already opened a Technical Support case with Red Hat, enter the case number to embed it in the sos report file name, and it will be uploaded to that case if you specified the --upload option and your host is connected to the internet. If you do not have a case number, leave this field blank. Entering a case number is optional and does not affect the operation of the sos utility. Take note of the sos report file name displayed at the end of the console output. If your host does not have a connection to the internet, use a file transfer utility such as scp to transfer the sos report to another host on your network, then upload it to a Red Hat Technical Support case. Verification steps Verify that the sos utility created an archive in the /var/tmp/ directory. Additional resources How to generate sosreport from the rescue environment . Enabling networking in rescue environment without chrooting . To download an ISO of the RHEL installation DVD, visit the downloads section of the Red Hat Customer Portal. See Product Downloads . Methods for providing an sos report to Red Hat technical support . 1.10. Methods for providing an sos report to Red Hat technical support You can use the following methods to upload your sos report to Red Hat Technical Support: Upload with the sos report command Use the --upload option to transfer the sos report to Red Hat immediately after generating it. If you provide one of the following: a case ID when prompted, the --case-id option, or the --ticket-number option, the sos utility uploads the sos report to your case after you authenticate your device. If you do not provide a case number or you do not authenticate your device, the utility uploads the sos report to the Red Hat public SFTP site using an anonymous upload. Provide Red Hat Technical Support Engineers with the file name and the name of the auxiliary user used for the upload so they can access it. Generate and upload the sos report to Red Hat Technical Support: If you specify the case ID, the output is: If you do not specify the case ID, the output is: Upload files via the Red Hat Customer Portal Using your Red Hat user account, you can log into the Support Cases section of the Red Hat Customer Portal website and upload an sos report to a technical support case. To log in, visit Support Cases . Upload files using the Red Hat Support Tool With the Red Hat Support Tool, you can upload a file directly from the command line to a Red Hat technical support case. The case number is required. Additional resources For additional methods of providing Red Hat Technical Support with your sos report, such as SFTP and curl , see the Red Hat Knowledgebase article How to provide files to Red Hat Support (vmcore, rhev logcollector, sosreports, heap dumps, log files, and so on)
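Combining several of the options described above, a hedged non-interactive example that generates a report without prompts, obfuscates it, and uploads the result might look like the following; the case number is a placeholder.
# Generate, clean, and upload an sos report in one non-interactive run (01234567 is a placeholder case number)
sudo sos report --batch --clean --upload --case-id 01234567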
"ll /var/tmp/sosreport * total 18704 -rw-------. 1 root root 19136596 Jan 25 07:42 sosreport-server1-12345678-2022-01-25-tgictvu. tar.xz -rw-r--r--. 1 root root 65 Jan 25 07:42 sosreport-server1-12345678-2022-01-25-tgictvu.tar.xz. sha256",
"yum install sos",
"rpm -q sos sos-4.2-15.el8.noarch",
"[user@server1 ~]USD sudo sos report [sudo] password for user: sos report (version 4.2) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.qkn_b7by and may be provided to a Red Hat support representative. Press ENTER to continue, or CTRL-C to quit.",
"Please enter the case id that you are generating this report for []: <8-digit_case_number>",
"Finished running plugins Creating compressed archive Your sos report has been generated and saved in: /var/tmp/sosreport-server1-12345678-2022-04-17-qmtnqng.tar.xz Size 16.51MiB Owner root sha256 bf303917b689b13f0c059116d9ca55e341d5fadcd3f1473bef7299c4ad2a7f4f Please send this file to your support representative.",
"[user@server1 ~]USD sudo sos report --batch --case-id <8-digit_case_number>",
"[user@server1 ~]USD sudo sos report --clean",
"[user@server1 ~]USD sudo ls -l /var/tmp/sosreport * [sudo] password for user: -rw-------. 1 root root 17310544 Sep 17 19:11 /var/tmp/sosreport-server1-12345678-2022-04-17-qmtnqng.tar.xz",
"sos collect --nodes=sos-node1,sos-node2 -o process,apache --log-size=50 sos-collector (version 4.2) This utility is used to collect sosreports from multiple nodes simultaneously. It uses OpenSSH's ControlPersist feature to connect to nodes and run commands remotely. If your system installation of OpenSSH is older than 5.6, please upgrade. An archive of sosreport tarballs collected from the nodes will be generated in /var/tmp/sos.o4l55n1s and may be provided to an appropriate support representative. The generated archive may contain data considered sensitive and its content should be reviewed by the originating organization before being passed to any third party. No configuration changes will be made to the system running this utility or remote systems that it connects to. Press ENTER to continue, or CTRL-C to quit Please enter the case id you are collecting reports for: <8-digit_case_number> sos-collector ASSUMES that SSH keys are installed on all nodes unless the --password option is provided. The following is a list of nodes to collect from: primary-rhel8 sos-node1 sos-node2 Press ENTER to continue with these nodes, or press CTRL-C to quit Connecting to nodes Beginning collection of sosreports from 3 nodes, collecting a maximum of 4 concurrently primary-rhel8 : Generating sosreport sos-node1 : Generating sosreport sos-node2 : Generating sosreport primary-rhel8 : Retrieving sosreport sos-node1 : Retrieving sosreport primary-rhel8 : Successfully collected sosreport sos-node1 : Successfully collected sosreport sos-node2 : Retrieving sosreport sos-node2 : Successfully collected sosreport The following archive has been created. Please provide it to your support team. /var/tmp/sos-collector-2022-04-15-pafsr.tar.xz",
"ls -l /var/tmp/sos-collector * -rw-------. 1 root root 160492 May 15 13:35 /var/tmp/sos-collector-2022-05-15-pafsr.tar.xz",
"[user@server1 ~]USD sudo sos clean /var/tmp/sos-collector-2022-05-15-pafsr.tar.xz [sudo] password for user: sos clean (version 4.2) This command will attempt to obfuscate information that is generally considered to be potentially sensitive. Such information includes IP addresses, MAC addresses, domain names, and any user-provided keywords. Note that this utility provides a best-effort approach to data obfuscation, but it does not guarantee that such obfuscation provides complete coverage of all such data in the archive, or that any obfuscation is provided to data that does not fit the description above. Users should review any resulting data and/or archives generated or processed by this utility for remaining sensitive content before being passed to a third party. Press ENTER to continue, or CTRL-C to quit. Found 4 total reports to obfuscate, processing up to 4 concurrently sosreport-primary-rhel8-2022-05-15-nchbdmd : Extracting sosreport-sos-node1-2022-05-15-wmlomgu : Extracting sosreport-sos-node2-2022-05-15-obsudzc : Extracting sos-collector-2022-05-15-pafsr : Beginning obfuscation sosreport-sos-node1-2022-05-15-wmlomgu : Beginning obfuscation sos-collector-2022-05-15-pafsr : Obfuscation completed sosreport-primary-rhel8-2022-05-15-nchbdmd : Beginning obfuscation sosreport-sos-node2-2022-05-15-obsudzc : Beginning obfuscation sosreport-primary-rhel8-2022-05-15-nchbdmd : Re-compressing sosreport-sos-node2-2022-05-15-obsudzc : Re-compressing sosreport-sos-node1-2022-05-15-wmlomgu : Re-compressing sosreport-primary-rhel8-2022-05-15-nchbdmd : Obfuscation completed sosreport-sos-node2-2022-05-15-obsudzc : Obfuscation completed sosreport-sos-node1-2022-05-15-wmlomgu : Obfuscation completed Successfully obfuscated 4 report(s) A mapping of obfuscated elements is available at /var/tmp/sos-collector-2022-05-15-pafsr-private_map The obfuscated archive is available at /var/tmp/sos-collector-2022-05-15-pafsr-obfuscated.tar.xz Size 157.10KiB Owner root Please send the obfuscated archive to your support representative and keep the mapping file private",
"[user@server1 ~]USD sudo ls -l /var/tmp/sos-collector-2022-05-15-pafsr-private_map /var/tmp/sos-collector-2022-05-15-pafsr-obfuscated.tar.xz [sudo] password for user: -rw-------. 1 root root 160868 May 15 16:10 /var/tmp/sos-collector-2022-05-15-pafsr-obfuscated.tar.xz -rw-------. 1 root root 96622 May 15 16:10 /var/tmp/sos-collector-2022-05-15-pafsr-private_map",
"[user@server1 ~]USD sudo cat /var/tmp/sos-collector-2022-05-15-pafsr-private_map [sudo] password for user: { \"hostname_map\": { \"pmoravec-rhel8\": \"host0\" }, \"ip_map\": { \"10.44.128.0/22\": \"100.0.0.0/22\", .. \"username_map\": { \"foobaruser\": \"obfuscateduser0\", \"jsmith\": \"obfuscateduser1\", \"johndoe\": \"obfuscateduser2\" } }",
"[user@server1 ~]USD sudo sos report --encrypt-pass my-passphrase [sudo] password for user: sosreport (version 4.2) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.6lck0myd and may be provided to a Red Hat support representative. Press ENTER to continue, or CTRL-C to quit.",
"Please enter the case id that you are generating this report for []: <8-digit_case_number>",
"Finished running plugins Creating compressed archive Your sosreport has been generated and saved in: /var/tmp/secured-sosreport-server1-12345678-2022-01-24-ueqijfm.tar.xz.gpg Size 17.53MiB Owner root sha256 bf303917b689b13f0c059116d9ca55e341d5fadcd3f1473bef7299c4ad2a7f4f Please send this file to your support representative.",
"[user@server1 ~]USD sudo ls -l /var/tmp/sosreport * [sudo] password for user: -rw-------. 1 root root 18381537 Jan 24 17:55 /var/tmp/secured-sosreport-server1-12345678-2022-01-24-ueqijfm.tar.xz.gpg",
"[user@server1 ~]USD sudo gpg --output decrypted-sosreport.tar.gz --decrypt /var/tmp/secured-sosreport-server1-12345678-2022-01-24-ueqijfm.tar.xz.gpg",
"┌──────────────────────────────────────────────────────┐ │ Enter passphrase │ │ │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘",
"[user@server1 ~]USD sudo ls -l decrypted-sosreport.tar.gz [sudo] password for user: -rw-r--r--. 1 root root 18381537 Jan 24 17:59 decrypted-sosreport. tar.gz",
"[user@server1 ~]USD sudo sos report --encrypt-key root [sudo] password for user: sosreport (version 4.2) This command will collect diagnostic and configuration information from this Red Hat Enterprise Linux system and installed applications. An archive containing the collected information will be generated in /var/tmp/sos.6ucjclgf and may be provided to a Red Hat support representative. Press ENTER to continue, or CTRL-C to quit.",
"Please enter the case id that you are generating this report for []: <8-digit_case_number>",
"Finished running plugins Creating compressed archive Your sosreport has been generated and saved in: /var/tmp/secured-sosreport-server1-23456789-2022-02-27-zhdqhdi.tar.xz.gpg Size 15.44MiB Owner root sha256 bf303917b689b13f0c059116d9ca55e341d5fadcd3f1473bef7299c4ad2a7f4f Please send this file to your support representative.",
"[user@server1 ~]USD sudo ls -l /var/tmp/sosreport * [sudo] password for user: -rw-------. 1 root root 16190013 Jan 24 17:55 /var/tmp/secured-sosreport-server1-23456789-2022-01-27-zhdqhdi.tar.xz.gpg",
"[user@server1 ~]USD sudo gpg --output decrypted-sosreport.tar.gz --decrypt /var/tmp/secured-sosreport-server1-23456789-2022-01-27-zhdqhdi.tar.xz.gpg",
"┌────────────────────────────────────────────────────────────────┐ │ Please enter the passphrase to unlock the OpenPGP secret key: │ │ \"GPG User (first key) <[email protected]>\" │ │ 2048-bit RSA key, ID BF28FFA302EF4557, │ │ created 2020-01-13. │ │ │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └────────────────────────────────────────────────────────────────┘",
"[user@server1 ~]USD sudo ll decrypted-sosreport.tar.gz [sudo] password for user: -rw-r--r--. 1 root root 16190013 Jan 27 17:47 decrypted-sosreport. tar.gz",
"yum install pinentry mkdir ~/.gnupg -m 700 echo \"pinentry-program /usr/bin/pinentry-curses\" >> ~/.gnupg/gpg-agent.conf",
"cat >key-input <<EOF %echo Generating a standard key Key-Type: RSA Key-Length: 2048 Name-Real: GPG User Name-Comment: first key Name-Email: [email protected] Expire-Date: 0 %commit %echo Finished creating standard key EOF",
"export GNUPGHOME= /root/backup mkdir -p USDGNUPGHOME -m 700",
"gpg2 --batch --gen-key key-input",
"┌──────────────────────────────────────────────────────┐ │ Please enter the passphrase to │ │ protect your new key │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘",
"┌──────────────────────────────────────────────────────┐ │ Please re-enter this passphrase │ │ │ │ Passphrase: <passphrase> │ │ │ │ <OK> <Cancel> │ └──────────────────────────────────────────────────────┘",
"gpg: keybox '/root/backup/pubring.kbx' created gpg: Generating a standard key gpg: /root/backup/trustdb.gpg: trustdb created gpg: key BF28FFA302EF4557 marked as ultimately trusted gpg: directory '/root/backup/openpgp-revocs.d' created gpg: revocation certificate stored as '/root/backup/openpgp-revocs.d/8F6FCF10C80359D5A05AED67BF28FFA302EF4557.rev' gpg: Finished creating standard key",
"gpg2 --list-secret-keys gpg: checking the trustdb gpg: marginals needed: 3 completes needed: 1 trust model: pgp gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u / root /backup/pubring.kbx ------------------------ sec rsa2048 2020-01-13 [SCEA] 8F6FCF10C80359D5A05AED67BF28FFA302EF4557 uid [ultimate] GPG User (first key) <[email protected]>",
"ip link show ... 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000 link/ether 52:54:00:74:79:56 brd ff:ff:ff:ff:ff:ff",
"ip address add < 192.168.0.1/24 > dev < enp1s0 > ip route add default via < 192.168.0.254 >",
"nameserver < 192.168.0.5 >",
"[user@server1 ~]USD sudo sos report --upload sosreport (version 4.7.0) Optionally, please enter the case id that you are generating this report for []: Your sosreport has been generated and saved in: /var/tmp/sosreport-localhost-2024-03-19-xavvwkw.tar.xz",
"Attempting upload to Red Hat Customer Portal Please visit the following URL to authenticate this device: https://sso.redhat.com/device?user_code=VGEL-PYIM Device authorized correctly. Uploading file to Red Hat Customer Portal Uploaded archive successfully",
"Attempting upload to Red Hat Secure FTP Please visit the following URL to authenticate this device: https://sso.redhat.com/device?user_code=VGEL-PYIM Device authorized correctly. Uploading file to Red Hat Secure FTP Uploaded archive successfully",
"[user@server1 ~]USD redhat-support-tool addattachment -c <8-digit_case_number> </var/tmp/sosreport_filename>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/generating_sos_reports_for_technical_support/generating-an-sos-report-for-technical-support_generating-sos-reports-for-technical-support |
Preface | Preface Preface | null | https://docs.redhat.com/en/documentation/workload_availability_for_red_hat_openshift/24.3/html/release_notes/pr01 |
10.5.64. NameVirtualHost | 10.5.64. NameVirtualHost The NameVirtualHost directive associates an IP address and port number, if necessary, for any name-based virtual hosts. Name-based virtual hosting allows one Apache HTTP Server to serve different domains without using multiple IP addresses. Note Name-based virtual hosts only work with non-secure HTTP connections. If using virtual hosts with a secure server, use IP address-based virtual hosts instead. To enable name-based virtual hosting, uncomment the NameVirtualHost configuration directive and add the correct IP address. Then add additional VirtualHost containers for each virtual host as is necessary for your configuration. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-namevirtualhost |
Chapter 24. OperatorPKI [network.operator.openshift.io/v1] | Chapter 24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. More specifically, given an OperatorPKI with <name>, the CNO will manage: - A Secret called <name>-ca with two data keys: - tls.key - the private key - tls.crt - the CA certificate - A ConfigMap called <name>-ca with a single data key: - cabundle.crt - the CA certificate(s) - A Secret called <name>-cert with two data keys: - tls.key - the private key - tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object Required spec 24.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object OperatorPKISpec is the PKI configuration. status object OperatorPKIStatus is not implemented. 24.1.1. .spec Description OperatorPKISpec is the PKI configuration. Type object Required targetCert Property Type Description targetCert object targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled 24.1.2. .spec.targetCert Description targetCert configures the certificate signed by the CA. It will have both ClientAuth and ServerAuth enabled Type object Required commonName Property Type Description commonName string commonName is the value in the certificate's CN 24.1.3. .status Description OperatorPKIStatus is not implemented. Type object 24.2. API endpoints The following API endpoints are available: /apis/network.operator.openshift.io/v1/operatorpkis GET : list objects of kind OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis DELETE : delete collection of OperatorPKI GET : list objects of kind OperatorPKI POST : create an OperatorPKI /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} DELETE : delete an OperatorPKI GET : read the specified OperatorPKI PATCH : partially update the specified OperatorPKI PUT : replace the specified OperatorPKI 24.2.1. /apis/network.operator.openshift.io/v1/operatorpkis Table 24.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind OperatorPKI Table 24.2. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty 24.2.2. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis Table 24.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 24.4. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of OperatorPKI Table 24.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.6. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind OperatorPKI Table 24.7. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. 
fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 24.8. HTTP responses HTTP code Reponse body 200 - OK OperatorPKIList schema 401 - Unauthorized Empty HTTP method POST Description create an OperatorPKI Table 24.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.10. Body parameters Parameter Type Description body OperatorPKI schema Table 24.11. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 202 - Accepted OperatorPKI schema 401 - Unauthorized Empty 24.2.3. /apis/network.operator.openshift.io/v1/namespaces/{namespace}/operatorpkis/{name} Table 24.12. Global path parameters Parameter Type Description name string name of the OperatorPKI namespace string object name and auth scope, such as for teams and projects Table 24.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete an OperatorPKI Table 24.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 24.15. Body parameters Parameter Type Description body DeleteOptions schema Table 24.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified OperatorPKI Table 24.17. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 24.18. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified OperatorPKI Table 24.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.20. Body parameters Parameter Type Description body Patch schema Table 24.21. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified OperatorPKI Table 24.22. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 24.23. Body parameters Parameter Type Description body OperatorPKI schema Table 24.24. HTTP responses HTTP code Reponse body 200 - OK OperatorPKI schema 201 - Created OperatorPKI schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/operatorpki-network-operator-openshift-io-v1 |
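For illustration only (the name, namespace, and commonName below are placeholders, and this sketch is not part of the reference above; the resource is documented as internal to the network operator, so this only demonstrates the schema), a minimal OperatorPKI object satisfying the required spec.targetCert.commonName field could be created with the oc client:

oc apply -f - <<'EOF'
apiVersion: network.operator.openshift.io/v1
kind: OperatorPKI
metadata:
  name: example-pki            # placeholder name
  namespace: example-namespace # placeholder namespace
spec:
  targetCert:
    commonName: example-cert   # used as the CN of the certificate signed by the CA
EOF

Following the description above, the operator would then maintain Secrets named example-pki-ca and example-pki-cert and a ConfigMap named example-pki-ca in that namespace.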
Chapter 3. Installing the Cluster Observability Operator | Chapter 3. Installing the Cluster Observability Operator As a cluster administrator, you can install or remove the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. OperatorHub is a user interface that works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. 3.1. Installing the Cluster Observability Operator in the web console Install the Cluster Observability Operator (COO) from OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Type cluster observability operator in the Filter by keyword box. Click Cluster Observability Operator in the list of results. Read the information about the Operator, and configure the following installation settings: Update channel stable Version 1.0.0 or later Installation mode All namespaces on the cluster (default) Installed Namespace Operator recommended Namespace: openshift-cluster-observability-operator Select Enable Operator recommended cluster monitoring on this Namespace Update approval Automatic Optional: You can change the installation settings to suit your requirements. For example, you can select to subscribe to a different update channel, to install an older released version of the Operator, or to require manual approval for updates to new versions of the Operator. Click Install . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry appears in the list. Additional resources Adding Operators to a cluster 3.2. Uninstalling the Cluster Observability Operator using the web console If you have installed the Cluster Observability Operator (COO) by using OperatorHub, you can uninstall it in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin cluster role. You have logged in to the OpenShift Container Platform web console. Procedure Go to Operators Installed Operators . Locate the Cluster Observability Operator entry in the list. Click for this entry and select Uninstall Operator . Verification Go to Operators Installed Operators , and verify that the Cluster Observability Operator entry no longer appears in the list. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/cluster_observability_operator/installing-cluster-observability-operators |
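Beyond the web-console verification above, a quick command-line spot check is possible. This is a hedged sketch that assumes the oc client and cluster-admin access and reuses the Operator-recommended namespace named in the procedure:

# The ClusterServiceVersion should eventually report the Succeeded phase
oc get subscriptions,csv -n openshift-cluster-observability-operator

# Operator pods in the recommended namespace should be Running
oc get pods -n openshift-cluster-observability-operator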
Chapter 30. Red Hat Enterprise Linux Atomic Host 7.3.6 | Chapter 30. Red Hat Enterprise Linux Atomic Host 7.3.6 30.1. Atomic Host OStree update : New Tree Version: 7.3.6 (hash: e073a47baa605a99632904e4e05692064302afd8769a15290d8ebe8dbfd3c81b) Changes since Tree Version 7.3.5-1 (hash: c04cab425084ce81d66d1717f464e292bc5a908a86802faf0da7dd22d74d3727) Updated packages : atomic-devmode-0.3.7-1.el7 cockpit-ostree-141-1.el7 librhsm-0.0.1-2.el7 30.2. Extras Updated packages : atomic-1.17.2-9.git2760e30.el7 cockpit-141-1.el7 container-selinux-2.19-2.1.el7 docker-1.12.6-32.git88a4867.el7 docker-latest-1.13.1-13.gitb303bf6.el7 etcd-3.1.9-1.el7 flannel-0.7.1-1.el7 kubernetes-1.5.2-0.7.git269f928.el7 * oci-systemd-hook-0.1.7-4.gite533efa.el7 ostree-2017.5-3.el7 skopeo-0.1.20-2.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 30.2.1. Container Images Updated : Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic flannel Container Image (rhel7/flannel) Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) Red Hat Enterprise Linux Atomic Kubernetes apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes controller-manager Container (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic open-vm-tools Container Image (rhel7/open-vm-tools) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) Red Hat Enterprise Linux 7.3 Container Image (rhel7.3, rhel7, rhel7/rhel, rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic Image (rhel-atomic, rhel7-atomic, rhel7/rhel-atomic) Red Hat Enterprise Linux 7 Init Container Image (rhel7/rhel7-init) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) 30.3. New Features Red Hat Enterprise Linux 6 Init Container Image is now available The new Red Hat Enterprise Linux 6 Init Image allows creating containerized services based on RHEL6 init scripts. This container image enables running one or more services in a RHEL6 user space using init scripts. For details on using rhev6-init , see Using the Atomic RHEL6 Init Container Image in the Managing Containers Guide. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_3_6 |
Chapter 12. Datasource Management | Chapter 12. Datasource Management 12.1. About JBoss EAP Datasources About JDBC The JDBC API is the standard that defines how databases are accessed by Java applications. An application configures a datasource that references a JDBC driver. Application code can then be written against the driver, rather than the database. The driver converts the code to the database language. This means that if the correct driver is installed, an application can be used with any supported database. For more information, see the JDBC specification Supported Databases See JBoss EAP supported configurations for the list of JDBC-compliant databases supported by JBoss EAP 7. Datasource Types The two general types of resources are referred to as datasources and XA datasources . Non-XA datasources Used for applications that do not use transactions, or applications that use transactions with a single database. XA datasources Used by applications that use multiple databases or other XA resources as part of one XA transaction. XA datasources introduce additional overhead. You specify which type of datasource to use when creating the datasource using the JBoss EAP management interfaces. The ExampleDS datasource JBoss EAP ships with an example datasource configuration, ExampleDS , which is provided to demonstrate how datasources are defined. This datasource uses an H2 database, which is a lightweight, relational database management system that provides developers with the ability to quickly build applications. Warning The ExampleDS datasource and the H2 database should not be used in a production environment. This is a very small, self-contained datasource that supports all of the standards needed for testing and building applications, but is not robust or scalable enough for production use. 12.2. JDBC Drivers Before defining datasources in JBoss EAP for your applications to use, you must first install the appropriate JDBC driver. 12.2.1. Installing a JDBC Driver as a Core Module To install a JDBC driver as a core module, you must first add the JDBC driver as a core module and then register the JDBC driver in the datasources subsystem. 12.2.1.1. Add the JDBC Driver as a Core Module JDBC drivers can be installed as a core module using the management CLI using the following steps. Download the JDBC driver. Download the appropriate JDBC driver from your database vendor. See JDBC Driver Download Locations for standard download locations for JDBC drivers of common databases. Make sure to extract the archive if the JDBC driver JAR file is contained within a ZIP or TAR archive. Start the JBoss EAP server. Launch the management CLI. Use the module add management CLI command to add the new core module. Example The following command adds a MySQL JDBC driver module: Example To launch the management CLI and add a new core module in a single step, use the following command: Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. 
These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Execute module --help for more details on using this command to add and remove modules. You must register it as a JDBC driver for it to be referenced by application datasources. 12.2.1.2. Register the JDBC Driver After the driver has been installed as a core module , you must register it as a JDBC driver using the following management CLI command. When running in a managed domain, be sure to precede this command with /profile= PROFILE_NAME . Note The driver-class-name parameter is only required if the JDBC driver jar defines more than one class in its /META-INF/services/java.sql.Driver file. For example, the /META-INF/services/java.sql.Driver file in the MySQL 5.1.36 JDBC driver JAR defines two classes: com.mysql.cj.jdbc.Driver com.mysql.fabric.jdbc.FabricMySQLDriver For this case, you would pass in driver-class-name=com.mysql.cj.jdbc.Driver . For example, the following command registers a MySQL JDBC driver. The JDBC driver is now available to be referenced by application datasources. 12.2.2. Installing a JDBC Driver as a JAR Deployment JDBC drivers can be installed as a JAR deployment using either the management CLI or the management console. As long as the driver is JDBC 4-compliant, it will automatically be recognized and installed as a JDBC driver upon deployment. The following steps describe how to install a JDBC driver using the management CLI. Note The recommended installation method for JDBC drivers is to install them as a core module . Download the JDBC driver. Download the appropriate JDBC driver from your database vendor. See JDBC Driver Download Locations for standard download locations for JDBC drivers of common databases. Make sure to extract the archive if the JDBC driver JAR file is contained within a ZIP or TAR archive. If the JDBC driver is not JDBC 4-compliant, see the steps to Update a JDBC Driver JAR to be JDBC 4-Compliant . Deploy the JAR to JBoss EAP. Note In a managed domain, specify the appropriate server groups. For example, the following command deploys a MySQL JDBC driver. A message will be displayed in the JBoss EAP server log that displays the deployed driver name, which will be used when defining datasources. The JDBC driver is now available to be referenced by application datasources. Update a JDBC Driver JAR to be JDBC 4-Compliant If the JDBC driver JAR is not JDBC 4-compliant, it can be made deployable using the following steps. Create an empty temporary directory. Create a META-INF subdirectory. Create a META-INF/services subdirectory. Create a META-INF/services/java.sql.Driver file and add one line to indicate the fully-qualified class name of the JDBC driver. For example, the below line would be added for a MySQL JDBC driver. Use the JAR command-line tool to add this new file to the JAR. 12.2.3. JDBC Driver Download Locations The following table gives the standard download locations for JDBC drivers of common databases used with JBoss EAP. Note These links point to third-party websites which are not controlled or actively monitored by Red Hat. For the most up-to-date drivers for your database, check your database vendor's documentation and website. Table 12.1. 
JDBC Driver Download Locations Vendor Download Location MySQL http://www.mysql.com/products/connector/ PostgreSQL http://jdbc.postgresql.org/ Oracle http://www.oracle.com/technetwork/database/features/jdbc/index-091264.html IBM http://www-01.ibm.com/support/docview.wss?uid=swg21363866 Sybase The jConnect JDBC driver is part of the SDK for SAP ASE installation. Currently there is not a separate download site for this driver by itself. Microsoft http://msdn.microsoft.com/data/jdbc/ 12.2.4. Access Vendor-Specific Classes In some cases, it is necessary for an application to use vendor-specific functionality that is not part of the JDBC API. In these cases, you can access vendor-specific APIs by declaring a dependency in that application. Warning This is advanced usage. Only applications that need functionality not found in the JDBC API should implement this procedure. Important This process is required when using the reauthentication mechanism, and accessing vendor-specific classes. You can define a dependency for an application using either the MANIFEST.MF file or a jboss-deployment-structure.xml file. If you have not yet done so, install a JDBC driver as a core module . Using the MANIFEST.MF File Edit the application's META-INF/MANIFEST.MF file. Add the Dependencies line and specify the module name. For example, the below line declares the com.mysql module as a dependency. Using a jboss-deployment-structure.xml File Create a file called jboss-deployment-structure.xml in the META-INF/ or WEB-INF/ folder of the application. Specify the module using the dependencies element. For example, the following example jboss-deployment-structure.xml file declares the com.mysql module as a dependency. <jboss-deployment-structure> <deployment> <dependencies> <module name="com.mysql"/> </dependencies> </deployment> </jboss-deployment-structure> The example code below accesses the MySQL API. import java.sql.Connection; ... Connection c = ds.getConnection(); if (c.isWrapperFor(com.mysql.jdbc.Connection.class)) { com.mysql.jdbc.Connection mc = c.unwrap(com.mysql.jdbc.Connection.class); } Important Follow the vendor-specific API guidelines closely, as the connection is being controlled by the IronJacamar container. 12.3. Creating Datasources Datasources can be created using the management console or the management CLI. JBoss EAP 7 allows you to use expressions in datasource attribute values, such as the enabled attribute. See the Property Replacement section for details on using expressions in the configuration. 12.3.1. Create a Non-XA Datasource You can create non-XA datasources using either the management CLI or the management console. Defining a Non-XA Datasource Using the Management Console Navigate to datasources in standalone or domain mode. Use the following navigation in the standalone mode: Configuration Subsystems Datasources & Drivers Datasources Use the following navigation in the domain mode: Configuration Profiles full Datasources & Drivers Datasources Click the Add ( + ) button and choose Add Datasource . It opens the Add Datasource wizard where you can choose the datasource type and click . This creates a template for your database. The following pages of the wizard are prefilled with values specific for the selected datasource. This makes the datasource creation process easy. You can test your connection on the Test Connection page before finishing the datasource creation process. Review the details and click Finish to create the datasource. 
Defining a Non-XA Datasource Using the Management CLI Non-XA datasources can be defined using the data-source add management CLI command. If you have not yet done so, install and register the appropriate JDBC Driver as a Core Module . Define the datasource using the data-source add command, specifying the appropriate argument values. Note In a managed domain, you must specify the --profile= PROFILE_NAME argument. See the Datasource Parameters section below for tips on these parameter values. For detailed examples, see the Example Datasource Configurations for the supported databases. Datasource Parameters jndi-name The JNDI name for the datasource must start with java:/ or java:jboss/ . For example, java:jboss/datasources/ExampleDS . driver-name The driver name value depends on whether the JDBC driver was installed as a core module or a JAR deployment. For a core module, the driver name value will be the name given to the JDBC driver when it was registered. For a JAR deployment, the driver name is the name of the JAR if there is only one class listed in its /META-INF/services/java.sql.Driver file. If there are multiple classes listed, then the value is JAR_NAME + "_" + DRIVER_CLASS_NAME + "_" + MAJOR_VERSION + "_" + MINOR_VERSION (for example mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1 ). You can also see the driver name listed in the JBoss EAP server log when the JDBC JAR is deployed. connection-url For details on the connection URL formats for the supported databases, see the list of Datasource Connection URLs . For a complete listing of all available datasource attributes, see the Datasource Attributes section. user-name The user name to use when creating a new datasource connection. password The password to use when creating a new datasource connection. 12.3.2. Create an XA Datasource You can create XA datasources using either the management CLI or the management console. Defining an XA Datasource Using the Management Console Navigate to datasources in standalone or domain mode. Use the following navigation in the standalone mode: Configuration Subsystems Datasources & Drivers Datasources Use the following navigation in the domain mode: Configuration Profiles full Datasources & Drivers Datasources Click the Add ( + ) button and choose Add XA Datasource . It opens the Add XA Datasource wizard where you can choose the datasource type and click . This creates a template for your database. The following pages of the wizard are prefilled with values specific for the selected datasource. This makes the datasource creation process easy. You can test your connection on the Test Connection page before finishing the datasource creation process. Review the details and click Finish to create the datasource. Defining an XA Datasource Using the Management CLI XA datasources can be defined using the xa-data-source add management CLI command. Note In a managed domain, you will need to specify the profile to use. Depending on the format of the management CLI command, you will either precede the command with /profile= PROFILE_NAME or pass in the --profile= PROFILE_NAME argument. If you have not yet done so, install and register the appropriate JDBC Driver as a Core Module . Define the datasource using the xa-data-source add command, specifying the appropriate argument values. See the Datasource Parameters section below for tips on these parameter values. Set XA datasource properties . 
At least one XA datasource property is required when defining an XA datasource or you will receive an error when adding the datasource in the step. Any properties that were not set when defining the XA datasource can be set individually afterward. Set the server name. Set the database name. For detailed examples, see the Example Datasource Configurations for the supported databases. Datasource Parameters jndi-name The JNDI name for the datasource must start with java:/ or java:jboss/ . For example, java:jboss/datasources/ExampleDS . driver-name The driver name value depends on whether the JDBC driver was installed as a core module or a JAR deployment. For a core module, the driver name value will be the name given to the JDBC driver when it was registered. For a JAR deployment, the driver name is the name of the JAR if there is only one class listed in its /META-INF/services/java.sql.Driver file. If there are multiple classes listed, then the value is JAR_NAME + "_" + DRIVER_CLASS_NAME + "_" + MAJOR_VERSION + "_" + MINOR_VERSION , for example, mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1 . You can also see the driver name listed in the JBoss EAP server log when the JDBC JAR is deployed. xa-datasource-class Specify the XA datasource class for the JDBC driver's implementation of the javax.sql.XADataSource class. xa-datasource-properties At least one XA datasource property is required when defining an XA datasource or you will receive an error when attempting to add it. Additional properties can also be added to the XA datasource after it has been defined. For a complete listing of all available datasource attributes, see the Datasource Attributes section. 12.4. Modifying Datasources Datasources settings can be configured using the management console or the management CLI. JBoss EAP 7 allows you to use expressions in datasource attribute values, such as the enabled attribute. See the Property Replacement section for details on using expressions in the configuration. 12.4.1. Modify a Non-XA Datasource Non-XA datasource settings can be updated using the data-source management CLI command. You can also update datasource attributes from the management console in standalone or domain mode: In standalone mode, navigate to Configuration Subsystems Datasources & Drivers Datasources . In domain mode, navigate to Configuration Profiles full Datasources & Drivers Datasources . Note Non-XA datasources can be integrated with Jakarta Transactions transactions. To integrate the datasource with Jakarta Transactions, ensure that the jta parameter is set to true . Example of updating settings for a datasource Settings for a datasource can be updated by using the following management CLI command. Note In a managed domain, you must specify the --profile= PROFILE_NAME argument. A server reload may be required for the changes to take effect. 12.4.2. Modify an XA Datasource XA datasource settings can be updated using the xa-data-source management CLI command. You can also update datasource attributes from the management console in standalone or domain mode: In standalone mode, navigate to Configuration Subsystems Datasources & Drivers Datasources . In domain mode, navigate to Configuration Profiles full Datasources & Drivers Datasources . Example of updating an XA datasource Settings for an XA datasource can be updated by using the following management CLI command. Note In a managed domain, you must specify the --profile= PROFILE_NAME argument. 
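The command text itself is not reproduced in this extract. As a hedged illustration only, with XA_DATASOURCE_NAME and the attribute value as placeholders and the low-level resource path shown as a general pattern rather than the guide's exact example, an XA datasource attribute can be written from the management CLI like this:

/subsystem=datasources/xa-data-source=XA_DATASOURCE_NAME:write-attribute(name=user-name,value=admin)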
Example of adding an XA datasource property An XA datasource property can be added using the following management CLI command. Note In a managed domain, you must precede this command with /profile= PROFILE_NAME . A server reload may be required for the changes to take effect. 12.5. Removing Datasources Datasources can be removed using the management console or the management CLI. 12.5.1. Remove a Non-XA Datasource Non-XA datasources can be removed using the data-source remove management CLI command. You can also use the management console in standalone or domain mode to remove datasources: In standalone mode, navigate to Configuration Subsystems Datasources & Drivers Datasources . In domain mode, navigate to Configuration Profiles full Datasources & Drivers Datasources . Use the following command to remove a non-XA datasource: Note In a managed domain, you must specify the --profile= PROFILE_NAME argument. The server will require a reload after the datasource is removed. 12.5.2. Remove an XA Datasource XA datasources can be removed using the xa-data-source remove management CLI command. You can also use the management console in standalone or domain mode to remove datasources: In standalone mode, navigate to Configuration Subsystems Datasources & Drivers Datasources . In domain mode, navigate to Configuration Profiles full Datasources & Drivers Datasources . Use the following command to remove an XA datasource: Note In a managed domain, you must specify the --profile= PROFILE_NAME argument. The server will require a reload after the XA datasource is removed. 12.6. Testing datasource connections You can use the management CLI or management console to test a datasource connection to verify that its settings are correct. Test a datasource connection using the management CLI The following management CLI command can be used to test a datasource's connection. Note In a managed domain, you must precede this command with /host= HOST_NAME /server= SERVER_NAME . If you are testing an XA datasource, replace data-source= DATASOURCE_NAME with xa-data-source= XA_DATASOURCE_NAME . Test a datasource connection using the management console When using the Add Datasource wizard in the management console, you have the opportunity to test the connection before creating the datasource. On the Test Connection screen of the wizard, click the Test Connection button. When a datasource is added, you can test the connection by using the following procedure: Navigate to Configuration Subsystems Datasources & Drivers Datasources in standalone mode or Configuration Profiles full Datasources & Drivers Datasources in domain mode. Select the datasource. Select Test Connection from the drop-down list. 12.7. Flushing Datasource Connections You can flush datasource connections using the following management CLI commands. Note In a managed domain, you must precede these commands with /host= HOST_NAME /server= SERVER_NAME . Flush all connections in the pool. Gracefully flush all connections in the pool. The server will wait until connections become idle before flushing them. Flush all idle connections in the pool. Flush all invalid connections in the pool. The server will flush all connections that it determines to be invalid, for example, by the valid-connection-checker-class-name or check-valid-connection-sql validation mechanism discussed in Database Connection Validation . You can also flush connections using the management console. 
From the Runtime tab, select the server, select Datasources , choose a datasource, and use the drop down to select the appropriate action. 12.8. XA Datasource Recovery An XA datasource is a datasource that can participate in an XA global transaction, which is coordinated by the transaction manager and can span multiple resources in a single transaction. If one of the participants fails to commit its changes, the other participants abort the transaction and restore their state as it was before the transaction occurred. This is to maintain consistency and avoid potential data loss or corruption. XA recovery is the process of ensuring that all resources affected by a transaction are updated or rolled back, even if any of the resources or transaction participants crash or become unavailable. XA recovery happens without user intervention. Each XA resource needs to have a recovery module associated with its configuration. The recovery module is the code that is executed when recovery is being performed. JBoss EAP automatically registers recovery modules for JDBC XA resources. You can register a custom module with your XA datasource if you wish to implement custom recovery code. The recovery module must extend class com.arjuna.ats.jta.recovery.XAResourceRecovery . 12.8.1. Configuring XA Recovery For most JDBC resources, the recovery module is automatically associated with the resource. In these cases, you only need to configure the options that allow the recovery module to connect to your resource to perform recovery. The following table describes the XA datasource parameters specific to XA recovery. Each of these configuration attributes can be set during datasource creation or afterward. You can set them using either the management console or the management CLI. See Modify an XA Datasource for information on configuring XA datasources. Table 12.2. Datasource Parameters for XA Recovery Attribute Description recovery-username The user name to use to connect to the resource for recovery. Note that if this is not specified, the datasource security settings will be used. recovery-password The password to use to connect to the resource for recovery. Note that if this is not specified, the datasource security settings will be used. recovery-security-domain The security domain to use to connect to the resource for recovery. recovery-plugin-class-name If you need to use a custom recovery module, set this attribute to the fully-qualified class name of the module. The module should extend class com.arjuna.ats.jta.recovery.XAResourceRecovery . recovery-plugin-properties If you use a custom recovery module which requires properties to be set, set this attribute to the list of comma-separated KEY = VALUE pairs for the properties. Disable XA Recovery If multiple XA datasources connect to the same physical database, then XA recovery typically needs to be configured for only one of them. Use the following management CLI command to disable recovery for an XA datasource: 12.8.2. Vendor-Specific XA Recovery Vendor-Specific Configuration Some databases require specific configurations in order to cooperate in XA transactions managed by the JBoss EAP transaction manager. For detailed and up-to-date information, see your database vendor's documentation. MySQL No special configuration is required. For more information, see the MySQL documentation. Note For automated XA recovery, MySQL 8 and later requires special configuration. For more information, see the JBoss EAP 7.4 supported configurations . 
PostgreSQL and Postgres Plus Advanced Server For PostgreSQL to be able to handle XA transactions, change the configuration parameter max_prepared_transactions to a value greater than 0 and equal to or greater than max_connections . Oracle Ensure that the Oracle user, USER , has access to the tables needed for recovery. GRANT SELECT ON sys.dba_pending_transactions TO USER; GRANT SELECT ON sys.pending_transUSD TO USER; GRANT SELECT ON sys.dba_2pc_pending TO USER; GRANT EXECUTE ON sys.dbms_xa TO USER; If the Oracle user does not have the proper permissions, you may see an error such as the following: Microsoft SQL Server For more information, see the Microsoft SQL Server documentation, including http://msdn.microsoft.com/en-us/library/aa342335.aspx . IBM DB2 No special configuration is required. For more information, see the IBM DB2 documentation. Sybase Sybase expects XA transactions to be enabled on the database. Without correct database configuration, XA transactions will not work. The enable xact coordination parameter enables or disables Adaptive Server transaction coordination services. When this parameter is enabled, Adaptive Server ensures that updates to remote Adaptive Server data commit or roll back with the original transaction. To enable transaction coordination, use: MariaDB No special configuration is required. For more information, see the MariaDB documentation. Known Issues These known issues with handling XA transactions are for the specific database and JDBC driver versions supported with JBoss EAP 7. For up-to-date information on the supported databases, see JBoss EAP supported configurations . MySQL MySQL is not fully capable of handling XA transactions. If a client is disconnected from MySQL, then all the information about such transactions is lost. See this MySQL bug for more information. Note that this issue was fixed in MySQL 5.7. PostgreSQL and Postgres Plus Advanced Server The JDBC driver returns the XAER_RMERR XAException error code when a network failure occurs during the commit phase of two-phase commit (2PC). This error signals an unrecoverable catastrophic event to the transaction manager, but the transaction is left in the in-doubt state on the database side and could be easily corrected after network connection is established again. The correct return code should be XAER_RMFAIL or XAER_RETRY . The incorrect error code causes the transaction to be left in the Heuristic state on the JBoss EAP side and holding locks in the database which requires manual intervention. See this PostgreSQL bug for more information. If a connection failure happens when the one-phase commit optimization is used, the JDBC driver returns XAER_RMERR , but the XAER_RMFAIL error code should be returned. This could lead to data inconsistency if the database commits data during one-phase commit and the connection goes down at that moment, then the client is informed that transaction was rolled back. The Postgres Plus JDBC driver returns XIDs for all prepared transactions that exist in the Postgres Plus Server, so there is no way to determine the database to which the XID belongs. If you define more than one data source for the same database in JBoss EAP, an in-doubt transaction recovery attempt could be run under wrong account, which causes the recovery to fail. Oracle The JDBC driver returns XIDs belonging to all users on the database instance, when the Recovery Manager calls recovery using datasource configured with some user credentials. 
The JDBC driver throws the exception ORA-24774: cannot switch to specified transaction because it tries to recover XIDs belonging to other users. The workaround for this issue is to grant FORCE ANY TRANSACTION privilege to the user whose credentials are used in recovery datasource configuration. More information about configuring the privilege can be found here: http://docs.oracle.com/database/121/ADMIN/ds_txnman.htm#ADMIN12259 . Microsoft SQL Server The JDBC driver returns the XAER_RMERR XAException error code when a network failure occurs during the commit phase of two-phase commit (2PC). This error signals an unrecoverable catastrophic event to the transaction manager, but the transaction is left in the in-doubt state on the database side and could be easily corrected after network connection is established again. The correct return code should be XAER_RMFAIL or XAER_RETRY . The incorrect error code causes the transaction to be left in the Heuristic state on the JBoss EAP side and holding locks in the database which requires manual intervention. See this Microsoft SQL Server issue report for more information. If a connection failure happens when the one-phase commit optimization is used, the JDBC driver returns XAER_RMERR , but the XAER_RMFAIL error code should be returned. This could lead to data inconsistency if the database commits data during one-phase commit and the connection goes down at that moment, then the client is informed that transaction was rolled back. IBM DB2 If a connection failure happens during a one-phase commit, the JDBC driver returns XAER_RETRY , but the XAER_RMFAIL error code should be returned. This could lead to data inconsistency if the database commits data during one-phase commit and the connection goes down at that moment, then the client is informed that transaction was rolled back. Sybase The JDBC driver returns the XAER_RMERR XAException error code when a network failure occurs during the commit phase of two-phase commit (2PC). This error signals an unrecoverable catastrophic event to the transaction manager, but the transaction is left in the in-doubt state on the database side and could be easily corrected after network connection is established again. The correct return code should be XAER_RMFAIL or XAER_RETRY . The incorrect error code causes the transaction to be left in the Heuristic state on the JBoss EAP side and holding locks in the database which requires manual intervention. If a connection failure happens when the one-phase commit optimization is used, the JDBC driver returns XAER_RMERR , but the XAER_RMFAIL error code should be returned. This could lead to data inconsistency if the database commits data during one-phase commit and the connection goes down at that moment, then the client is informed that transaction was rolled back. If an XA transaction that includes an insert to a Sybase 15.7 or 16 database fails before the Sybase transaction branch is in a prepared state, then repeating the XA transaction and inserting the same record with the same primary key will fail with an error of com.sybase.jdbc4.jdbc.SybSQLException: Attempt to insert duplicate key row . This exception will be thrown until the original unfinished Sybase transaction branch is rolled back. MariaDB MariaDB is not fully capable of handling XA transactions. If a client is disconnected from MariaDB, then all the information about such transactions is lost. MariaDB Galera Cluster XA transactions are not supported in MariaDB Galera Cluster. 12.9. 
Database Connection Validation Database maintenance, network problems, or other outage events may cause JBoss EAP to lose the connection to the database. In order to recover from these situations, you can enable database connection validation for your datasources. To configure database connection validation, you specify the validation timing method to define when the validation occurs, the validation mechanism to determine how the validation is performed, and the exception sorter to define how exceptions are handled. Choose one of the validation timing methods. validate-on-match When the validate-on-match method is set to true , the database connection is validated every time it is checked out from the connection pool using the validation mechanism specified in the next step. If a connection is not valid, a warning is written to the log and the next connection in the pool is retrieved. This process continues until a valid connection is found. If you prefer not to cycle through every connection in the pool, you can use the use-fast-fail option. If a valid connection is not found in the pool, a new connection is created. If the connection creation fails, an exception is returned to the requesting application. background-validation When the background-validation method is set to true , connections are validated periodically in a background thread prior to use. The frequency of the validation is specified by the background-validation-millis property. The default value of background-validation-millis is 0 , meaning that it is disabled. When determining the value of the background-validation-millis property, consider the following: This value should not be set to the same value as your idle-timeout-minutes setting. The lower the value, the more frequently the pool is validated and the sooner invalid connections are removed from the pool. Lower values take more database resources. Higher values result in less frequent connection validation checks and use less database resources, but dead connections are undetected for longer periods of time. Note These validation methods are mutually exclusive as demonstrated in the following examples: If validate-on-match is set to true , you must set background-validation to false . If background-validation is set to true , you must set validate-on-match to false . For a comparison matrix for these validation methods, see Comparison of validation timing methods . Choose one of the validation mechanisms. valid-connection-checker-class-name Using valid-connection-checker-class-name is the preferred validation mechanism. This specifies a connection checker class that is used to validate connections for the particular database in use.
JBoss EAP provides the following connection checkers: org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLReplicationValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.novendor.JDBC4ValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.novendor.NullValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker check-valid-connection-sql Using check-valid-connection-sql , you provide the SQL statement that will be used to validate the connection. The following is an example SQL statement that you might use to validate Oracle connections. select 1 from dual The following is an example SQL statement that you might use to validate MySQL or PostgreSQL connections. select 1 Set the exception sorter class name. When an exception is marked as fatal, the connection is closed immediately, even if the connection is participating in a transaction. Use the exception sorter class option to properly detect and clean up after fatal connection exceptions. Choose the appropriate JBoss EAP exception sorter for your datasource type. org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter org.jboss.jca.adapters.jdbc.extensions.informix.InformixExceptionSorter org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter org.jboss.jca.adapters.jdbc.extensions.novendor.NullExceptionSorter org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter 12.10. Datasource Security Datasource security refers to encrypting or obscuring passwords for datasource connections. These passwords can be stored in plain text in configuration files, but this represents a security risk. There are several methods you can use for datasource security. Examples of each are included below. Note If you use the legacy security subsystem, you can configure a security domain to use LDAP for authenticating and authorizing a datasource connection. For more details about configuring a security domain, see Configuring a security domain . Secure a Datasource Using a Security Domain Use the following steps to secure a datasource using a security domain: Create the new security domain. A security domain for the datasource is defined. The following XML extract is the result of calling CLI commands. <security-domain name="DsRealm" cache-type="default"> <authentication> <login-module code="ConfiguredIdentity" flag="required"> <module-option name="userName" value="sa"/> <module-option name="principal" value="sa"/> <module-option name="password" value="sa"/> </login-module> </authentication> </security-domain> Add a new datasource. Set the security domain on the datasource. Reload the server for the changes to be effective. Note If you are using a security domain with multiple datasources, disable caching on the security domain. 
This can be accomplished by setting the value of the cache-type attribute to none or by removing the attribute altogether; however, if caching is desired, use a separate security domain for each datasource. The following XML extract shows the datasource secured with the DsRealm . <datasources> <datasource jndi-name="java:jboss/datasources/securityDs" pool-name="securityDs"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <new-connection-sql>select current_user()</new-connection-sql> <security> <security-domain>DsRealm</security-domain> </security> </datasource> </datasources> For more information on using Security Domains, see How to Configure Identity Management . Secure a Datasource Using a Password Vault Use the following steps to secure a datasource using a password vault: Set the password vault for the ExampleDS datasource. Reload the server to implement the changes. The following XML security element is added to the ExampleDS datasource secured with the password vault. <security> <user-name>admin</user-name> <password>${VAULT::ds_ExampleDS::password::N2NhZDYzOTMtNWE0OS00ZGQ0LWE4MmEtMWNlMDMyNDdmNmI2TElORV9CUkVBS3ZhdWx0}</password> </security> For more information on using the Password Vault, see the Password Vault section in the JBoss EAP How to Configure Server Security guide. Secure a Datasource Using a Credential Store You can also use a credential store to provide passwords. The elytron subsystem provides the ability to create credential stores to securely house and use your passwords throughout JBoss EAP. You can find more details on creating and using credential stores in the Credential Store section in the JBoss EAP How to Configure Server Security guide. Adding a Credential Store Reference to ExampleDS Secure a Datasource Using an Authentication Context You can also use an Elytron authentication context to provide usernames and passwords. Use the following steps to configure and use an authentication context for datasource security. Remove password and user-name . Enable Elytron security for the datasource. Create an authentication-configuration for your credentials. The authentication configuration contains the credentials you want your datasource to use when making a connection. The below example uses a reference to a credential store, but you could also use an Elytron security domain. Create an authentication-context . Update the datasource to use the authentication context. The below example updates ExampleDS to use an authentication context. Note If the authentication-context attribute is not set and the elytron-enabled attribute is set to true , JBoss EAP will use the current context for authentication. Secure a Datasource using Kerberos To secure a datasource using kerberos authentication, the following configuration is required: Kerberos is configured on the database server. The JBoss EAP host server has a keytab entry for the database server. To secure a datasource using kerberos: Configure JBoss EAP to use kerberos. For debugging, change the values of sun.security.krb5.debug and sun.security.spnego.debug to true . In a production environment, it is recommended to set the values to false . Configure security. You can use legacy security or Elytron security to secure a datasource. To use kerberos with legacy security: Configure the infinispan cache to periodically remove expired tickets from the cache. The following attributes define ticket expiration: lifespan : Interval, in milliseconds, at which new certificates are requested from the KDC.
Set the value of the lifespan attribute to be just smaller than the lifespan defined by the KDC. max-idle : Interval, in milliseconds, at which a valid ticket should be removed from the cache, if it is unused. max-entries : The maximum number of copies of the kerberos tickets to keep in cache. The value corresponds to the number of configured connections in your datasource. Create a security domain. When using the Microsoft JDBC driver for SQL Server, add attribute and value "wrapGSSCredential" ⇒ "true" in module-options . For debugging, change the value of the debug attribute in module-options to true . To use kerberos with Elytron: Set up a kerberos factory in Elytron. For debugging, add attribute and value debug = true . For a list of supported attributes, see the Kerberos Security Factory Attributes section in the How to Configure Server Security guide . Create an authentication configuration to use the kerberos factory. Create an authentication context. Secure the datasource with kerberos. If you use legacy security: Configure the datasource to use a security domain. Configure vendor-specific connection properties. Example: connection properties for Oracle database. If you use Elytron: Configure the datasource to use an authentication context. Configure vendor-specific connection properties. Example: connection properties for Oracle database When using kerberos authentication, it is recommended to use the following attributes and values for the datasource: pool-prefill=false pool-use-strict-min=false idle-timeout-minutes For a list of supported attributes, see Datasource Attributes . 12.11. Datasource Statistics When statistics collection is enabled for a datasource, you can view runtime statistics for the datasource. 12.11.1. Enabling Datasource Statistics By default, datasource statistics are not enabled. You can enable datasource statistics collection using the management CLI or the management console . Enable Datasource Statistics Using the Management CLI The following management CLI command enables the collection of statistics for the ExampleDS datasource. Note In a managed domain, precede this command with /profile= PROFILE_NAME . Reload the server for the changes to take effect. Enable Datasource Statistics Using the Management Console Use the following steps to enable statistics collection for a datasource using the management console. Navigate to datasources in standalone or domain mode. Use the following navigation in the standalone mode: Configuration Subsystems Datasources & Drivers Datasources Use the following navigation in the domain mode: Configuration Profiles full Datasources & Drivers Datasources Select the datasource and click View . Click Edit under the Attributes tab. Set the Statistics Enabled field to ON and click Save . A popup appears indicating that the changes require a reload in order to take effect. Reload the server. For a standalone server, click the Reload link from the popup to reload the server. For a managed domain, click the Topology link from the popup. From the Topology tab, select the appropriate server and select the Reload drop down option to reload the server. 12.11.2. Viewing Datasource Statistics You can view runtime statistics for a datasource using the management CLI or management console . View Datasource Statistics Using the Management CLI The following management CLI command retrieves the core pool statistics for the ExampleDS datasource. Note In a managed domain, precede these commands with /host= HOST_NAME /server= SERVER_NAME .
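Pulling together the statistics operations described in sections 12.11.1 and 12.11.2, a minimal management CLI sketch for the default ExampleDS datasource might look like the following; the first command enables collection, the reload applies it, and the two read-resource calls return the pool and JDBC statistics:
/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)
reload
/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true)
/subsystem=datasources/data-source=ExampleDS/statistics=jdbc:read-resource(include-runtime=true)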
The following management CLI command retrieves the JDBC statistics for the ExampleDS datasource. Note Since statistics are runtime information, be sure to specify the include-runtime=true argument. See Datasource Statistics for a detailed list of all available statistics. View Datasource Statistics Using the Management Console To view datasource statistics from the management console, navigate to the Datasources subsystem from the Runtime tab, select a datasource, and click View . See Datasource Statistics for a detailed list of all available statistics. 12.12. Datasource Tuning For tips on monitoring and optimizing performance for the datasources subsystem, see the Datasource and Resource Adapter Tuning section of the Performance Tuning Guide . 12.13. Capacity Policies JBoss EAP supports defining capacity policies for Jakarta Connectors deployments, including datasources. Capacity policies define how physical connections for a pool are created, referred to as capacity incrementing, and destroyed, referred to as capacity decrementing. The default policies create one connection per request for capacity incrementing and, for capacity decrementing, destroy all connections that have timed out when the idle timeout check runs. To configure capacity policies, you need to specify a capacity incrementer class, a capacity decrementer class, or both. Example: Defining Capacity Policies You can also configure properties on the specified capacity incrementer or decrementer class. Example: Configuring Properties for Capacity Policies MaxPoolSize Incrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.MaxPoolSizeIncrementer The MaxPoolSize incrementer policy will fill the pool to its max size for each request. This policy is useful when you want to keep the maximum number of connections available all the time. Size Incrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.SizeIncrementer The Size incrementer policy will fill the pool by the specified number of connections for each request. This policy is useful when you want to increment with an additional number of connections per request in anticipation that the request will also need a connection. Table 12.3. Size policy properties Name Description Size The number of connections that should be created Note This is the default increment policy, and has a size value of 1. Watermark Incrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.WatermarkIncrementer The Watermark incrementer policy will fill the pool to the specified number of connections for each request. This policy is useful when you want to keep a specified number of connections in the pool at all times. Table 12.4. Watermark policy properties Name Description Watermark The watermark level for the number of connections MinPoolSize Decrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.MinPoolSizeDecrementer The MinPoolSize decrementer policy will decrement the pool to its min size for each request. This policy is useful when you want to limit the number of connections after each idle timeout request. The pool will operate in a First In First Out (FIFO) manner. Size Decrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.SizeDecrementer The Size decrementer policy will decrement the pool by the specified number of connections for each idle timeout request. Table 12.5.
Size policy properties Name Description Size The number of connections that should be destroyed This policy is useful when you want to decrement an additional number of connections per idle timeout request in anticipation that the pool usage will lower over time. The pool will operate in a First In First Out (FIFO) manner. TimedOut Decrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.TimedOutDecrementer The TimedOut decrementer policy will remove all connections that have timed out from the pool for each idle timeout request. The pool will operate in a First In Last Out (FILO) manner. Note This policy is the default decrement policy. TimedOut/FIFO Decrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.TimedOutFIFODecrementer The TimedOutFIFO decrementer policy will remove all connections that have timed out from the pool for each idle timeout request. The pool will operate in a First In First Out (FIFO) manner. Watermark Decrementer Policy Class name : org.jboss.jca.core.connectionmanager.pool.capacity.WatermarkDecrementer The Watermark decrementer policy will decrement the pool to the specified number of connections for each idle timeout request. This policy is useful when you want to keep a specified number of connections in the pool at all times. The pool will operate in a First In First Out (FIFO) manner. Table 12.6. Watermark policy properties Name Description Watermark The watermark level for the number of connections 12.14. Enlistment Tracing Enlistment traces can be recorded in order to help locate error situations that happen during enlistment of XAResource instances. When enlistment tracing is enabled, the jca subsystem creates an exception object for every pool operation, so that an accurate stack trace can be produced if necessary; however, this does come with performance overhead. As of JBoss EAP 7.1, enlistment tracing is disabled by default. You can enable the recording of enlistment traces for a datasource using the management CLI by setting the enlistment-trace attribute to true . Enable enlistment tracing for a non-XA datasource. Enable enlistment tracing for an XA datasource. Warning Enabling enlistment tracing can cause a performance impact. 12.15. Example Datasource Configurations 12.15.1. Example MySQL Datasource This is an example of a MySQL datasource configuration with connection information, basic security, and validation options.
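The management CLI commands referenced at the end of this section are not reproduced in this text; one possible form is the following sketch, where the JAR path is an assumption and the remaining values mirror the XML that follows:
module add --name=com.mysql --resources=/path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-class-name=com.mysql.cj.jdbc.Driver,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource)
data-source add --name=MySqlDS --jndi-name=java:jboss/MySqlDS --driver-name=mysql --connection-url=jdbc:mysql://localhost:3306/jbossdb --user-name=admin --password=admin
Validation settings such as valid-connection-checker-class-name and exception-sorter-class-name can typically be passed to data-source add as equally named options if you want the CLI result to match the XML exactly.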
Example: MySQL Datasource Configuration <datasources> <datasource jndi-name="java:jboss/MySqlDS" pool-name="MySqlDS"> <connection-url>jdbc:mysql://localhost:3306/jbossdb</connection-url> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/> </validation> </datasource> <drivers> <driver name="mysql" module="com.mysql"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: MySQL JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.mysql"> <resources> <resource-root path="mysql-connector-java-8.0.12.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the MySQL JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the MySQL JDBC driver. Add the MySQL datasource. 12.15.2. Example MySQL XA Datasource This is an example of a MySQL XA datasource configuration with XA datasource properties, basic security, and validation options. 
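As with the previous example, the CLI steps listed later in this section are omitted from this text; a minimal sketch, assuming the com.mysql module and mysql driver registered in the preceding section and using the property-map syntax accepted by the CLI:
xa-data-source add --name=MySqlXADS --jndi-name=java:jboss/MySqlXADS --driver-name=mysql --user-name=admin --password=admin --xa-datasource-properties={"ServerName"=>"localhost","DatabaseName"=>"mysqldb"}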
Example: MySQL XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/MySqlXADS" pool-name="MySqlXADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="DatabaseName"> mysqldb </xa-datasource-property> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="mysql" module="com.mysql"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: MySQL JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.mysql"> <resources> <resource-root path="mysql-connector-java-8.0.12.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the MySQL JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the MySQL JDBC driver. Add the MySQL XA datasource. 12.15.3. Example PostgreSQL Datasource This is an example of a PostgreSQL datasource configuration with connection information, basic security, and validation options. 
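A hedged sketch of the CLI commands this section refers to later; the JAR path is an assumption and 42.x.y stands in for your driver version, as in the module.xml below:
module add --name=com.postgresql --resources=/path/to/postgresql-42.x.y.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)
data-source add --name=PostgresDS --jndi-name=java:jboss/PostgresDS --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/postgresdb --user-name=admin --password=admin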
Example: PostgreSQL Datasource Configuration <datasources> <datasource jndi-name="java:jboss/PostgresDS" pool-name="PostgresDS"> <connection-url>jdbc:postgresql://localhost:5432/postgresdb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"/> </validation> </datasource> <drivers> <driver name="postgresql" module="com.postgresql"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: PostgreSQL JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.postgresql"> <resources> <resource-root path="postgresql-42.x.y.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> In the example above, make sure you replace 42.x.y with your driver version number. Additional resources For information about JDBC drivers and how to install them, see JDBC Drivers . For information about JDBC drivers that have been tested as part of the JBoss EAP 7.4, see JBoss EAP 7.4 Supported Configurations . Example Management CLI Commands You can add the PostgreSQL JDBC driver and the PostgreSQL datasource to your JDBC API with the following CLI commands. Add the PostgreSQL JDBC driver as a core module. In the example above, make sure you replace 42.x.y with your driver version number. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the PostgreSQL JDBC driver. Add the PostgreSQL datasource. 12.15.4. Example PostgreSQL XA Datasource This is an example of a PostgreSQL XA datasource configuration with XA datasource properties, basic security, and validation options. 
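Assuming the com.postgresql module and postgresql driver registered as in the previous section, the XA datasource referenced by the CLI steps later in this section might be added with a sketch such as:
xa-data-source add --name=PostgresXADS --jndi-name=java:jboss/PostgresXADS --driver-name=postgresql --user-name=admin --password=admin --xa-datasource-properties={"ServerName"=>"localhost","PortNumber"=>"5432","DatabaseName"=>"postgresdb"}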
Example: PostgreSQL XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/PostgresXADS" pool-name="PostgresXADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="PortNumber"> 5432 </xa-datasource-property> <xa-datasource-property name="DatabaseName"> postgresdb </xa-datasource-property> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="postgresql" module="com.postgresql"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: PostgreSQL JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.postgresql"> <resources> <resource-root path="postgresql-42.x.y.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> In the example above, make sure you replace 42.x.y with your driver version number. Additional resources For information about JDBC drivers and how to install them, see JDBC Drivers . For information about JDBC drivers that have been tested as part of the JBoss EAP 7.4, see JBoss EAP 7.4 Supported Configurations . Example Management CLI Commands You can add the PostgreSQL JDBC driver and the PostgreSQL datasource to your JDBC API with the following CLI commands. Add the PostgreSQL JDBC driver as a core module. In the example above, make sure you replace 42.x.y with your driver version number. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the PostgreSQL JDBC driver. Add the PostgreSQL XA datasource. 12.15.5. Example Oracle Datasource This is an example of an Oracle datasource configuration with connection information, basic security, and validation options. 
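One possible form of the CLI commands this section mentions later, with the JAR path as an assumption and the remaining values mirroring the XML below:
module add --name=com.oracle --resources=/path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)
data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@localhost:1521:XE --user-name=admin --password=admin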
Example: Oracle Datasource Configuration <datasources> <datasource jndi-name="java:jboss/OracleDS" pool-name="OracleDS"> <connection-url>jdbc:oracle:thin:@localhost:1521:XE</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/> </validation> </datasource> <drivers> <driver name="oracle" module="com.oracle"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Oracle JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.oracle"> <resources> <resource-root path="ojdbc7.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Oracle JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Oracle JDBC driver. Add the Oracle datasource. 12.15.6. Example Oracle XA Datasource Important The following settings must be applied for the user accessing an Oracle XA datasource in order for XA recovery to operate correctly. The value user is the user defined to connect from JBoss EAP to Oracle: GRANT SELECT ON sys.dba_pending_transactions TO user; GRANT SELECT ON sys.pending_trans$ TO user; GRANT SELECT ON sys.dba_2pc_pending TO user; GRANT EXECUTE ON sys.dbms_xa TO user; This is an example of an Oracle XA datasource configuration with XA datasource properties, basic security, and validation options.
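Assuming the com.oracle module and oracle driver registered as in the previous section, the xa-data-source add command referenced later in this section might look like the following sketch; the is-same-rm-override value shown in the XML maps to the same-rm-override attribute and can be supplied the same way:
xa-data-source add --name=OracleXADS --jndi-name=java:jboss/OracleXADS --driver-name=oracle --user-name=admin --password=admin --xa-datasource-properties={"URL"=>"jdbc:oracle:thin:@oracleHostName:1521:orcl"}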
Example: Oracle XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/OracleXADS" pool-name="OracleXADS"> <xa-datasource-property name="URL"> jdbc:oracle:thin:@oracleHostName:1521:orcl </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="oracle" module="com.oracle"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Oracle JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.oracle"> <resources> <resource-root path="ojdbc7.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Oracle JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Oracle JDBC driver. Add the Oracle XA datasource. 12.15.7. Example Oracle RAC datasource This is an example of a Real Application Cluster (RAC) datasource configuration with connection information, basic security, and validation options. 
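The driver module and driver registration for an Oracle RAC datasource are identical to the plain Oracle example; only the connection URL changes. A sketch of the data-source add command referenced later in this section, with the RAC descriptor quoted and adapted from the XML below:
data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --user-name=admin --password=admin --connection-url="jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)))"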
Example: Oracle RAC datasource configuration <datasources> <datasource jndi-name="java:jboss/OracleDS" pool-name="OracleDS"> <connection-url>jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/> </validation> </datasource> <drivers> <driver name="oracle" module="com.oracle"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Oracle JDBC driver module.xml file <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.oracle"> <resources> <resource-root path="ojdbc7.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example management CLI commands To achieve this example configuration, use the following management CLI commands: Add the Oracle JDBC driver as a core module: Register the Oracle JDBC driver: Add the Oracle RAC datasource: Additional resources For more information about configuring Oracle datasources, see Oracle datasources . 12.15.8. Example Oracle RAC XA datasource This is an example of a RAC datasource configuration with XA datasource properties, basic security, and validation options. Example: Oracle RAC XA datasource configuration <datasources> <xa-datasource jndi-name="java:jboss/OracleXADS" pool-name="OracleXADS"> <xa-datasource-property name="URL"> jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)) </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="oracle" module="com.oracle"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Oracle JDBC driver module.xml file <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.oracle"> <resources> <resource-root path="ojdbc7.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example management CLI commands To achieve this example configuration, use the following management CLI commands: Add the Oracle JDBC driver as a core module: Register the Oracle JDBC driver: Add the Oracle RAC XA datasource: 12.15.9. 
Example Microsoft SQL Server Datasource This is an example of a Microsoft SQL Server datasource configuration with connection information, basic security, and validation options. Example: Microsoft SQL Server Datasource Configuration <datasources> <datasource jndi-name="java:jboss/MSSQLDS" pool-name="MSSQLDS"> <connection-url>jdbc:sqlserver://localhost:1433;DatabaseName=MyDatabase</connection-url> <driver>sqlserver</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter"/> </validation> </datasource> <drivers> <driver name="sqlserver" module="com.microsoft"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Microsoft SQL Server JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.microsoft"> <resources> <resource-root path="sqljdbc42.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> <module name="javax.xml.bind.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Microsoft SQL Server JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Microsoft SQL Server JDBC driver. Add the Microsoft SQL Server datasource. 12.15.10. Example Microsoft SQL Server XA Datasource This is an example of a Microsoft SQL Server XA datasource configuration with XA datasource properties, basic security, and validation options. 
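The driver registration here is shared with the preceding non-XA Microsoft SQL Server section; a hedged sketch of the CLI commands referenced later, with the JAR path as an assumption and the other values mirroring the XML below:
module add --name=com.microsoft --resources=/path/to/sqljdbc42.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,javax.xml.bind.api
/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name=sqlserver,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)
xa-data-source add --name=MSSQLXADS --jndi-name=java:jboss/MSSQLXADS --driver-name=sqlserver --user-name=admin --password=admin --xa-datasource-properties={"ServerName"=>"localhost","DatabaseName"=>"mssqldb","SelectMethod"=>"cursor"}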
Example: Microsoft SQL Server XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/MSSQLXADS" pool-name="MSSQLXADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="DatabaseName"> mssqldb </xa-datasource-property> <xa-datasource-property name="SelectMethod"> cursor </xa-datasource-property> <driver>sqlserver</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="sqlserver" module="com.microsoft"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Microsoft SQL Server JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.microsoft"> <resources> <resource-root path="sqljdbc42.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> <module name="javax.xml.bind.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Microsoft SQL Server JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Microsoft SQL Server JDBC driver. Add the Microsoft SQL Server XA datasource. 12.15.11. Example IBM DB2 Datasource This is an example of an IBM DB2 datasource configuration with connection information, basic security, and validation options. 
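One possible form of the CLI commands this section refers to later, with the JAR path as an assumption and the remaining values mirroring the XML below:
module add --name=com.ibm --resources=/path/to/db2jcc4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=ibmdb2:add(driver-name=ibmdb2,driver-module-name=com.ibm,driver-xa-datasource-class-name=com.ibm.db2.jcc.DB2XADataSource)
data-source add --name=DB2DS --jndi-name=java:jboss/DB2DS --driver-name=ibmdb2 --connection-url=jdbc:db2://localhost:50000/ibmdb2db --user-name=admin --password=admin --min-pool-size=0 --max-pool-size=50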
Example: IBM DB2 Datasource Configuration <datasources> <datasource jndi-name="java:jboss/DB2DS" pool-name="DB2DS"> <connection-url>jdbc:db2://localhost:50000/ibmdb2db</connection-url> <driver>ibmdb2</driver> <pool> <min-pool-size>0</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter"/> </validation> </datasource> <drivers> <driver name="ibmdb2" module="com.ibm"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: IBM DB2 JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.ibm"> <resources> <resource-root path="db2jcc4.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the IBM DB2 JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the IBM DB2 JDBC driver. Add the IBM DB2 datasource. 12.15.12. Example IBM DB2 XA Datasource This is an example of an IBM DB2 XA datasource configuration with XA datasource properties, basic security, and validation options. 
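Assuming the com.ibm module and ibmdb2 driver registered as in the previous section, the XA datasource referenced by the CLI steps later in this section might be added with a sketch such as the following; the recovery plugin shown in the XML would need to be configured separately through the corresponding recovery attributes:
xa-data-source add --name=DB2XADS --jndi-name=java:jboss/DB2XADS --driver-name=ibmdb2 --user-name=admin --password=admin --xa-datasource-properties={"ServerName"=>"localhost","DatabaseName"=>"ibmdb2db","PortNumber"=>"50000","DriverType"=>"4"}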
Example: IBM DB2 XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/DB2XADS" pool-name="DB2XADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="DatabaseName"> ibmdb2db </xa-datasource-property> <xa-datasource-property name="PortNumber"> 50000 </xa-datasource-property> <xa-datasource-property name="DriverType"> 4 </xa-datasource-property> <driver>ibmdb2</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <recovery> <recover-plugin class-name="org.jboss.jca.core.recovery.ConfigurableRecoveryPlugin"> <config-property name="EnableIsValid"> false </config-property> <config-property name="IsValidOverride"> false </config-property> <config-property name="EnableClose"> false </config-property> </recover-plugin> </recovery> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="ibmdb2" module="com.ibm"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: IBM DB2 JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.ibm"> <resources> <resource-root path="db2jcc4.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the IBM DB2 JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the IBM DB2 JDBC driver. Add the IBM DB2 XA datasource. 12.15.13. Example Sybase Datasource This is an example of a Sybase datasource configuration with connection information, basic security, and validation options. 
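A hedged sketch of the CLI commands this section mentions later, with the JAR path as an assumption and the other values mirroring the XML below:
module add --name=com.sybase --resources=/path/to/jconn4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api
/subsystem=datasources/jdbc-driver=sybase:add(driver-name=sybase,driver-module-name=com.sybase,driver-xa-datasource-class-name=com.sybase.jdbc4.jdbc.SybXADataSource)
data-source add --name=SybaseDB --jndi-name=java:jboss/SybaseDB --driver-name=sybase --connection-url="jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6" --user-name=admin --password=admin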
Example: Sybase Datasource Configuration <datasources> <datasource jndi-name="java:jboss/SybaseDB" pool-name="SybaseDB"> <connection-url>jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6</connection-url> <driver>sybase</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter"/> </validation> </datasource> <drivers> <driver name="sybase" module="com.sybase"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Sybase JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.sybase"> <resources> <resource-root path="jconn4.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Sybase JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Sybase JDBC driver. Add the Sybase datasource. 12.15.14. Example Sybase XA Datasource This is an example of a Sybase XA datasource configuration with XA datasource properties, basic security, and validation options. 
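Assuming the com.sybase module and sybase driver registered as in the previous section, the xa-data-source add command referenced later in this section might look like:
xa-data-source add --name=SybaseXADS --jndi-name=java:jboss/SybaseXADS --driver-name=sybase --user-name=admin --password=admin --xa-datasource-properties={"ServerName"=>"localhost","DatabaseName"=>"mydatabase","PortNumber"=>"4100","NetworkProtocol"=>"Tds"}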
Example: Sybase XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/SybaseXADS" pool-name="SybaseXADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="DatabaseName"> mydatabase </xa-datasource-property> <xa-datasource-property name="PortNumber"> 4100 </xa-datasource-property> <xa-datasource-property name="NetworkProtocol"> Tds </xa-datasource-property> <driver>sybase</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="sybase" module="com.sybase"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: Sybase JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="com.sybase"> <resources> <resource-root path="jconn4.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the Sybase JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the Sybase JDBC driver. Add the Sybase XA datasource. 12.15.15. Example MariaDB Datasource This is an example of a MariaDB datasource configuration with connection information, basic security, and validation options. 
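One possible form of the CLI commands this section refers to later, with the JAR path as an assumption and the remaining values mirroring the XML below:
module add --name=org.mariadb --resources=/path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,org.slf4j,javax.transaction.api
/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-class-name=org.mariadb.jdbc.Driver,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource)
data-source add --name=MariaDBDS --jndi-name=java:jboss/MariaDBDS --driver-name=mariadb --connection-url=jdbc:mariadb://localhost:3306/jbossdb --user-name=admin --password=admin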
Example: MariaDB Datasource Configuration <datasources> <datasource jndi-name="java:jboss/MariaDBDS" pool-name="MariaDBDS"> <connection-url>jdbc:mariadb://localhost:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/> </validation> </datasource> <drivers> <driver name="mariadb" module="org.mariadb"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: MariaDB JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="org.mariadb"> <resources> <resource-root path="mariadb-java-client-3.3.0.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="org.slf4j"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the MariaDB JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the MariaDB JDBC driver. Add the MariaDB datasource. 12.15.16. Example MariaDB XA Datasource This is an example of a MariaDB XA datasource configuration with XA datasource properties, basic security, and validation options. 
Example: MariaDB XA Datasource Configuration <datasources> <xa-datasource jndi-name="java:jboss/MariaDBXADS" pool-name="MariaDBXADS"> <xa-datasource-property name="ServerName"> localhost </xa-datasource-property> <xa-datasource-property name="DatabaseName"> mariadbdb </xa-datasource-property> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/> </validation> </xa-datasource> <drivers> <driver name="mariadb" module="org.mariadb"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: MariaDB JDBC Driver module.xml File <?xml version="1.0" ?> <module xmlns="urn:jboss:module:1.1" name="org.mariadb"> <resources> <resource-root path="mariadb-java-client-3.3.0.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="org.slf4j"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the MariaDB JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the MariaDB JDBC driver. Add the MariaDB XA datasource. 12.15.17. Example MariaDB Galera Cluster Datasource This is an example of a MariaDB Galera Cluster datasource configuration with connection information, basic security, and validation options. Warning MariaDB Galera Cluster does not support XA transactions. 
Example: MariaDB Galera Cluster Datasource Configuration <datasources> <datasource jndi-name="java:jboss/MariaDBGaleraClusterDS" pool-name="MariaDBGaleraClusterDS"> <connection-url>jdbc:mariadb://192.168.1.1:3306,192.168.1.2:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name="org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"/> </validation> </datasource> <drivers> <driver name="mariadb" module="org.mariadb"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources> Example: MariaDB JDBC Driver module.xml File <?xml version='1.0' encoding='UTF-8'?> <module xmlns="urn:jboss:module:1.1" name="org.mariadb"> <resources> <resource-root path="mariadb-java-client-3.3.0.jar"/> </resources> <dependencies> <module name="javaee.api"/> <module name="sun.jdk"/> <module name="ibm.jdk"/> <module name="javax.api"/> <module name="org.slf4j"/> <module name="javax.transaction.api"/> </dependencies> </module> Example Management CLI Commands This example configuration can be achieved by using the following management CLI commands. Add the MariaDB JDBC driver as a core module. Important Using the module management CLI command to add and remove modules is provided as Technology Preview only. This command is not appropriate for use in a managed domain or when connecting to the management CLI remotely. Modules should be added and removed manually in a production environment. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend to use them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See Technology Preview Features Support Scope on the Red Hat Customer Portal for information about the support scope for Technology Preview features. Register the MariaDB JDBC driver. Add the MariaDB Galera Cluster datasource. | [
"EAP_HOME /bin/jboss-cli.sh",
"[disconnected /] module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES",
"[disconnected /] module add --name=com.mysql --resources= /path/to /mysql-connector-java-8.0.12.jar --dependencies=javax.transaction.api,sun.jdk,ibm.jdk,javaee.api,javax.api",
"EAP_HOME /bin/jboss-cli.sh --command=\"module add --name= MODULE_NAME --resources= PATH_TO_JDBC_JAR --dependencies= DEPENDENCIES \"",
"/subsystem=datasources/jdbc-driver= DRIVER_NAME :add(driver-name= DRIVER_NAME ,driver-module-name= MODULE_NAME ,driver-xa-datasource-class-name= XA_DATASOURCE_CLASS_NAME , driver-class-name= DRIVER_CLASS_NAME )",
"/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)",
"deploy PATH_TO_JDBC_JAR",
"deploy /path/to /mysql-connector-java-8.0.12.jar",
"WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-8.0.12.jar",
"com.mysql.cj.jdbc.Driver",
"jar \\-uf jdbc-driver.jar META-INF/services/java.sql.Driver",
"Dependencies: com.mysql",
"<jboss-deployment-structure> <deployment> <dependencies> <module name=\"com.mysql\"/> </dependencies> </deployment> </jboss-deployment-structure>",
"import java.sql.Connection; Connection c = ds.getConnection(); if (c.isWrapperFor(com.mysql.jdbc.Connection.class)) { com.mysql.jdbc.Connection mc = c.unwrap(com.mysql.jdbc.Connection.class); }",
"data-source add --name= DATASOURCE_NAME --jndi-name= JNDI_NAME --driver-name= DRIVER_NAME --connection-url= CONNECTION_URL --user-name= USER_NAME --password= PASSWORD",
"WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1",
"xa-data-source add --name= XA_DATASOURCE_NAME --jndi-name= JNDI_NAME --driver-name= DRIVER_NAME --xa-datasource-class= XA_DATASOURCE_CLASS --xa-datasource-properties={\"ServerName\"=>\" HOST_NAME \",\"DatabaseName\"=>\" DATABASE_NAME \"}",
"/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties=ServerName:add(value= HOST_NAME )",
"/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties=DatabaseName:add(value= DATABASE_NAME )",
"WFLYJCA0018: Started Driver service with driver-name = mysql-connector-java-5.1.36-bin.jar_com.mysql.cj.jdbc.Driver_5_1",
"data-source --name= DATASOURCE_NAME -- ATTRIBUTE_NAME = ATTRIBUTE_VALUE",
"xa-data-source --name= XA_DATASOURCE_NAME -- ATTRIBUTE_NAME = ATTRIBUTE_VALUE",
"/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME /xa-datasource-properties= PROPERTY :add(value= VALUE )",
"data-source remove --name= DATASOURCE_NAME",
"xa-data-source remove --name= XA_DATASOURCE_NAME",
"/subsystem=datasources/data-source= DATASOURCE_NAME :test-connection-in-pool",
"/subsystem=datasources/data-source= DATASOURCE_NAME :flush-all-connection-in-pool",
"/subsystem=datasources/data-source= DATASOURCE_NAME :flush-gracefully-connection-in-pool",
"/subsystem=datasources/data-source= DATASOURCE_NAME :flush-idle-connection-in-pool",
"/subsystem=datasources/data-source= DATASOURCE_NAME :flush-invalid-connection-in-pool",
"/subsystem=datasources/xa-data-source= XA_DATASOURCE_NAME :write-attribute(name=no-recovery,value=true)",
"GRANT SELECT ON sys.dba_pending_transactions TO USER; GRANT SELECT ON sys.pending_transUSD TO USER; GRANT SELECT ON sys.dba_2pc_pending TO USER; GRANT EXECUTE ON sys.dbms_xa TO USER;",
"WARN [com.arjuna.ats.jta.logging.loggerI18N] [com.arjuna.ats.internal.jta.recovery.xarecovery1] Local XARecoveryModule.xaRecovery got XA exception javax.transaction.xa.XAException, XAException.XAER_RMERR",
"sp_configure 'enable xact coordination', 1",
"select 1 from dual",
"select 1",
"/subsystem=security/security-domain=DsRealm:add(cache-type=default) /subsystem=security/security-domain=DsRealm/authentication=classic:add(login-modules=[{code=ConfiguredIdentity,flag=required,module-options={userName=sa, principal=sa, password=sa}}])",
"<security-domain name=\"DsRealm\" cache-type=\"default\"> <authentication> <login-module code=\"ConfiguredIdentity\" flag=\"required\"> <module-option name=\"userName\" value=\"sa\"/> <module-option name=\"principal\" value=\"sa\"/> <module-option name=\"password\" value=\"sa\"/> </login-module> </authentication> </security-domain>",
"data-source add --name=securityDs --jndi-name=java:jboss/datasources/securityDs --connection-url=jdbc:h2:mem:test;DB_CLOSE_DELAY=-1 --driver-name=h2 --new-connection-sql=\"select current_user()\"",
"data-source --name=securityDs --security-domain=DsRealm",
"reload",
"<datasources> <datasource jndi-name=\"java:jboss/datasources/securityDs\" pool-name=\"securityDs\"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <new-connection-sql>select current_user()</new-connection-sql> <security> <security-domain>DsRealm</security-domain> </security> </datasource> </datasources>",
"data-source --name=ExampleDS --password=USD{VAULT::ds_ExampleDS::password::N2NhZDYzOTMtNWE0OS00ZGQ0LWE4MmEtMWNlMDMyNDdmNmI2TElORV9CUkVBS3ZhdWx0}",
"reload",
"<security> <user-name>admin</user-name> <password>USD{VAULT::ds_ExampleDS::password::N2NhZDYzOTMtNWE0OS00ZGQ0LWE4MmEtMWNlMDMyNDdmNmI2TElORV9CUkVBS3ZhdWx0}</password> </security>",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=credential-reference,value={store=exampleCS, alias=example-ds-pw})",
"/subsystem=datasources/data-source=ExampleDS:undefine-attribute(name=password) /subsystem=datasources/data-source=ExampleDS:undefine-attribute(name=user-name)",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=elytron-enabled,value=true) reload",
"/subsystem=elytron/authentication-configuration=exampleAuthConfig:add(authentication-name=sa,credential-reference={clear-text=sa})",
"/subsystem=elytron/authentication-context=exampleAuthContext:add(match-rules=[{authentication-configuration=exampleAuthConfig}])",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=authentication-context,value=exampleAuthContext) reload",
"/system-property=java.security.krb5.conf:add(value=\"/path/to/krb5.conf\") /system-property=sun.security.krb5.debug:add(value=\"false\") /system-property=sun.security.spnego.debug:add(value=\"false\")",
"batch /subsystem=infinispan/cache-container=security:add(default-cache=auth-cache) /subsystem=infinispan/cache-container=security/local-cache=auth-cache:add() /subsystem=infinispan/cache-container=security/local-cache=auth-cache/expiration=EXPIRATION:add(lifespan=3540000,max-idle=3540000) /subsystem=infinispan/cache-container=security/local-cache=auth-cache/memory=object:add(size=1000) run-batch",
"batch /subsystem=security/security-domain=KerberosDatabase:add(cache-type=infinispan) /subsystem=security/security-domain=KerberosDatabase/authentication=classic:add /subsystem=security/security-domain=KerberosDatabase/authentication=classic/login-module=\"KerberosDatabase-Module\":add(code=\"org.jboss.security.negotiation.KerberosLoginModule\",module=\"org.jboss.security.negotiation\",flag=required, module-options={ \"debug\" => \"false\", \"storeKey\" => \"false\", \"useKeyTab\" => \"true\", \"keyTab\" => \"/path/to/eap.keytab\", \"principal\" => \"[email protected]\", \"doNotPrompt\" => \"true\", \"refreshKrb5Config\" => \"true\", \"isInitiator\" => \"true\", \"addGSSCredential\" => \"true\", \"credentialLifetime\" => \"-1\"}) run-batch",
"/subsystem=elytron/kerberos-security-factory=krbsf:add(debug=false, [email protected], path=/path/to/keytab, request-lifetime=-1, obtain-kerberos-ticket=true, server=false)",
"/subsystem=elytron/authentication-configuration=kerberos-conf:add(kerberos-security-factory=krbsf)",
"/subsystem=elytron/authentication-context=ds-context:add(match-rules=[{authentication-configuration=kerberos-conf}])",
"/subsystem=datasources/data-source=KerberosDS:add(connection-url=\"URL\", min-pool-size=0, max-pool-size=10, jndi-name=\"java:jboss/datasource/KerberosDS\", driver-name=<jdbc-driver>.jar, security-domain=KerberosDatabase, allow-multiple-users=false, pool-prefill=false, pool-use-strict-min=false, idle-timeout-minutes=2)",
"/subsystem=datasources/data-source=KerberosDS/connection-properties=<connection-property-name>:add(value=\"(<kerberos-value>)\")",
"/subsystem=datasources/data-source=KerberosDS/connection-properties=oracle.net.authentication_services:add(value=\"(KERBEROS5)\")",
"/subsystem=datasources/data-source=KerberosDS:add(connection-url=\"URL\", min-pool-size=0, max-pool-size=10, jndi-name=\"java:jboss/datasource/KerberosDS\", driver-name=<jdbc-driver>.jar, elytron-enabled=true, authentication-context=ds-context, allow-multiple-users=false, pool-prefill=false, pool-use-strict-min=false, idle-timeout-minutes=2)",
"/subsystem=datasources/data-source=KerberosDS/connection-properties=<connection-property-name>:add(value=\"(<kerberos-value>)\")",
"/subsystem=datasources/data-source=KerberosDS/connection-properties=oracle.net.authentication_services:add(value=\"(KERBEROS5)\")",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=statistics-enabled,value=true)",
"/subsystem=datasources/data-source=ExampleDS/statistics=pool:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"ActiveCount\" => 1, \"AvailableCount\" => 20, \"AverageBlockingTime\" => 0L, \"AverageCreationTime\" => 122L, \"AverageGetTime\" => 128L, \"AveragePoolTime\" => 0L, \"AverageUsageTime\" => 0L, \"BlockingFailureCount\" => 0, \"CreatedCount\" => 1, \"DestroyedCount\" => 0, \"IdleCount\" => 1, }",
"/subsystem=datasources/data-source=ExampleDS/statistics=jdbc:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"PreparedStatementCacheAccessCount\" => 0L, \"PreparedStatementCacheAddCount\" => 0L, \"PreparedStatementCacheCurrentSize\" => 0, \"PreparedStatementCacheDeleteCount\" => 0L, \"PreparedStatementCacheHitCount\" => 0L, \"PreparedStatementCacheMissCount\" => 0L, \"statistics-enabled\" => true } }",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-incrementer-class, value=\"org.jboss.jca.core.connectionmanager.pool.capacity.SizeIncrementer\") /subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-decrementer-class, value=\"org.jboss.jca.core.connectionmanager.pool.capacity.SizeDecrementer\")",
"/subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-incrementer-properties.size, value=2) /subsystem=datasources/data-source=ExampleDS:write-attribute(name=capacity-decrementer-properties.size, value=2)",
"data-source --name= DATASOURCE_NAME --enlistment-trace=true",
"xa-data-source --name= XA_DATASOURCE_NAME --enlistment-trace=true",
"<datasources> <datasource jndi-name=\"java:jboss/MySqlDS\" pool-name=\"MySqlDS\"> <connection-url>jdbc:mysql://localhost:3306/jbossdb</connection-url> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)",
"data-source add --name=MySqlDS --jndi-name=java:jboss/MySqlDS --driver-name=mysql --connection-url=jdbc:mysql://localhost:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/MySqlXADS\" pool-name=\"MySqlXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mysqldb </xa-datasource-property> <driver>mysql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"mysql\" module=\"com.mysql\"> <driver-class>com.mysql.cj.jdbc.Driver</driver-class> <xa-datasource-class>com.mysql.cj.jdbc.MysqlXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.mysql\"> <resources> <resource-root path=\"mysql-connector-java-8.0.12.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.mysql --resources= /path/to/mysql-connector-java-8.0.12.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=mysql:add(driver-name=mysql,driver-module-name=com.mysql,driver-xa-datasource-class-name=com.mysql.cj.jdbc.MysqlXADataSource, driver-class-name=com.mysql.cj.jdbc.Driver)",
"xa-data-source add --name=MySqlXADS --jndi-name=java:jboss/MySqlXADS --driver-name=mysql --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mysqldb\"}",
"<datasources> <datasource jndi-name=\"java:jboss/PostgresDS\" pool-name=\"PostgresDS\"> <connection-url>jdbc:postgresql://localhost:5432/postgresdb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"postgresql\" module=\"com.postgresql\"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.postgresql\"> <resources> <resource-root path=\"postgresql-42.x.y.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.postgresql --resources= /path/to/postgresql-42.x.y.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)",
"data-source add --name=PostgresDS --jndi-name=java:jboss/PostgresDS --driver-name=postgresql --connection-url=jdbc:postgresql://localhost:5432/postgresdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/PostgresXADS\" pool-name=\"PostgresXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 5432 </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> postgresdb </xa-datasource-property> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"postgresql\" module=\"com.postgresql\"> <xa-datasource-class>org.postgresql.xa.PGXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.postgresql\"> <resources> <resource-root path=\"postgresql-42.x.y.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.postgresql --resources= /path/to/postgresql-42.x.y.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=postgresql:add(driver-name=postgresql,driver-module-name=com.postgresql,driver-xa-datasource-class-name=org.postgresql.xa.PGXADataSource)",
"xa-data-source add --name=PostgresXADS --jndi-name=java:jboss/PostgresXADS --driver-name=postgresql --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.postgres.PostgreSQLExceptionSorter --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"PortNumber\"=>\"5432\",\"DatabaseName\"=>\"postgresdb\"}",
"<datasources> <datasource jndi-name=\"java:jboss/OracleDS\" pool-name=\"OracleDS\"> <connection-url>jdbc:oracle:thin:@localhost:1521:XE</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)",
"data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --connection-url=jdbc:oracle:thin:@localhost:1521:XE --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/OracleXADS\" pool-name=\"OracleXADS\"> <xa-datasource-property name=\"URL\"> jdbc:oracle:thin:@oracleHostName:1521:orcl </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)",
"xa-data-source add --name=OracleXADS --jndi-name=java:jboss/OracleXADS --driver-name=oracle --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter --same-rm-override=false --xa-datasource-properties={\"URL\"=>\"jdbc:oracle:thin:@oracleHostName:1521:orcl\"}",
"<datasources> <datasource jndi-name=\"java:jboss/OracleDS\" pool-name=\"OracleDS\"> <connection-url>jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))</connection-url> <driver>oracle</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)",
"data-source add --name=OracleDS --jndi-name=java:jboss/OracleDS --driver-name=oracle --connection-url=\"jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))\" --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/OracleXADS\" pool-name=\"OracleXADS\"> <xa-datasource-property name=\"URL\"> jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521)) </xa-datasource-property> <driver>oracle</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"oracle\" module=\"com.oracle\"> <xa-datasource-class>oracle.jdbc.xa.client.OracleXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.oracle\"> <resources> <resource-root path=\"ojdbc7.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.oracle --resources= /path/to/ojdbc7.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=oracle:add(driver-name=oracle,driver-module-name=com.oracle,driver-xa-datasource-class-name=oracle.jdbc.xa.client.OracleXADataSource)",
"xa-data-source add --name=OracleXADS --jndi-name=java:jboss/OracleXADS --driver-name=oracle --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.oracle.OracleExceptionSorter --same-rm-override=false --xa-datasource-properties={\"URL\"=>\"jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=host1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=host2)(PORT=1521))\"}",
"<datasources> <datasource jndi-name=\"java:jboss/MSSQLDS\" pool-name=\"MSSQLDS\"> <connection-url>jdbc:sqlserver://localhost:1433;DatabaseName=MyDatabase</connection-url> <driver>sqlserver</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"sqlserver\" module=\"com.microsoft\"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.microsoft\"> <resources> <resource-root path=\"sqljdbc42.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> <module name=\"javax.xml.bind.api\"/> </dependencies> </module>",
"module add --name=com.microsoft --resources= /path/to/sqljdbc42.jar --dependencies=javax.api,javax.transaction.api,javax.xml.bind.api,javaee.api,sun.jdk,ibm.jdk",
"/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name=sqlserver,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)",
"data-source add --name=MSSQLDS --jndi-name=java:jboss/MSSQLDS --driver-name=sqlserver --connection-url=jdbc:sqlserver://localhost:1433;DatabaseName=MyDatabase --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/MSSQLXADS\" pool-name=\"MSSQLXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mssqldb </xa-datasource-property> <xa-datasource-property name=\"SelectMethod\"> cursor </xa-datasource-property> <driver>sqlserver</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"sqlserver\" module=\"com.microsoft\"> <xa-datasource-class>com.microsoft.sqlserver.jdbc.SQLServerXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.microsoft\"> <resources> <resource-root path=\"sqljdbc42.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> <module name=\"javax.xml.bind.api\"/> </dependencies> </module>",
"module add --name=com.microsoft --resources= /path/to/sqljdbc42.jar --dependencies=javax.api,javax.transaction.api,javax.xml.bind.api,javaee.api,sun.jdk,ibm.jdk",
"/subsystem=datasources/jdbc-driver=sqlserver:add(driver-name=sqlserver,driver-module-name=com.microsoft,driver-xa-datasource-class-name=com.microsoft.sqlserver.jdbc.SQLServerXADataSource)",
"xa-data-source add --name=MSSQLXADS --jndi-name=java:jboss/MSSQLXADS --driver-name=sqlserver --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLExceptionSorter --same-rm-override=false --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mssqldb\",\"SelectMethod\"=>\"cursor\"}",
"<datasources> <datasource jndi-name=\"java:jboss/DB2DS\" pool-name=\"DB2DS\"> <connection-url>jdbc:db2://localhost:50000/ibmdb2db</connection-url> <driver>ibmdb2</driver> <pool> <min-pool-size>0</min-pool-size> <max-pool-size>50</max-pool-size> </pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"ibmdb2\" module=\"com.ibm\"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.ibm\"> <resources> <resource-root path=\"db2jcc4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.ibm --resources= /path/to/db2jcc4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=ibmdb2:add(driver-name=ibmdb2,driver-module-name=com.ibm,driver-xa-datasource-class-name=com.ibm.db2.jcc.DB2XADataSource)",
"data-source add --name=DB2DS --jndi-name=java:jboss/DB2DS --driver-name=ibmdb2 --connection-url=jdbc:db2://localhost:50000/ibmdb2db --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter --min-pool-size=0 --max-pool-size=50",
"<datasources> <xa-datasource jndi-name=\"java:jboss/DB2XADS\" pool-name=\"DB2XADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> ibmdb2db </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 50000 </xa-datasource-property> <xa-datasource-property name=\"DriverType\"> 4 </xa-datasource-property> <driver>ibmdb2</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <recovery> <recover-plugin class-name=\"org.jboss.jca.core.recovery.ConfigurableRecoveryPlugin\"> <config-property name=\"EnableIsValid\"> false </config-property> <config-property name=\"IsValidOverride\"> false </config-property> <config-property name=\"EnableClose\"> false </config-property> </recover-plugin> </recovery> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"ibmdb2\" module=\"com.ibm\"> <xa-datasource-class>com.ibm.db2.jcc.DB2XADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.ibm\"> <resources> <resource-root path=\"db2jcc4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.ibm --resources= /path/to/db2jcc4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=ibmdb2:add(driver-name=ibmdb2,driver-module-name=com.ibm,driver-xa-datasource-class-name=com.ibm.db2.jcc.DB2XADataSource)",
"xa-data-source add --name=DB2XADS --jndi-name=java:jboss/DB2XADS --driver-name=ibmdb2 --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.db2.DB2ExceptionSorter --same-rm-override=false --recovery-plugin-class-name=org.jboss.jca.core.recovery.ConfigurableRecoveryPlugin --recovery-plugin-properties={\"EnableIsValid\"=>\"false\",\"IsValidOverride\"=>\"false\",\"EnableClose\"=>\"false\"} --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"ibmdb2db\",\"PortNumber\"=>\"50000\",\"DriverType\"=>\"4\"}",
"<datasources> <datasource jndi-name=\"java:jboss/SybaseDB\" pool-name=\"SybaseDB\"> <connection-url>jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6</connection-url> <driver>sybase</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"sybase\" module=\"com.sybase\"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.sybase\"> <resources> <resource-root path=\"jconn4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.sybase --resources= /path/to/jconn4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=sybase:add(driver-name=sybase,driver-module-name=com.sybase,driver-xa-datasource-class-name=com.sybase.jdbc4.jdbc.SybXADataSource)",
"data-source add --name=SybaseDB --jndi-name=java:jboss/SybaseDB --driver-name=sybase --connection-url=jdbc:sybase:Tds:localhost:5000/DATABASE?JCONNECT_VERSION=6 --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/SybaseXADS\" pool-name=\"SybaseXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mydatabase </xa-datasource-property> <xa-datasource-property name=\"PortNumber\"> 4100 </xa-datasource-property> <xa-datasource-property name=\"NetworkProtocol\"> Tds </xa-datasource-property> <driver>sybase</driver> <xa-pool> <is-same-rm-override>false</is-same-rm-override> </xa-pool> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"sybase\" module=\"com.sybase\"> <xa-datasource-class>com.sybase.jdbc4.jdbc.SybXADataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"com.sybase\"> <resources> <resource-root path=\"jconn4.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=com.sybase --resources= /path/to/jconn4.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api",
"/subsystem=datasources/jdbc-driver=sybase:add(driver-name=sybase,driver-module-name=com.sybase,driver-xa-datasource-class-name=com.sybase.jdbc4.jdbc.SybXADataSource)",
"xa-data-source add --name=SybaseXADS --jndi-name=java:jboss/SybaseXADS --driver-name=sybase --user-name=admin --password=admin --validate-on-match=true --background-validation=false --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.sybase.SybaseExceptionSorter --same-rm-override=false --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mydatabase\",\"PortNumber\"=>\"4100\",\"NetworkProtocol\"=>\"Tds\"}",
"<datasources> <datasource jndi-name=\"java:jboss/MariaDBDS\" pool-name=\"MariaDBDS\"> <connection-url>jdbc:mariadb://localhost:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j",
"/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)",
"data-source add --name=MariaDBDS --jndi-name=java:jboss/MariaDBDS --driver-name=mariadb --connection-url=jdbc:mariadb://localhost:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter",
"<datasources> <xa-datasource jndi-name=\"java:jboss/MariaDBXADS\" pool-name=\"MariaDBXADS\"> <xa-datasource-property name=\"ServerName\"> localhost </xa-datasource-property> <xa-datasource-property name=\"DatabaseName\"> mariadbdb </xa-datasource-property> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </xa-datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version=\"1.0\" ?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j",
"/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)",
"xa-data-source add --name=MariaDBXADS --jndi-name=java:jboss/MariaDBXADS --driver-name=mariadb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter --xa-datasource-properties={\"ServerName\"=>\"localhost\",\"DatabaseName\"=>\"mariadbdb\"}",
"<datasources> <datasource jndi-name=\"java:jboss/MariaDBGaleraClusterDS\" pool-name=\"MariaDBGaleraClusterDS\"> <connection-url>jdbc:mariadb://192.168.1.1:3306,192.168.1.2:3306/jbossdb</connection-url> <driver>mariadb</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> <validation> <valid-connection-checker class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker\"/> <validate-on-match>true</validate-on-match> <background-validation>false</background-validation> <exception-sorter class-name=\"org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter\"/> </validation> </datasource> <drivers> <driver name=\"mariadb\" module=\"org.mariadb\"> <driver-class>org.mariadb.jdbc.Driver</driver-class> <xa-datasource-class>org.mariadb.jdbc.MySQLDataSource</xa-datasource-class> </driver> </drivers> </datasources>",
"<?xml version='1.0' encoding='UTF-8'?> <module xmlns=\"urn:jboss:module:1.1\" name=\"org.mariadb\"> <resources> <resource-root path=\"mariadb-java-client-3.3.0.jar\"/> </resources> <dependencies> <module name=\"javaee.api\"/> <module name=\"sun.jdk\"/> <module name=\"ibm.jdk\"/> <module name=\"javax.api\"/> <module name=\"org.slf4j\"/> <module name=\"javax.transaction.api\"/> </dependencies> </module>",
"module add --name=org.mariadb --resources= /path/to/mariadb-java-client-3.3.0.jar --dependencies=javaee.api,sun.jdk,ibm.jdk,javax.api,javax.transaction.api,org.slf4j",
"/subsystem=datasources/jdbc-driver=mariadb:add(driver-name=mariadb,driver-module-name=org.mariadb,driver-xa-datasource-class-name=org.mariadb.jdbc.MySQLDataSource, driver-class-name=org.mariadb.jdbc.Driver)",
"data-source add --name=MariaDBGaleraClusterDS --jndi-name=java:jboss/MariaDBGaleraClusterDS --driver-name=mariadb --connection-url=jdbc:mariadb://192.168.1.1:3306,192.168.1.2:3306/jbossdb --user-name=admin --password=admin --validate-on-match=true --background-validation=false --valid-connection-checker-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLValidConnectionChecker --exception-sorter-class-name=org.jboss.jca.adapters.jdbc.extensions.mysql.MySQLExceptionSorter"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/datasource_management |
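Each of the datasource examples above exposes a JNDI name through the jndi-name attribute. As a rough illustration of how code deployed to the server could obtain one of these pools, the following Java fragment is a hedged sketch only and is not part of the configuration guide: the JNDI name matches the MariaDB example (java:jboss/MariaDBDS), while the class and method names are hypothetical, and the lookup only resolves inside code running on the application server, for example in a servlet or an EJB.

import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class DatasourceLookupExample {

    public int checkDatasource() throws NamingException, SQLException {
        // The JNDI name must match the jndi-name attribute of the datasource
        // definition; "java:jboss/MariaDBDS" is the MariaDB example above.
        DataSource dataSource = (DataSource) new InitialContext().lookup("java:jboss/MariaDBDS");

        // Borrow a connection from the pool and run a trivial query.
        try (Connection connection = dataSource.getConnection();
             Statement statement = connection.createStatement();
             ResultSet resultSet = statement.executeQuery("SELECT 1")) {
            return resultSet.next() ? resultSet.getInt(1) : -1;
        }
    }
}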
16.5. Synchronizing Users | 16.5. Synchronizing Users Users are not automatically synchronized between Directory Server and Active Directory. Synchronization in both directions has to be configured:
Users in the Active Directory domain are synchronized if it is configured in the sync agreement by selecting the Sync New Windows Users option. All of the Windows users are copied to the Directory Server when synchronization is initiated, and then new users are synchronized over when they are created.
A Directory Server user account is synchronized to Active Directory through specific attributes that are present on the Directory Server entry. Any Directory Server entry must have the ntUser object class and the ntUserCreateNewAccount attribute; the ntUserCreateNewAccount attribute (even on an existing entry) signals the Directory Server Windows Synchronization plug-in to write the entry over to the Active Directory server. New or modified user entries with the ntUser object class added are created and synchronized over to the Windows machine at the regular update, which is a standard poll of entries.
Note A user is not active on the Active Directory domain until it has a password. When an existing user is modified to have the required Windows attributes, that user entry will be synchronized over to the Active Directory domain, but the user will not be able to log in until the password is changed on the Directory Server side or an administrator sets the password on Active Directory. This is because passwords stored in the Directory Server are encrypted, and Password Sync cannot sync already encrypted passwords. To make the user active on the Active Directory domain, reset the user's password.
All synchronized entries in the Directory Server, whether they originated in the Directory Server or in Active Directory, have special synchronization attributes:
ntUserDomainId. This corresponds to the sAMAccountName attribute for Active Directory entries.
ntUniqueId. This contains the value of the objectGUID attribute for the corresponding Windows entry. This attribute is set by the synchronization process and should not be set or modified manually.
ntUserDeleteAccount. This attribute is set automatically when a Windows entry is synchronized over but must be set manually for Directory Server entries. If ntUserDeleteAccount has the value true, the corresponding Windows entry will be deleted when the Directory Server entry is deleted. Otherwise, the entry remains in Active Directory but is removed from the Directory Server database if it is deleted in the Directory Server.
Setting ntUserCreateNewAccount and ntUserDeleteAccount on Directory Server entries allows the Directory Manager precise control over which users within the synchronized subtree are synchronized on Active Directory.
16.5.1. User Attributes Synchronized between Directory Server and Active Directory Only a subset of Directory Server and Active Directory attributes are synchronized. These attributes are hard-coded and are defined regardless of which way the entry is being synchronized. Any other attributes present in the entry, either in Directory Server or in Active Directory, remain unaffected by synchronization. Some attributes used in Directory Server and Active Directory are identical. These are usually attributes defined in an LDAP standard, which are common among all LDAP services. These attributes are synchronized to one another exactly.
Table 16.2, "User Schema That Are the Same in Directory Server and Windows Servers" shows attributes that are the same between the Directory Server and Windows servers. Some attributes define the same information, but the names of the attributes or their schema definitions are different. These attributes are mapped between Active Directory and Directory Server, so that attribute A in one server is treated as attribute B in the other. For synchronization, many of these attributes relate to Windows-specific information. Table 16.1, "User Schema Mapped between Directory Server and Active Directory" shows the attributes that are mapped between the Directory Server and Windows servers. For more information on the differences in ways that Directory Server and Active Directory handle some schema elements, see Section 16.5.2, "User Schema Differences between Red Hat Directory Server and Active Directory" . Table 16.1. User Schema Mapped between Directory Server and Active Directory Directory Server Active Directory cn [a] name ntUserDomainId sAMAccountName ntUserHomeDir homeDirectory ntUserScriptPath scriptPath ntUserLastLogon lastLogon ntUserLastLogoff lastLogoff ntUserAcctExpires accountExpires ntUserCodePage codePage ntUserLogonHours logonHours ntUserMaxStorage maxStorage ntUserProfile profilePath ntUserParms userParameters ntUserWorkstations userWorkstations [a] The cn is treated differently than other synchronized attributes. It is mapped directly ( cn to cn ) when syncing from Directory Server to Active Directory. When syncing from Active Directory to Directory Server, however, cn is mapped from the name attribute on Windows to the cn attribute in Directory Server. Table 16.2. User Schema That Are the Same in Directory Server and Windows Servers cn [a] physicalDeliveryOfficeName description postOfficeBox destinationIndicator postalAddress facsimileTelephoneNumber postalCode givenname registeredAddress homePhone sn homePostalAddress st initials street l telephoneNumber mail teletexTerminalIdentifier mobile telexNumber o title ou usercertificate pager x121Address [a] The cn is treated differently than other synchronized attributes. It is mapped directly ( cn to cn ) when syncing from Directory Server to Active Directory. When syncing from Active Directory to Directory Server, however, cn is mapped from the name attribute on Windows to the cn attribute in Directory Server. 16.5.2. User Schema Differences between Red Hat Directory Server and Active Directory Although Active Directory supports the same basic X.500 object classes as Directory Server, there are a few incompatibilities of which administrators should be aware. 16.5.2.1. Values for cn Attributes In Directory Server, the cn attribute can be multi-valued, while in Active Directory this attribute must have only a single value. When the Directory Server cn attribute is synchronized, then, only one value is sent to the Active Directory peer. What this means for synchronization is that, potentially, if a cn value is added to an Active Directory entry and that value is not one of the values for cn in Directory Server, then all of the Directory Server cn values are overwritten with the single Active Directory value. One other important difference is that Active Directory uses the cn attribute as its naming attribute, where Directory Server uses uid . This means that there is the potential to rename the entry entirely (and accidentally) if the cn attribute is edited in the Directory Server.
If that cn change is written over to the Active Directory entry, then the entry is renamed, and the new named entry is written back over to Directory Server. 16.5.2.2. Password Policies Both Active Directory and Directory Server can enforce password policies such as password minimum length or maximum age. Windows Synchronization makes no attempt to ensure that the policies are consistent, enforced, or synchronized. If password policy is not consistent in both Directory Server and Active Directory, then password changes made on one system may fail when synchronized to the other system. The default password syntax setting on Directory Server mimics the default password complexity rules that Active Directory enforces. 16.5.2.3. Values for street and streetAddress Active Directory uses the attribute streetAddress for a user or group's postal address; this is the way that Directory Server uses the street attribute. There are two important differences in the way that Active Directory and Directory Server use the streetAddress and street attributes, respectively: In Directory Server, streetAddress is an alias for street . Active Directory also has the street attribute, but it is a separate attribute that can hold an independent value, not an alias for streetAddress . Active Directory defines both streetAddress and street as single-valued attributes, while Directory Server defines street as a multi-valued attribute, as specified in RFC 4519. Because of the different ways that Directory Server and Active Directory handle streetAddress and street attributes, there are two rules to follow when setting address attributes in Active Directory and Directory Server: Windows Synchronization maps streetAddress in the Windows entry to street in Directory Server. To avoid conflicts, the street attribute should not be used in Active Directory. Only one Directory Server street attribute value is synchronized to Active Directory. If the streetAddress attribute is changed in Active Directory and the new value does not already exist in Directory Server, then all street attribute values in Directory Server are replaced with the new, single Active Directory value. 16.5.2.4. Constraints on the initials Attribute For the initials attribute, Active Directory imposes a maximum length constraint of six characters, but Directory Server does not have a length limit. If an initials attribute longer than six characters is added to Directory Server, the value is trimmed when it is synchronized with the Active Directory entry. 16.5.3. Configuring User Synchronization for Directory Server Users For Directory Server users to be synchronized over to Active Directory, the user entries must have the appropriate sync attributes set. To enable synchronization through the command line, add the required sync attributes to an entry or create an entry with those attributes. Three schema elements are required for synchronization: The ntUser object class The ntUserDomainId attribute, to give the Windows ID The ntUserCreateNewAccount attribute, to signal to the synchronization plug-in to sync the Directory Server entry over to Active Directory For example, using the ldapmodify utility: Many additional Windows and user attributes can be added to the entry. All of the schema which is synchronized is listed in Section 16.5.1, "User Attributes Synchronized between Directory Server and Active Directory" . 
Windows-specific attributes, belonging to the ntUser object class, are described in more detail in the Red Hat Directory Server 11 Configuration, Command, and File Reference . Note Reset the user's password. A user is not active on the Active Directory domain until it has a password. When an existing user is modified to have the required Windows attributes, that user entry will be synchronized over to the Active Directory domain, but will not be able to log in until the password is changed on the Directory Server side or an administrator sets the password on Active Directory. Password Sync cannot sync encrypted passwords. So, to make the user active on the Active Directory domain, reset the user's password. 16.5.4. Configuring User Synchronization for Active Directory Users Synchronization for Windows users (users which originate in the Active Directory domain) is configured in the sync agreement. To enable user synchronization: To disable user synchronization, set the --sync-users option to off . | [
"dn: uid=scarter,ou=People,dc=example,dc=com changetype: modify add: objectClass objectClass:ntUser - add: ntUserDomainId ntUserDomainId: Sam Carter - add: ntUserCreateNewAccount ntUserCreateNewAccount: true - add: ntUserDeleteAccount ntUserDeleteAccount: true",
"dsconf -D \"cn=Directory Manager\" ldap://server.example.com repl-winsync-agmt set --sync-users=\"on\" --suffix=\" dc=example,dc=com \" example-agreement"
] | https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/using_windows_sync-synchronizing_users |
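A quick way to confirm that a given Directory Server entry carries the synchronization attributes described above is an ldapsearch query. The following sketch is illustrative only: the server URL, bind DN, search base, and uid are assumptions reused from the example entry and must be replaced with values from the actual deployment.

# Hypothetical check of the sync-related attributes on one user entry (adjust URL, bind DN, base, and uid)
ldapsearch -H ldap://server.example.com:389 -D "cn=Directory Manager" -W \
    -b "ou=People,dc=example,dc=com" "(uid=scarter)" \
    objectClass ntUserDomainId ntUserCreateNewAccount ntUserDeleteAccount ntUniqueId

If ntUniqueId appears in the output, the entry has already been linked to its Active Directory counterpart by a previous synchronization run.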
5.3. Using Valgrind to Profile Memory Usage | 5.3. Using Valgrind to Profile Memory Usage Valgrind is a framework that provides instrumentation to user-space binaries. It ships with a number of tools that can be used to profile and analyze program performance. The tools outlined in this section provide analysis that can aid in the detection of memory errors such as the use of uninitialized memory and improper allocation or deallocation of memory. All are included in the valgrind package, and can be run with the following command: Replace toolname with the name of the tool you wish to use (for memory profiling, memcheck , massif , or cachegrind ), and program with the program you wish to profile with Valgrind. Be aware that Valgrind's instrumentation will cause your program to run more slowly than it would normally. An overview of Valgrind's capabilities is provided in Section 3.7.3, "Valgrind" . Further details, including information about available plugins for Eclipse, are included in the Developer Guide , available from http://access.redhat.com/site/documentation/Red_Hat_Enterprise_Linux/ . Accompanying documentation can be viewed with the man valgrind command when the valgrind package is installed, or found in the following locations: /usr/share/doc/valgrind- version /valgrind_manual.pdf , and /usr/share/doc/valgrind- version /html/index.html . 5.3.1. Profiling Memory Usage with Memcheck Memcheck is the default Valgrind tool, and can be run with valgrind program , without specifying --tool=memcheck . It detects and reports on a number of memory errors that can be difficult to detect and diagnose, such as memory access that should not occur, the use of undefined or uninitialized values, incorrectly freed heap memory, overlapping pointers, and memory leaks. Programs run ten to thirty times more slowly with Memcheck than when run normally. Memcheck returns specific errors depending on the type of issue it detects. These errors are outlined in detail in the Valgrind documentation included at /usr/share/doc/valgrind- version /valgrind_manual.pdf . Note that Memcheck can only report these errors - it cannot prevent them from occurring. If your program accesses memory in a way that would normally result in a segmentation fault, the segmentation fault still occurs. However, Memcheck will log an error message immediately prior to the fault. Memcheck provides command line options that can be used to focus the checking process. Some of the options available are: --leak-check When enabled, Memcheck searches for memory leaks when the client program finishes. The default value is summary , which outputs the number of leaks found. Other possible values are yes and full , both of which give details of each individual leak, and no , which disables memory leak checking. --undef-value-errors When enabled (set to yes ), Memcheck reports errors when undefined values are used. When disabled (set to no ), undefined value errors are not reported. This is enabled by default. Disabling it speeds up Memcheck slightly. --ignore-ranges Allows the user to specify one or more ranges that Memcheck should ignore when checking for addressability. Multiple ranges are delimited by commas, for example, --ignore-ranges=0xPP-0xQQ,0xRR-0xSS . For a full list of options, refer to the documentation included at /usr/share/doc/valgrind- version /valgrind_manual.pdf . | [
"valgrind --tool= toolname program"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-valgrind |
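To illustrate how the Memcheck options above combine on a command line, the following sketch profiles a hypothetical binary named ./myprog; the program name and the ignored address range are placeholders, not values taken from this guide.

# Report full details for every memory leak found at program exit
valgrind --tool=memcheck --leak-check=full ./myprog

# Speed up the run slightly by disabling undefined-value errors, and skip a placeholder address range
valgrind --tool=memcheck --undef-value-errors=no --ignore-ranges=0x1000000-0x2000000 ./myprog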
Chapter 1. Red Hat OpenShift support for Windows Containers overview | Chapter 1. Red Hat OpenShift support for Windows Containers overview Red Hat OpenShift support for Windows Containers is a feature providing the ability to run Windows compute nodes in an OpenShift Container Platform cluster. This is possible by using the Red Hat Windows Machine Config Operator (WMCO) to install and manage Windows nodes. With a Red Hat subscription, you can get support for running Windows workloads in OpenShift Container Platform. Windows instances deployed by the WMCO are configured with the containerd container runtime. For more information, see the release notes . You can add Windows nodes either by creating a compute machine set or by specifying existing Bring-Your-Own-Host (BYOH) Window instances through a configuration map . Note Compute machine sets are not supported for bare metal or provider agnostic clusters. For workloads including both Linux and Windows, OpenShift Container Platform allows you to deploy Windows workloads running on Windows Server containers while also providing traditional Linux workloads hosted on Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL). For more information, see getting started with Windows container workloads . You need the WMCO to run Windows workloads in your cluster. The WMCO orchestrates the process of deploying and managing Windows workloads on a cluster. For more information, see how to enable Windows container workloads . You can create a Windows MachineSet object to create infrastructure Windows machine sets and related machines so that you can move supported Windows workloads to the new Windows machines. You can create a Windows MachineSet object on multiple platforms. You can schedule Windows workloads to Windows compute nodes. You can perform Windows Machine Config Operator upgrades to ensure that your Windows nodes have the latest updates. You can remove a Windows node by deleting a specific machine. You can use Bring-Your-Own-Host (BYOH) Windows instances to repurpose Windows Server VMs and bring them to OpenShift Container Platform. BYOH Windows instances benefit users who are looking to mitigate major disruptions in the event that a Windows server goes offline. You can use BYOH Windows instances as nodes on OpenShift Container Platform 4.8 and later versions. You can disable Windows container workloads by performing the following: Uninstalling the Windows Machine Config Operator Deleting the Windows Machine Config Operator namespace | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/windows-container-overview |
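As a rough sketch only, the following oc commands show how a cluster administrator might confirm that the WMCO is running and that Windows compute nodes have joined the cluster. The operator namespace shown is an assumption and can differ depending on how the Operator was installed.

# Check the Windows Machine Config Operator pod (the namespace is an assumption)
oc get pods -n openshift-windows-machine-config-operator

# List nodes that report the Windows operating system label
oc get nodes -l kubernetes.io/os=windows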
function::register | function::register Name function::register - Return the signed value of the named CPU register Synopsis Arguments name Name of the register to return Description Return the value of the named CPU register, as it was saved when the current probe point was hit. If the register is 32 bits, it is sign-extended to 64 bits. For the i386 architecture, the following names are recognized. (name1/name2 indicates that name1 and name2 are alternative names for the same register.) eax/ax, ebp/bp, ebx/bx, ecx/cx, edi/di, edx/dx, eflags/flags, eip/ip, esi/si, esp/sp, orig_eax/orig_ax, xcs/cs, xds/ds, xes/es, xfs/fs, xss/ss. For the x86_64 architecture, the following names are recognized: 64-bit registers: r8, r9, r10, r11, r12, r13, r14, r15, rax/ax, rbp/bp, rbx/bx, rcx/cx, rdi/di, rdx/dx, rip/ip, rsi/si, rsp/sp; 32-bit registers: eax, ebp, ebx, ecx, edx, edi, eip, esi, esp, flags/eflags, orig_eax; segment registers: xcs/cs, xss/ss. For powerpc, the following names are recognized: r0, r1, ... r31, nip, msr, orig_gpr3, ctr, link, xer, ccr, softe, trap, dar, dsisr, result. For s390x, the following names are recognized: r0, r1, ... r15, args, psw.mask, psw.addr, orig_gpr2, ilc, trap. For AArch64, the following names are recognized: x0, x1, ... x30, fp, lr, sp, pc, and orig_x0. | [
"register:long(name:string)"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-register |
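A minimal sketch of calling register from a probe follows. The probe point and the output format are illustrative assumptions; the example presumes an x86_64 system where sp is an alias for rsp, as listed above.

# Print the stack pointer once when the (illustrative) kernel function is entered, then exit
stap -e 'probe kernel.function("vfs_read") { printf("sp=0x%x\n", register("sp")); exit() }'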
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback. | null | https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/managing_system_content_and_patch_updates_with_red_hat_insights_with_fedramp/proc-providing-feedback-on-redhat-documentation |
4.9.2. Object Selection | 4.9.2. Object Selection This section provides a series of tables that list the information you can display about the LVM objects with the pvs , vgs , and lvs commands. For convenience, a field name prefix can be dropped if it matches the default for the command. For example, with the pvs command, name means pv_name , but with the vgs command, name is interpreted as vg_name . Executing the following command is the equivalent of executing pvs -o pv_free . The pvs Command Table 4.1, "pvs Display Fields" lists the display arguments of the pvs command, along with the field name as it appears in the header display and a description of the field. Table 4.1. pvs Display Fields Argument Header Description dev_size DevSize The size of the underlying device on which the physical volume was created pe_start 1st PE Offset to the start of the first physical extent in the underlying device pv_attr Attr Status of the physical volume: (a)llocatable or e(x)ported. pv_fmt Fmt The metadata format of the physical volume ( lvm2 or lvm1 ) pv_free PFree The free space remaining on the physical volume pv_name PV The physical volume name pv_pe_alloc_count Alloc Number of used physical extents pv_pe_count PE Number of physical extents pvseg_size SSize The segment size of the physical volume pvseg_start Start The starting physical extent of the physical volume segment pv_size PSize The size of the physical volume pv_tags PV Tags LVM tags attached to the physical volume pv_used Used The amount of space currently used on the physical volume pv_uuid PV UUID The UUID of the physical volume The pvs command displays the following fields by default: pv_name , vg_name , pv_fmt , pv_attr , pv_size , pv_free . The display is sorted by pv_name . Using the -v argument with the pvs command adds the following fields to the default display: dev_size , pv_uuid . You can use the --segments argument of the pvs command to display information about each physical volume segment. A segment is a group of extents. A segment view can be useful if you want to see whether your logical volume is fragmented. The pvs --segments command displays the following fields by default: pv_name , vg_name , pv_fmt , pv_attr , pv_size , pv_free , pvseg_start , pvseg_size . The display is sorted by pv_name and pvseg_size within the physical volume. You can use the pvs -a command to see devices detected by LVM that have not been initialized as LVM physical volumes. The vgs Command Table 4.2, "vgs Display Fields" lists the display arguments of the vgs command, along with the field name as it appears in the header display and a description of the field. Table 4.2. vgs Display Fields Argument Header Description lv_count #LV The number of logical volumes the volume group contains max_lv MaxLV The maximum number of logical volumes allowed in the volume group (0 if unlimited) max_pv MaxPV The maximum number of physical volumes allowed in the volume group (0 if unlimited) pv_count #PV The number of physical volumes that define the volume group snap_count #SN The number of snapshots the volume group contains vg_attr Attr Status of the volume group: (w)riteable, (r)eadonly, resi(z)eable, e(x)ported, (p)artial and (c)lustered. 
vg_extent_count #Ext The number of physical extents in the volume group vg_extent_size Ext The size of the physical extents in the volume group vg_fmt Fmt The metadata format of the volume group ( lvm2 or lvm1 ) vg_free VFree Size of the free space remaining in the volume group vg_free_count Free Number of free physical extents in the volume group vg_name VG The volume group name vg_seqno Seq Number representing the revision of the volume group vg_size VSize The size of the volume group vg_sysid SYS ID LVM1 System ID vg_tags VG Tags LVM tags attached to the volume group vg_uuid VG UUID The UUID of the volume group The vgs command displays the following fields by default: vg_name , pv_count , lv_count , snap_count , vg_attr , vg_size , vg_free . The display is sorted by vg_name . Using the -v argument with the vgs command adds the following fields to the default display: vg_extent_size , vg_uuid . The lvs Command Table 4.3, "lvs Display Fields" lists the display arguments of the lvs command, along with the field name as it appears in the header display and a description of the field. Table 4.3. lvs Display Fields Argument Header Description chunksize chunk_size Chunk Unit size in a snapshot volume copy_percent Copy% The synchronization percentage of a mirrored logical volume; also used when physical extents are being moved with the pv_move command devices Devices The underlying devices that make up the logical volume: the physical volumes, logical volumes, and start physical extents and logical extents lv_attr Attr The status of the logical volume. The logical volume attribute bits are as follows: Bit 1: Volume type: (m)irrored, (M)irrored without initial sync, (o)rigin, (p)vmove, (s)napshot, invalid (S)napshot, (v)irtual Bit2: Permissions: (w)riteable, (r)ead-only Bit 3: Allocation policy: (c)ontiguous, (n)ormal, (a)nywhere, (i)nherited. This is capitalized if the volume is currently locked against allocation changes, for example while executing the pvmove command. Bit 4: fixed (m)inor Bit 5 State: (a)ctive, (s)uspended, (I)nvalid snapshot, invalid (S)uspended snapshot, mapped (d)evice present without tables, mapped device present with (i)nactive table Bit 6: device (o)pen lv_kernel_major KMaj Actual major device number of the logical volume (-1 if inactive) lv_kernel_minor KMIN Actual minor device number of the logical volume (-1 if inactive) lv_major Maj The persistent major device number of the logical volume (-1 if not specified) lv_minor Min The persistent minor device number of the logical volume (-1 if not specified) lv_name LV The name of the logical volume lv_size LSize The size of the logical volume lv_tags LV Tags LVM tags attached to the logical volume lv_uuid LV UUID The UUID of the logical volume. 
mirror_log Log Device on which the mirror log resides modules Modules Corresponding kernel device-mapper target necessary to use this logical volume move_pv Move Source physical volume of a temporary logical volume created with the pvmove command origin Origin The origin device of a snapshot volume regionsize region_size Region The unit size of a mirrored logical volume seg_count #Seg The number of segments in the logical volume seg_size SSize The size of the segments in the logical volume seg_start Start Offset of the segment in the logical volume seg_tags Seg Tags LVM tags attached to the segments of the logical volume segtype Type The segment type of a logical volume (for example: mirror, striped, linear) snap_percent Snap% Current percentage of a snapshot volume that is in use stripes #Str Number of stripes or mirrors in a logical volume stripesize stripe_size Stripe Unit size of the stripe in a striped logical volume The lvs command displays the following fields by default: lv_name , vg_name , lv_attr , lv_size , origin , snap_percent , move_pv , mirror_log , copy_percent . The default display is sorted by vg_name and lv_name within the volume group. Using the -v argument with the lvs command adds the following fields to the default display: seg_count , lv_major , lv_minor , lv_kernel_major , lv_kernel_minor , lv_uuid . You can use the --segments argument of the lvs command to display information with default columns that emphasize the segment information. When you use the segments argument, the seg prefix is optional. The lvs --segments command displays the following fields by default: lv_name , vg_name , lv_attr , stripes , segtype , seg_size . The default display is sorted by vg_name , lv_name within the volume group, and seg_start within the logical volume. If the logical volumes were fragmented, the output from this command would show that. Using the -v argument with the lvs --segments command adds the following fields to the default display: seg_start , stripesize , chunksize . The following example shows the default output of the lvs command on a system with one logical volume configured, followed by the default output of the lvs command with the segments argument specified.
"pvs -o free PFree 17.14G 17.09G 17.14G",
"pvs PV VG Fmt Attr PSize PFree /dev/sdb1 new_vg lvm2 a- 17.14G 17.14G /dev/sdc1 new_vg lvm2 a- 17.14G 17.09G /dev/sdd1 new_vg lvm2 a- 17.14G 17.13G",
"pvs -v Scanning for physical volume names PV VG Fmt Attr PSize PFree DevSize PV UUID /dev/sdb1 new_vg lvm2 a- 17.14G 17.14G 17.14G onFF2w-1fLC-ughJ-D9eB-M7iv-6XqA-dqGeXY /dev/sdc1 new_vg lvm2 a- 17.14G 17.09G 17.14G Joqlch-yWSj-kuEn-IdwM-01S9-XO8M-mcpsVe /dev/sdd1 new_vg lvm2 a- 17.14G 17.13G 17.14G yvfvZK-Cf31-j75k-dECm-0RZ3-0dGW-tUqkCS",
"pvs --segments PV VG Fmt Attr PSize PFree Start SSize /dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 0 1172 /dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 1172 16 /dev/hda2 VolGroup00 lvm2 a- 37.16G 32.00M 1188 1 /dev/sda1 vg lvm2 a- 17.14G 16.75G 0 26 /dev/sda1 vg lvm2 a- 17.14G 16.75G 26 24 /dev/sda1 vg lvm2 a- 17.14G 16.75G 50 26 /dev/sda1 vg lvm2 a- 17.14G 16.75G 76 24 /dev/sda1 vg lvm2 a- 17.14G 16.75G 100 26 /dev/sda1 vg lvm2 a- 17.14G 16.75G 126 24 /dev/sda1 vg lvm2 a- 17.14G 16.75G 150 22 /dev/sda1 vg lvm2 a- 17.14G 16.75G 172 4217 /dev/sdb1 vg lvm2 a- 17.14G 17.14G 0 4389 /dev/sdc1 vg lvm2 a- 17.14G 17.14G 0 4389 /dev/sdd1 vg lvm2 a- 17.14G 17.14G 0 4389 /dev/sde1 vg lvm2 a- 17.14G 17.14G 0 4389 /dev/sdf1 vg lvm2 a- 17.14G 17.14G 0 4389 /dev/sdg1 vg lvm2 a- 17.14G 17.14G 0 4389",
"pvs -a PV VG Fmt Attr PSize PFree /dev/VolGroup00/LogVol01 -- 0 0 /dev/new_vg/lvol0 -- 0 0 /dev/ram -- 0 0 /dev/ram0 -- 0 0 /dev/ram2 -- 0 0 /dev/ram3 -- 0 0 /dev/ram4 -- 0 0 /dev/ram5 -- 0 0 /dev/ram6 -- 0 0 /dev/root -- 0 0 /dev/sda -- 0 0 /dev/sdb -- 0 0 /dev/sdb1 new_vg lvm2 a- 17.14G 17.14G /dev/sdc -- 0 0 /dev/sdc1 new_vg lvm2 a- 17.14G 17.09G /dev/sdd -- 0 0 /dev/sdd1 new_vg lvm2 a- 17.14G 17.14G",
"vgs VG #PV #LV #SN Attr VSize VFree new_vg 3 1 1 wz--n- 51.42G 51.36G",
"vgs -v Finding all volume groups Finding volume group \"new_vg\" VG Attr Ext #PV #LV #SN VSize VFree VG UUID new_vg wz--n- 4.00M 3 1 1 51.42G 51.36G jxQJ0a-ZKk0-OpMO-0118-nlwO-wwqd-fD5D32",
"lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20",
"lvs -v Finding all logical volumes LV VG #Seg Attr LSize Maj Min KMaj KMin Origin Snap% Move Copy% Log LV UUID lvol0 new_vg 1 owi-a- 52.00M -1 -1 253 3 LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78 newvgsnap1 new_vg 1 swi-a- 8.00M -1 -1 253 5 lvol0 0.20 1ye1OU-1cIu-o79k-20h2-ZGF0-qCJm-CfbsIx",
"lvs --segments LV VG Attr #Str Type SSize LogVol00 VolGroup00 -wi-ao 1 linear 36.62G LogVol01 VolGroup00 -wi-ao 1 linear 512.00M lv vg -wi-a- 1 linear 104.00M lv vg -wi-a- 1 linear 104.00M lv vg -wi-a- 1 linear 104.00M lv vg -wi-a- 1 linear 88.00M",
"lvs -v --segments Finding all logical volumes LV VG Attr Start SSize #Str Type Stripe Chunk lvol0 new_vg owi-a- 0 52.00M 1 linear 0 0 newvgsnap1 new_vg swi-a- 0 8.00M 1 linear 0 8.00K",
"lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg -wi-a- 52.00M lvs --segments LV VG Attr #Str Type SSize lvol0 new_vg -wi-a- 1 linear 52.00M"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/report_object_selection |
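Building on the field tables above, the following sketch assembles custom reports; the chosen fields are arbitrary examples and the output naturally depends on the volume groups present on the system.

# Custom physical volume report: name, size, free space, and UUID
pvs -o pv_name,pv_size,pv_free,pv_uuid

# Custom logical volume report using field names from Table 4.3
lvs -o lv_name,lv_size,seg_count,devices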
Chapter 12. Threading and scheduling | Chapter 12. Threading and scheduling AMQ C++ supports full multithreading with C++11 and later. Limited multithreading is possible with older versions of C++. See Section 12.6, "Using older versions of C++" . 12.1. The threading model The container object can handle multiple connections concurrently. When AMQP events occur on connections, the container calls messaging_handler callback functions. Callbacks for any one connection are serialized (not called concurrently), but callbacks for different connections can be safely executed in parallel. You can assign a handler to a connection in container::connect() or listen_handler::on_accept() using the handler connection option. It is recommended to create a separate handler for each connection so that the handler does not need locks or other synchronization to protect it against concurrent use by library threads. If any non-library threads use the handler concurrently, then you need synchronization. 12.2. Thread-safety rules The connection , session , sender , receiver , tracker , and delivery objects are not thread-safe and are subject to the following rules. You must use them only from a messaging_handler callback or a work_queue function. You must not use objects belonging to one connection from a callback for another connection. You can store AMQ C++ objects in member variables for use in a later callback, provided you respect rule two. The message object is a value type with the same threading constraints as a standard C++ built-in type. It cannot be concurrently modified. 12.3. Work queues The work_queue interface provides a safe way to communicate between different connection handlers or between non-library threads and connection handlers. Each connection has an associated work_queue . The work queue is thread-safe (C++11 or greater). Any thread can add work. A work item is a std::function , and bound arguments are called like an event callback. When the library calls the work function, it is serialized safely so that you can treat the work function like an event callback and safely access the handler and AMQ C++ objects stored on it. 12.4. The wake primitive The connection::wake() method allows any thread to prompt activity on a connection by triggering an on_connection_wake() callback. This is the only thread-safe method on connection . wake() is a lightweight, low-level primitive for signaling between threads. It does not carry any code or data, unlike work_queue . Multiple calls to wake() might be coalesced into a single on_connection_wake() . Calls to on_connection_wake() can occur without any application call to wake() since the library uses wake() internally. The semantics of wake() are similar to std::condition_variable::notify_one() . There will be a wakeup, but there must be some shared application state to determine why the wakeup occurred and what, if anything, to do about it. Work queues are easier to use in many instances, but wake() may be useful if you already have your own external thread-safe queues and need an efficient way to wake a connection to check them for data. 12.5. Scheduling deferred work AMQ C++ has the ability to execute code after a delay. You can use this to implement time-based behaviors in your application, such as periodically scheduled work or timeouts. To defer work for a fixed amount of time, use the schedule method to set the delay and register a function defining the work. 
Example: Sending a message after a delay void on_sender_open(proton::sender& snd) override { proton::duration interval {5 * proton::duration::SECOND}; snd.work_queue().schedule(interval, [=] { send(snd); }); } void send(proton::sender snd) { if (snd.credit() > 0) { proton::message msg {"hello"}; snd.send(msg); } } This example uses the schedule method on the work queue of the sender in order to establish it as the execution context for the work. 12.6. Using older versions of C++ Before C++11 there was no standard support for threading in C++. You can use AMQ C++ with threads but with the following limitations. The container does not create threads. It only uses the single thread that calls container::run() . None of the AMQ C++ library classes are thread-safe, including container and work_queue . You need an external lock to use container in multiple threads. The only exception is connection::wake() . It is thread-safe even in older C++. The container::schedule() and work_queue APIs accept C++11 lambda functions to define units of work. If you are using a version of C++ that does not support lambdas, you must use the make_work() function instead. | [
"void on_sender_open(proton::sender& snd) override { proton::duration interval {5 * proton::duration::SECOND}; snd.work_queue().schedule(interval, [=] { send(snd); }); } void send(proton::sender snd) { if (snd.credit() > 0) { proton::message msg {\"hello\"}; snd.send(msg); } }"
] | https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_cpp_client/threading_and_scheduling |
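The C++ fragments in this chapter are assumed to live in an ordinary source file that is built against the AMQ C++ (Qpid Proton) library. A hedged example of such a build command follows; the source and binary names are placeholders, and the invocation may need additional include or library paths on some systems.

# Build an example that uses container, messaging_handler, and work_queue with C++11 enabled (file names are placeholders)
g++ scheduled_send.cpp -o scheduled_send -std=c++11 -lqpid-proton-cpp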
Chapter 7. Internationalization | Chapter 7. Internationalization 7.1. Red Hat Enterprise Linux 8 International Languages Red Hat Enterprise Linux 8 supports the installation of multiple languages and the changing of languages based on your requirements. East Asian Languages - Japanese, Korean, Simplified Chinese, and Traditional Chinese. European Languages - English, German, Spanish, French, Italian, Portuguese, and Russian. The following table lists the fonts and input methods provided for various major languages. Language Default Font (Font Package) Input Methods English dejavu-sans-fonts French dejavu-sans-fonts German dejavu-sans-fonts Italian dejavu-sans-fonts Russian dejavu-sans-fonts Spanish dejavu-sans-fonts Portuguese dejavu-sans-fonts Simplified Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libpinyin, libpinyin Traditional Chinese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-libzhuyin, libzhuyin Japanese google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-kkc, libkkc Korean google-noto-sans-cjk-ttc-fonts, google-noto-serif-cjk-ttc-fonts ibus-hangul, libhangul 7.2. Notable changes to internationalization in RHEL 8 RHEL 8 introduces the following changes to internationalization compared to RHEL 7: Support for the Unicode 11 computing industry standard has been added. Internationalization is distributed in multiple packages, which allows for smaller footprint installations. For more information, see glibc localization for RHEL is distributed in multiple packages . The glibc package updates for multiple locales are now synchronized with the Common Locale Data Repository (CLDR). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.0_release_notes/internationalization
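As a practical sketch of the language support described above, the commands below install an additional language on RHEL 8 and switch the system locale; Japanese is used purely as an example, and the supplementary package names are assumptions that may vary by language.

# Install the glibc locale data and language support packages for Japanese (example language)
dnf install glibc-langpack-ja langpacks-ja

# Set the system-wide default locale and confirm the change
localectl set-locale LANG=ja_JP.UTF-8
localectl status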
4.2. Logging Into the Piranha Configuration Tool | 4.2. Logging Into the Piranha Configuration Tool When configuring the Load Balancer Add-On, you should always begin by configuring the primary router with the Piranha Configuration Tool . To do this, verify that the piranha-gui service is running and an administrative password has been set, as described in Section 2.2, "Setting a Password for the Piranha Configuration Tool" . If you are accessing the machine locally, you can open http://localhost:3636 in a Web browser to access the Piranha Configuration Tool . Otherwise, type in the host name or real IP address for the server followed by :3636 . Once the browser connects, you will see the screen shown in Figure 4.1, "The Welcome Panel" . Figure 4.1. The Welcome Panel Click on the Login button and enter piranha for the Username and the administrative password you created in the Password field. The Piranha Configuration Tool is made of four main screens or panels . In addition, the Virtual Servers panel contains four subsections . The CONTROL/MONITORING panel is the first panel after the login screen. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-piranha-login-vsa
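A short sketch of the preliminary checks mentioned above follows; it assumes a Red Hat Enterprise Linux 6 LVS router with the piranha packages already installed, and the exact service commands may differ slightly between releases.

# Confirm that the Piranha web configuration service is running, starting it if necessary
service piranha-gui status
service piranha-gui start

# Set the administrative password referenced in Section 2.2 (prompts interactively)
/usr/sbin/piranha-passwd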
Chapter 4. Using Elytron client default SSLcontext security provider in JBoss EAP clients | Chapter 4. Using Elytron client default SSLcontext security provider in JBoss EAP clients To make a Java Virtual Machine (JVM) use the Elytron client configuration to provide a default SSLcontext , you can use the WildFlyElytronClientDefaultSSLContextProvider . Use this provider to make your client libraries automatically use the Elytron client configuration when requesting the default SSLContext . 4.1. Elytron client default SSL context security provider Elytron client provides a Java security provider, org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider , that you can use to register a Java virtual machine (JVM)-wide default SSL context. The WildFlyElytronClientDefaultSSLContextProvider provider works as follows: The provider instantiates the SSLContext when the SSLContext.getDefault() method is called. The SSLContext is initiated from the authentication context obtained from one of the following places: Elytron client configuration file passed as an argument to the provider. Automatically discovered wildfly-config.xml file on the filesystem. For more information, see The Default Configuration Approach . A client configuration file passed as an argument to the provider has the precedence. When the SSLContext.getDefault() method is called, the JVM returns the SSLContext instantiated by the provider. As the Elytron client can have multiple SSL contexts configured, rules are used to choose a single SSL context for the connection. The default SSL Context is the one that matches all rules. The provider returns this default SSL context. Note If no default SSLContext is configured or no configuration is present, the provider will be ignored. When you register the WildFlyElytronClientDefaultSSLContextProvider provider, all client libraries that use SSLContext.getDefault() method use the Elytron client configuration without having to use Elytron client APIs in their code. To register the provider, you must add runtime dependency on the following artifacts: wildfly-elytron-client wildfly-client-config You can register the provider either programmatically, in your client code, or statically in the java.security file. Use programmatic registration when you want to decide dynamically what providers to register and use. Registering the provider programmatically You can register the provider programmatically in your client code as shown below: Security.insertProviderAt(new WildFlyElytronClientDefaultSSLContextProvider(CONFIG_FILE_PATH), 1); Registering the provider statically You can register the provider in the java.security file as shown below: security.provider.1=org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider <CONFIG_FILE_PATH> Additional resources Example of creating a client that loads the default SSLContext 4.2. Example of creating a client that loads the default SSL context The following example demonstrates registering of the WildFlyElytronClientDefaultSSLContextProvider provider programmatically and using the SSLContext.getDefault() method to obtain the SSLContext initialized by an Elytron client. The example uses static client configuration supplied as an argument to the provider. 4.2.1. Creating a Maven project for JBoss EAP client To create a client for applications deployed to JBoss EAP, create a Maven project with the required dependencies and the directory structure. Prerequisites You have installed Maven. 
For more information, see Downloading Apache Maven . Procedure Set up a Maven project by using the mvn command. The command creates the directory structure for the project and the pom.xml configuration file. Navigate to the application root directory. Replace the content of the generated pom.xml file with the following text: <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.client</groupId> <artifactId>client-ssl-context</artifactId> <version>1.0-SNAPSHOT</version> <name>client-ssl-context</name> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> <repositories> <repository> <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> <repository> <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> </repositories> <dependencies> <dependency> 1 <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-client</artifactId> <version>2.0.0.Final-redhat-00001</version> </dependency> <dependency> 2 <groupId>org.wildfly.client</groupId> <artifactId>wildfly-client-config</artifactId> <version>1.0.1.Final-redhat-00001</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <mainClass>com.example.client.App</mainClass> </configuration> </plugin> </plugins> </build> </project> 1 Dependency for wildfly-elytron-client . 2 Dependency for wildfly-client-config . Delete the src/test directory. Verification In the application root directory, enter the following command: You get an output similar to the following: steps Creating a client that loads the default SSLContext 4.2.2. Creating a client that loads the default SSLContext Create a client for applications deployed to JBoss EAP that loads the SSLContext by using the SSLContext.getDefault() method. In this procedure, <application_home> refers to the directory that contains the pom.xml configuration file for the application. Prerequisites You have secured the applications deployed to JBoss EAP with two-way TLS. To do this, follow these procedures: Generate client certificates . Configure a trust store and a trust manager for client certificate Configure a server certificate for two-way SSL/TLS Configure SSL context to secure applications deployed onJBoss EAP with SSL/TLS You have created a Maven project. For more information, see Creating a Maven project for JBoss EAP client . JBoss EAP is running. Procedure Create a directory to store the Java files. USD mkdir -p <application_home> /src/main/java/com/example/client Navigate to the new directory. 
USD cd <application_home> /src/main/java/com/example/client Create the Java file App.java with the following content: package com.example.client; import java.io.IOException; import java.net.URI; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.net.http.HttpResponse.BodyHandlers; import java.security.NoSuchAlgorithmException; import java.security.Security; import java.util.Properties; import javax.net.ssl.SSLContext; import org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider; public class App { public static void main( String[] args ) { String url = "https://localhost:8443/"; 1 try { Security.insertProviderAt(new WildFlyElytronClientDefaultSSLContextProvider("src/wildfly-config-two-way-tls.xml"), 1); 2 HttpClient httpClient = HttpClient.newBuilder().sslContext(SSLContext.getDefault()).build(); HttpRequest request = HttpRequest.newBuilder() .uri(URI.create(url)) .GET() .build(); HttpResponse<Void> httpRresponse = httpClient.send(request, BodyHandlers.discarding()); String sslContext = SSLContext.getDefault().getProvider().getName(); 3 System.out.println ("\nSSL Default SSLContext is: " + sslContext); } catch (NoSuchAlgorithmException | IOException | InterruptedException e) { e.printStackTrace(); } System.exit(0); } } 1 Defines the JBoss EAP home page URL. 2 Register the security provider. 1 defines the priority for this provider. To statically register the provider, you can instead add the provider in the java.security file as: security.provider.1=org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider <PATH> / <TO> /wildfly-config-two-way-tls.xml 3 Obtain the default SSL context. Create the client configuration file called "wildfly-config-two-way-tls.xml" in the <application_home> /src directory. <?xml version="1.0" encoding="UTF-8"?> <configuration> <authentication-client xmlns="urn:elytron:client:1.7"> <key-stores> <key-store name="truststore" type="PKCS12"> <file name=" USD{path_to_client_truststore} /client.truststore.p12"/> <key-store-clear-password password="secret"/> </key-store> <key-store name="keystore" type="PKCS12"> <file name=" USD{path_to_client_keystore} /exampleclient.keystore.pkcs12"/> <key-store-clear-password password="secret"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name="client-context"> <trust-store key-store-name="truststore"/> <key-store-ssl-certificate key-store-name="keystore" alias="exampleclientkeystore"> <key-store-clear-password password="secret"/> </key-store-ssl-certificate> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context="client-context"/> </ssl-context-rules> </authentication-client> </configuration> Replace the following place holder values with actual paths: USD{path_to_client_truststore} USD{path_to_client_keystore} Verification Navigate to the <application_home> directory. Run the application. Example output | [
"Security.insertProviderAt(new WildFlyElytronClientDefaultSSLContextProvider(CONFIG_FILE_PATH), 1);",
"security.provider.1=org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider <CONFIG_FILE_PATH>",
"mvn archetype:generate -DgroupId=com.example.client -DartifactId=client-ssl-context -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DinteractiveMode=false",
"cd client-ssl-context",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\"> <modelVersion>4.0.0</modelVersion> <groupId>com.example.client</groupId> <artifactId>client-ssl-context</artifactId> <version>1.0-SNAPSHOT</version> <name>client-ssl-context</name> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> </properties> <repositories> <repository> <id>jboss-public-maven-repository</id> <name>JBoss Public Maven Repository</name> <url>https://repository.jboss.org/nexus/content/groups/public/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> <repository> <id>redhat-ga-maven-repository</id> <name>Red Hat GA Maven Repository</name> <url>https://maven.repository.redhat.com/ga/</url> <releases> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </releases> <snapshots> <enabled>true</enabled> <updatePolicy>never</updatePolicy> </snapshots> <layout>default</layout> </repository> </repositories> <dependencies> <dependency> 1 <groupId>org.wildfly.security</groupId> <artifactId>wildfly-elytron-client</artifactId> <version>2.0.0.Final-redhat-00001</version> </dependency> <dependency> 2 <groupId>org.wildfly.client</groupId> <artifactId>wildfly-client-config</artifactId> <version>1.0.1.Final-redhat-00001</version> </dependency> </dependencies> <build> <plugins> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.4.0</version> <configuration> <mainClass>com.example.client.App</mainClass> </configuration> </plugin> </plugins> </build> </project>",
"rm -rf src/test/",
"mvn install",
"[INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.682 s [INFO] Finished at: 2023-10-31T01:32:17+05:30 [INFO] ------------------------------------------------------------------------",
"mkdir -p <application_home> /src/main/java/com/example/client",
"cd <application_home> /src/main/java/com/example/client",
"package com.example.client; import java.io.IOException; import java.net.URI; import java.net.http.HttpClient; import java.net.http.HttpRequest; import java.net.http.HttpResponse; import java.net.http.HttpResponse.BodyHandlers; import java.security.NoSuchAlgorithmException; import java.security.Security; import java.util.Properties; import javax.net.ssl.SSLContext; import org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider; public class App { public static void main( String[] args ) { String url = \"https://localhost:8443/\"; 1 try { Security.insertProviderAt(new WildFlyElytronClientDefaultSSLContextProvider(\"src/wildfly-config-two-way-tls.xml\"), 1); 2 HttpClient httpClient = HttpClient.newBuilder().sslContext(SSLContext.getDefault()).build(); HttpRequest request = HttpRequest.newBuilder() .uri(URI.create(url)) .GET() .build(); HttpResponse<Void> httpRresponse = httpClient.send(request, BodyHandlers.discarding()); String sslContext = SSLContext.getDefault().getProvider().getName(); 3 System.out.println (\"\\nSSL Default SSLContext is: \" + sslContext); } catch (NoSuchAlgorithmException | IOException | InterruptedException e) { e.printStackTrace(); } System.exit(0); } }",
"<?xml version=\"1.0\" encoding=\"UTF-8\"?> <configuration> <authentication-client xmlns=\"urn:elytron:client:1.7\"> <key-stores> <key-store name=\"truststore\" type=\"PKCS12\"> <file name=\" USD{path_to_client_truststore} /client.truststore.p12\"/> <key-store-clear-password password=\"secret\"/> </key-store> <key-store name=\"keystore\" type=\"PKCS12\"> <file name=\" USD{path_to_client_keystore} /exampleclient.keystore.pkcs12\"/> <key-store-clear-password password=\"secret\"/> </key-store> </key-stores> <ssl-contexts> <ssl-context name=\"client-context\"> <trust-store key-store-name=\"truststore\"/> <key-store-ssl-certificate key-store-name=\"keystore\" alias=\"exampleclientkeystore\"> <key-store-clear-password password=\"secret\"/> </key-store-ssl-certificate> </ssl-context> </ssl-contexts> <ssl-context-rules> <rule use-ssl-context=\"client-context\"/> </ssl-context-rules> </authentication-client> </configuration>",
"mvn compile exec:java",
"INFO: ELY00001: WildFly Elytron version 2.0.0.Final-redhat-00001 SSL Default SSLContext is: WildFlyElytronClientDefaultSSLContextProvider"
] | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/configuring_ssltls_in_jboss_eap/using-elytron-client-default-ssl-context-security-provider-in-server-clients_default |
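The client configuration above refers to a PKCS12 trust store containing the server certificate. As a sketch only, the following keytool invocation shows one way such a trust store could be created; the certificate file name, alias, and password are assumptions that must match the actual two-way TLS setup.

# Import the server certificate into a new PKCS12 trust store (file name, alias, and password are assumptions)
keytool -importcert -keystore client.truststore.p12 -storetype PKCS12 \
    -storepass secret -alias exampleserver -file server.cer -noprompt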
Chapter 17. Configuring logging by using RHEL system roles | Chapter 17. Configuring logging by using RHEL system roles You can use the logging RHEL system role to configure your local and remote hosts as logging servers in an automated fashion to collect logs from many client systems. Logging solutions provide multiple ways of reading logs and multiple logging outputs. For example, a logging system can receive the following inputs: Local files systemd/journal Another logging system over the network In addition, a logging system can have the following outputs: Logs stored in the local files in the /var/log/ directory Logs sent to Elasticsearch engine Logs forwarded to another logging system With the logging RHEL system role, you can combine the inputs and outputs to fit your scenario. For example, you can configure a logging solution that stores inputs from journal in a local file, whereas inputs read from files are both forwarded to another logging system and stored in the local log files. 17.1. Filtering local log messages by using the logging RHEL system role You can use the property-based filter of the logging RHEL system role to filter your local log messages based on various conditions. As a result, you can achieve for example: Log clarity: In a high-traffic environment, logs can grow rapidly. The focus on specific messages, like errors, can help to identify problems faster. Optimized system performance: Excessive amount of logs is usually connected with system performance degradation. Selective logging for only the important events can prevent resource depletion, which enables your systems to run more efficiently. Enhanced security: Efficient filtering through security messages, like system errors and failed logins, helps to capture only the relevant logs. This is important for detecting breaches and meeting compliance standards. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: "!contains" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1] The settings specified in the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: files option supports storing logs in the local files, usually in the /var/log/ directory. The property: msg ; property: contains ; and property_value: error options specify that all logs that contain the error string are stored in the /var/log/errors.log file. The property: msg ; property: !contains ; and property_value: error options specify that all other logs are put in the /var/log/others.log file. 
You can replace the error value with the string by which you want to filter. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [files_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [files_output0, files_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On the managed node, test the syntax of the /etc/rsyslog.conf file: On the managed node, verify that the system sends messages that contain the error string to the log: Send a test message: View the /var/log/errors.log log, for example: Where hostname is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) man pages on your system 17.2. Applying a remote logging solution by using the logging RHEL system role You can use the logging RHEL system role to configure a remote logging solution, where one or more clients take logs from the systemd-journal service and forward them to a remote server. The server receives remote input from the remote_rsyslog and remote_files configurations, and outputs the logs to local files in directories named by remote host names. As a result, you can cover use cases where you need for example: Centralized log management: Collecting, accessing, and managing log messages of multiple machines from a single storage point simplifies day-to-day monitoring and troubleshooting tasks. Also, this use case reduces the need to log into individual machines to check the log messages. Enhanced security: Storing log messages in one central place increases chances they are in a secure and tamper-proof environment. Such an environment makes it easier to detect and respond to security incidents more effectively and to meet audit requirements. Improved efficiency in log analysis: Correlating log messages from multiple systems is important for fast troubleshooting of complex problems that span multiple machines or services. That way you can quickly analyze and cross-reference events from different sources. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Define the ports in the SELinux policy of the server or client system and open the firewall for those ports. The default SELinux policy includes ports 601, 514, 6514, 10514, and 20514. To use a different port, see modify the SELinux policy on the client and server systems . 
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1] The settings specified in the first play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: remote option covers remote inputs from the other logging system over the network. The udp_ports: [ 601 ] option defines a list of UDP port numbers to monitor. The tcp_ports: [ 601 ] option defines a list of TCP port numbers to monitor. If both udp_ports and tcp_ports is set, udp_ports is used and tcp_ports is dropped. logging_outputs Defines a list of logging output dictionaries. The type: remote_files option makes output store logs to the local files per remote host and program name originated the logs. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [remote_udp_input, remote_tcp_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [remote_files_output] option specifies a list of outputs, to which the logs are sent. The settings specified in the second play of the example playbook include the following: logging_inputs Defines a list of logging input dictionaries. The type: basics option covers inputs from systemd journal or Unix socket. logging_outputs Defines a list of logging output dictionaries. The type: forwards option supports sending logs to the remote logging server over the network. The severity: info option refers to log messages of the informative importance. The facility: mail option refers to the type of system program that is generating the log message. The target: <host1.example.com> option specifies the hostname of the remote logging server. The udp_port: 601 / tcp_port: 601 options define the UDP/TCP ports on which the remote logging server listens. logging_flows Defines a list of logging flow dictionaries to specify relationships between logging_inputs and logging_outputs . The inputs: [basic_input] option specifies a list of inputs, from which processing of logs starts. The outputs: [forward_output0, forward_output1] option specifies a list of outputs, to which the logs are sent. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. 
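To picture what the second play asks rsyslog to do, the two forwards outputs roughly correspond to classic rsyslog forwarding rules, where a single @ denotes UDP and @@ denotes TCP. This is only an illustration of the intent; the role generates its own, more elaborate configuration:
# informational and higher messages over UDP, mail facility messages over TCP
*.info @host1.example.com:601
mail.* @@host1.example.com:601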
Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification On both the client and the server system, test the syntax of the /etc/rsyslog.conf file: Verify that the client system sends messages to the server: On the client system, send a test message: On the server system, view the /var/log/ <host2.example.com> /messages log, for example: Where <host2.example.com> is the host name of the client system. Note that the log contains the user name of the user that entered the logger command, in this case root . Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 17.3. Using the logging RHEL system role with TLS Transport Layer Security (TLS) is a cryptographic protocol designed to allow secure communication over the computer network. You can use the logging RHEL system role to configure a secure transfer of log messages, where one or more clients take logs from the systemd-journal service and transfer them to a remote server while using TLS. Typically, TLS for transferring logs in a remote logging solution is used when sending sensitive data over less trusted or public networks, such as the Internet. Also, by using certificates in TLS you can ensure that the client is forwarding logs to the correct and trusted server. This prevents attacks like "man-in-the-middle". 17.3.1. Configuring client logging with TLS You can use the logging RHEL system role to configure logging on RHEL clients and transfer logs to a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the clients group in the Ansible inventory. The TLS protocol encrypts the message transmission for secure transfer of logs over the network. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically when the logging_certificates variable is set. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced .
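If you would rather target a group of client systems than a single host, keep the clients in an inventory group and point hosts: at that group. A minimal INI-style inventory sketch with hypothetical host names:
# inventory: client systems that forward logs over TLS
[clients]
client01.example.com
client02.example.com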
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory /usr/share/ansible/roles/rhel-system-roles.certificate/README.md file /usr/share/doc/rhel-system-roles/certificate/ directory Requesting certificates using RHEL system roles . 
rsyslog.conf(5) and syslog(3) manual pages 17.3.2. Configuring server logging with TLS You can use the logging RHEL system role to configure logging on RHEL servers and set them to receive logs from a remote logging system using TLS encryption. This procedure creates a private key and a certificate. Next, it configures TLS on all hosts in the server group in the Ansible inventory. Note You do not have to call the certificate RHEL system role in the playbook to create the certificate. The logging RHEL system role calls it automatically. In order for the CA to be able to sign the created certificate, the managed nodes must be enrolled in an IdM domain. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The managed nodes are enrolled in an IdM domain. If the logging server you want to configure on the managed node runs RHEL 9.2 or later and the FIPS mode is enabled, clients must either support the Extended Master Secret (EMS) extension or use TLS 1.3. TLS 1.2 connections without EMS fail. For more information, see the Red Hat Knowledgebase solution TLS extension "Extended Master Secret" enforced . Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name] The settings specified in the example playbook include the following: logging_certificates The value of this parameter is passed on to certificate_requests in the certificate RHEL system role and used to create a private key and certificate. logging_pki_files Using this parameter, you can configure the paths and other settings that logging uses to find the CA, certificate, and key files used for TLS, specified with one or more of the following sub-parameters: ca_cert , ca_cert_src , cert , cert_src , private_key , private_key_src , and tls . Note If you are using logging_certificates to create the files on the managed node, do not use ca_cert_src , cert_src , and private_key_src , which are used to copy files not created by logging_certificates . ca_cert Represents the path to the CA certificate file on the managed node. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate file on the managed node. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to the private key file on the managed node. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user.
ca_cert_src Represents the path to the CA certificate file on the control node which is copied to the target host to the location specified by ca_cert . Do not use this if using logging_certificates . cert_src Represents the path to a certificate file on the control node which is copied to the target host to the location specified by cert . Do not use this if using logging_certificates . private_key_src Represents the path to a private key file on the control node which is copied to the target host to the location specified by private_key . Do not use this if using logging_certificates . tls Setting this parameter to true ensures secure transfer of logs over the network. If you do not want a secure wrapper, you can set tls: false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory Requesting certificates using RHEL system roles . rsyslog.conf(5) and syslog(3) manual pages 17.4. Using the logging RHEL system roles with RELP Reliable Event Logging Protocol (RELP) is a networking protocol for data and message logging over the TCP network. It ensures reliable delivery of event messages and you can use it in environments that do not tolerate any message loss. The RELP sender transfers log entries in the form of commands and the receiver acknowledges them once they are processed. To ensure consistency, RELP stores the transaction number to each transferred command for any kind of message recovery. You can consider a remote logging system in between the RELP Client and RELP Server. The RELP Client transfers the logs to the remote logging system and the RELP Server receives all the logs sent by the remote logging system. To achieve that use case, you can use the logging RHEL system role to configure the logging system to reliably send and receive log entries. 17.4.1. Configuring client logging with RELP You can use the logging RHEL system role to configure a transfer of log messages stored locally to the remote logging system with RELP. This procedure configures RELP on all hosts in the clients group in the Ansible inventory. The RELP configuration uses Transport Layer Security (TLS) to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. 
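The relp input and output types rely on the rsyslog RELP module. The role installs the packages it needs, but if you want to confirm this manually on a managed node, a quick check could look like this:
# confirm that the rsyslog RELP module package is installed
rpm -q rsyslog-relp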
Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client] The settings specified in the example playbook include the following: target This is a required parameter that specifies the host name where the remote logging system is running. port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_servers List of servers that will be allowed by the logging client to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages 17.4.2. 
Configuring server logging with RELP You can use the logging RHEL system role to configure a server for receiving log messages from the remote logging system with RELP. This procedure configures RELP on all hosts in the server group in the Ansible inventory. The RELP configuration uses TLS to encrypt the message transmission for secure transfer of logs over the network. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output The settings specified in the example playbook include the following: port Port number the remote logging system is listening. tls Ensures secure transfer of logs over the network. If you do not want a secure wrapper you can set the tls variable to false . By default tls parameter is set to true while working with RELP and requires key/certificates and triplets { ca_cert , cert , private_key } and/or { ca_cert_src , cert_src , private_key_src }. If the { ca_cert_src , cert_src , private_key_src } triplet is set, the default locations /etc/pki/tls/certs and /etc/pki/tls/private are used as the destination on the managed node to transfer files from control node. In this case, the file names are identical to the original ones in the triplet If the { ca_cert , cert , private_key } triplet is set, files are expected to be on the default path before the logging configuration. If both triplets are set, files are transferred from local path from control node to specific path of the managed node. ca_cert Represents the path to CA certificate. Default path is /etc/pki/tls/certs/ca.pem and the file name is set by the user. cert Represents the path to the certificate. Default path is /etc/pki/tls/certs/server-cert.pem and the file name is set by the user. private_key Represents the path to private key. Default path is /etc/pki/tls/private/server-key.pem and the file name is set by the user. ca_cert_src Represents local CA certificate file path which is copied to the managed node. If ca_cert is specified, it is copied to the location. cert_src Represents the local certificate file path which is copied to the managed node. If cert is specified, it is copied to the location. private_key_src Represents the local key file path which is copied to the managed node. If private_key is specified, it is copied to the location. pki_authmode Accepts the authentication mode as name or fingerprint . permitted_clients List of clients that will be allowed by the logging server to connect and send logs over TLS. inputs List of logging input dictionary. outputs List of logging output dictionary. 
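Once you have run the playbook (see the steps below), you can confirm on the server that rsyslog listens on the RELP port used in the example; adjust the port if you changed it:
# check that a listener is bound to the RELP port 20514
ss -tlnp | grep 20514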
For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.logging/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.logging/README.md file /usr/share/doc/rhel-system-roles/logging/ directory rsyslog.conf(5) and syslog(3) manual pages | [
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. Bye.",
"logger error",
"cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error",
"--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.",
"logger test",
"cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure client-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/automating_system_administration_by_using_rhel_system_roles/configuring-logging-by-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
Chapter 12. Configuring TLS security profiles | Chapter 12. Configuring TLS security profiles TLS security profiles provide a way for servers to regulate which ciphers a client can use when connecting to the server. This ensures that OpenShift Container Platform components use cryptographic libraries that do not allow known insecure protocols, ciphers, or algorithms. Cluster administrators can choose which TLS security profile to use for each of the following components: the Ingress Controller the control plane This includes the Kubernetes API server, Kubernetes controller manager, Kubernetes scheduler, OpenShift API server, OpenShift OAuth API server, OpenShift OAuth server, etcd, the Machine Config Operator, and the Machine Config Server. the kubelet, when it acts as an HTTP server for the Kubernetes API server 12.1. Understanding TLS security profiles You can use a TLS (Transport Layer Security) security profile to define which TLS ciphers are required by various OpenShift Container Platform components. The OpenShift Container Platform TLS security profiles are based on Mozilla recommended configurations . You can specify one of the following TLS security profiles for each component: Table 12.1. TLS security profiles Profile Description Old This profile is intended for use with legacy clients or libraries. The profile is based on the Old backward compatibility recommended configuration. The Old profile requires a minimum TLS version of 1.0. Note For the Ingress Controller, the minimum TLS version is converted from 1.0 to 1.1. Intermediate This profile is the recommended configuration for the majority of clients. It is the default TLS security profile for the Ingress Controller, kubelet, and control plane. The profile is based on the Intermediate compatibility recommended configuration. The Intermediate profile requires a minimum TLS version of 1.2. Modern This profile is intended for use with modern clients that have no need for backwards compatibility. This profile is based on the Modern compatibility recommended configuration. The Modern profile requires a minimum TLS version of 1.3. Custom This profile allows you to define the TLS version and ciphers to use. Warning Use caution when using a Custom profile, because invalid configurations can cause problems. Note When using one of the predefined profile types, the effective profile configuration is subject to change between releases. For example, given a specification to use the Intermediate profile deployed on release X.Y.Z, an upgrade to release X.Y.Z+1 might cause a new profile configuration to be applied, resulting in a rollout. 12.2. Viewing TLS security profile details You can view the minimum TLS version and ciphers for the predefined TLS security profiles for each of the following components: Ingress Controller, control plane, and kubelet. Important The effective configuration of minimum TLS version and list of ciphers for a profile might differ between components. Procedure View details for a specific TLS security profile: USD oc explain <component>.spec.tlsSecurityProfile.<profile> 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For <profile> , specify old , intermediate , or custom . 
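In addition to inspecting the profile definitions, you can query which profile a component currently has configured. For the control plane, for example, the following command prints the tlsSecurityProfile stanza, or nothing if no profile has been set explicitly:
# show the TLS security profile currently set on the API server resource
oc get apiserver cluster -o jsonpath='{.spec.tlsSecurityProfile}'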
For example, to check the ciphers included for the intermediate profile for the control plane: USD oc explain apiserver.spec.tlsSecurityProfile.intermediate Example output KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2 View all details for the tlsSecurityProfile field of a component: USD oc explain <component>.spec.tlsSecurityProfile 1 1 For <component> , specify ingresscontroller , apiserver , or kubeletconfig . For example, to check all details for the tlsSecurityProfile field for the Ingress Controller: USD oc explain ingresscontroller.spec.tlsSecurityProfile Example output KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: ... FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string> ... 1 Lists ciphers and minimum version for the intermediate profile here. 2 Lists ciphers and minimum version for the modern profile here. 3 Lists ciphers and minimum version for the old profile here. 12.3. Configuring the TLS security profile for the Ingress Controller To configure a TLS security profile for an Ingress Controller, edit the IngressController custom resource (CR) to specify a predefined or custom TLS security profile. If a TLS security profile is not configured, the default value is based on the TLS security profile set for the API server. Sample IngressController CR that configures the Old TLS security profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers for TLS connections for Ingress Controllers. You can see the ciphers and the minimum TLS version of the configured TLS security profile in the IngressController custom resource (CR) under Status.Tls Profile and the configured TLS security profile under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed under both parameters. Note The HAProxy Ingress Controller image supports TLS 1.3 and the Modern profile. The Ingress Operator also converts the TLS 1.0 of an Old or Custom profile to 1.1 . 
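To confirm from outside the cluster that an endpoint rejects protocol versions below the configured minimum, you can probe it with openssl. The host name below is a placeholder for one of your ingress routes; with the default Intermediate profile, a TLS 1.1 handshake such as this one is expected to fail:
# attempt a TLS 1.1 handshake; a minimum of TLS 1.2 rejects it
openssl s_client -connect router.apps.example.com:443 -tls1_1 < /dev/null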
Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the IngressController CR in the openshift-ingress-operator project to configure the TLS security profile: USD oc edit IngressController default -n openshift-ingress-operator Add the spec.tlsSecurityProfile field: Sample IngressController CR for a Custom profile apiVersion: operator.openshift.io/v1 kind: IngressController ... spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 ... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the profile is set in the IngressController CR: USD oc describe IngressController default -n openshift-ingress-operator Example output Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController ... Spec: ... Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... 12.4. Configuring the TLS security profile for the control plane To configure a TLS security profile for the control plane, edit the APIServer custom resource (CR) to specify a predefined or custom TLS security profile. Setting the TLS security profile in the APIServer CR propagates the setting to the following control plane components: Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift OAuth API server OpenShift OAuth server etcd Machine Config Operator Machine Config Server If a TLS security profile is not configured, the default TLS security profile is Intermediate . Note The default TLS security profile for the Ingress Controller is based on the TLS security profile set for the API server. Sample APIServer CR that configures the Old TLS security profile apiVersion: config.openshift.io/v1 kind: APIServer ... spec: tlsSecurityProfile: old: {} type: Old ... The TLS security profile defines the minimum TLS version and the TLS ciphers required to communicate with the control plane components. You can see the configured TLS security profile in the APIServer custom resource (CR) under Spec.Tls Security Profile . For the Custom TLS security profile, the specific ciphers and minimum TLS version are listed. Note The control plane does not support TLS 1.3 as the minimum TLS version; the Modern profile is not supported because it requires TLS 1.3 . Prerequisites You have access to the cluster as a user with the cluster-admin role. Procedure Edit the default APIServer CR to configure the TLS security profile: USD oc edit APIServer cluster Add the spec.tlsSecurityProfile field: Sample APIServer CR for a Custom profile apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). 
The default is Intermediate . 2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. Save the file to apply the changes. Verification Verify that the TLS security profile is set in the APIServer CR: USD oc describe apiserver cluster Example output Name: cluster Namespace: ... API Version: config.openshift.io/v1 Kind: APIServer ... Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom ... Verify that the TLS security profile is set in the etcd CR: USD oc describe etcd cluster Example output Name: cluster Namespace: ... API Version: operator.openshift.io/v1 Kind: Etcd ... Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12 ... Verify that the TLS security profile is set in the Machine Config Server pod: USD oc logs machine-config-server-5msdv -n openshift-machine-config-operator Example output # ... I0905 13:48:36.968688 1 start.go:51] Launching server with tls min version: VersionTLS12 & cipher suites [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256] # ... 12.5. Configuring the TLS security profile for the kubelet To configure a TLS security profile for the kubelet when it is acting as an HTTP server, create a KubeletConfig custom resource (CR) to specify a predefined or custom TLS security profile for specific nodes. If a TLS security profile is not configured, the default TLS security profile is Intermediate . The kubelet uses its HTTP/GRPC server to communicate with the Kubernetes API server, which sends commands to pods, gathers logs, and run exec commands on pods through the kubelet. Sample KubeletConfig CR that configures the Old TLS security profile on worker nodes apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig # ... spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" # ... You can see the ciphers and the minimum TLS version of the configured TLS security profile in the kubelet.conf file on a configured node. Prerequisites You are logged in to OpenShift Container Platform as a user with the cluster-admin role. Procedure Create a KubeletConfig CR to configure the TLS security profile: Sample KubeletConfig CR for a Custom profile apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: "" 4 #... 1 Specify the TLS security profile type ( Old , Intermediate , or Custom ). The default is Intermediate . 
2 Specify the appropriate field for the selected type: old: {} intermediate: {} custom: 3 For the custom type, specify a list of TLS ciphers and minimum accepted TLS version. 4 Optional: Specify the machine config pool label for the nodes you want to apply the TLS security profile. Create the KubeletConfig object: USD oc create -f <filename> Depending on the number of worker nodes in the cluster, wait for the configured nodes to be rebooted one by one. Verification To verify that the profile is set, perform the following steps after the nodes are in the Ready state: Start a debug session for a configured node: USD oc debug node/<node_name> Set /host as the root directory within the debug shell: sh-4.4# chroot /host View the kubelet.conf file: sh-4.4# cat /etc/kubernetes/kubelet.conf Example output "kind": "KubeletConfiguration", "apiVersion": "kubelet.config.k8s.io/v1beta1", #... "tlsCipherSuites": [ "TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256", "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384", "TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256", "TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256" ], "tlsMinVersion": "VersionTLS12", #... | [
"oc explain <component>.spec.tlsSecurityProfile.<profile> 1",
"oc explain apiserver.spec.tlsSecurityProfile.intermediate",
"KIND: APIServer VERSION: config.openshift.io/v1 DESCRIPTION: intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ciphers: - TLS_AES_128_GCM_SHA256 - TLS_AES_256_GCM_SHA384 - TLS_CHACHA20_POLY1305_SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES256-GCM-SHA384 - ECDHE-RSA-AES256-GCM-SHA384 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - DHE-RSA-AES128-GCM-SHA256 - DHE-RSA-AES256-GCM-SHA384 minTLSVersion: TLSv1.2",
"oc explain <component>.spec.tlsSecurityProfile 1",
"oc explain ingresscontroller.spec.tlsSecurityProfile",
"KIND: IngressController VERSION: operator.openshift.io/v1 RESOURCE: tlsSecurityProfile <Object> DESCRIPTION: FIELDS: custom <> custom is a user-defined TLS security profile. Be extremely careful using a custom profile as invalid configurations can be catastrophic. An example custom profile looks like this: ciphers: - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: TLSv1.1 intermediate <> intermediate is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28recommended.29 and looks like this (yaml): ... 1 modern <> modern is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility and looks like this (yaml): ... 2 NOTE: Currently unsupported. old <> old is a TLS security profile based on: https://wiki.mozilla.org/Security/Server_Side_TLS#Old_backward_compatibility and looks like this (yaml): ... 3 type <string>",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old",
"oc edit IngressController default -n openshift-ingress-operator",
"apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe IngressController default -n openshift-ingress-operator",
"Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"apiVersion: config.openshift.io/v1 kind: APIServer spec: tlsSecurityProfile: old: {} type: Old",
"oc edit APIServer cluster",
"apiVersion: config.openshift.io/v1 kind: APIServer metadata: name: cluster spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11",
"oc describe apiserver cluster",
"Name: cluster Namespace: API Version: config.openshift.io/v1 Kind: APIServer Spec: Audit: Profile: Default Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom",
"oc describe etcd cluster",
"Name: cluster Namespace: API Version: operator.openshift.io/v1 Kind: Etcd Spec: Log Level: Normal Management State: Managed Observed Config: Serving Info: Cipher Suites: TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 Min TLS Version: VersionTLS12",
"oc logs machine-config-server-5msdv -n openshift-machine-config-operator",
"I0905 13:48:36.968688 1 start.go:51] Launching server with tls min version: VersionTLS12 & cipher suites [TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256 TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256]",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #",
"oc create -f <filename>",
"oc debug node/<node_name>",
"sh-4.4# chroot /host",
"sh-4.4# cat /etc/kubernetes/kubelet.conf",
"\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/security_and_compliance/tls-security-profiles |
Chapter 8. Managing Cluster Resources | Chapter 8. Managing Cluster Resources This chapter describes various commands you can use to manage cluster resources. It provides information on the following procedures. Section 8.1, "Manually Moving Resources Around the Cluster" Section 8.2, "Moving Resources Due to Failure" Section 8.4, "Enabling, Disabling, and Banning Cluster Resources" Section 8.5, "Disabling a Monitor Operation" 8.1. Manually Moving Resources Around the Cluster You can override the cluster and force resources to move from their current location. There are two occasions when you would want to do this: When a node is under maintenance, and you need to move all resources running on that node to a different node When individually specified resources need to be moved To move all resources running on a node to a different node, you put the node in standby mode. For information on putting a cluster node in standby mode, see Section 4.4.5, "Standby Mode" . You can move individually specified resources in either of the following ways. You can use the pcs resource move command to move a resource off a node on which it is currently running, as described in Section 8.1.1, "Moving a Resource from its Current Node" . You can use the pcs resource relocate run command to move a resource to its preferred node, as determined by current cluster status, constraints, location of resources and other settings. For information on this command, see Section 8.1.2, "Moving a Resource to its Preferred Node" . 8.1.1. Moving a Resource from its Current Node To move a resource off the node on which it is currently running, use the following command, specifying the resource_id of the resource as defined. Specify the destination_node if you want to indicate on which node to run the resource that you are moving. Note When you execute the pcs resource move command, this adds a constraint to the resource to prevent it from running on the node on which it is currently running. You can execute the pcs resource clear or the pcs constraint delete command to remove the constraint. This does not necessarily move the resources back to the original node; where the resources can run at that point depends on how you have configured your resources initially. If you specify the --master parameter of the pcs resource move command, the scope of the constraint is limited to the master role and you must specify master_id rather than resource_id . You can optionally configure a lifetime parameter for the pcs resource move command to indicate a period of time the constraint should remain. You specify the units of a lifetime parameter according to the format defined in ISO 8601, which requires that you specify the unit as a capital letter such as Y (for years), M (for months), W (for weeks), D (for days), H (for hours), M (for minutes), and S (for seconds). To distinguish a unit of minutes (M) from a unit of months (M), you must specify PT before indicating the value in minutes. For example, a lifetime parameter of 5M indicates an interval of five months, while a lifetime parameter of PT5M indicates an interval of five minutes. The lifetime parameter is checked at intervals defined by the cluster-recheck-interval cluster property. By default, this value is 15 minutes. If your configuration requires that you check this parameter more frequently, you can reset this value with the following command.
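The command takes the new interval as its value. As an illustration, a ten-minute interval could be set like this:
# check lifetime constraints every 10 minutes instead of the default 15
pcs property set cluster-recheck-interval=10min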
You can optionally configure a --wait[= n ] parameter for the pcs resource move command to indicate the number of seconds to wait for the resource to start on the destination node before returning 0 if the resource is started or 1 if the resource has not yet started. If you do not specify n, the default resource timeout will be used. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for one hour and thirty minutes. The following command moves the resource resource1 to node example-node2 and prevents it from moving back to the node on which it was originally running for thirty minutes. For information on resource constraints, see Chapter 7, Resource Constraints . 8.1.2. Moving a Resource to its Preferred Node After a resource has moved, either due to a failover or to an administrator manually moving the node, it will not necessarily move back to its original node even after the circumstances that caused the failover have been corrected. To relocate resources to their preferred node, use the following command. A preferred node is determined by the current cluster status, constraints, resource location, and other settings and may change over time. If you do not specify any resources, all resource are relocated to their preferred nodes. This command calculates the preferred node for each resource while ignoring resource stickiness. After calculating the preferred node, it creates location constraints which will cause the resources to move to their preferred nodes. Once the resources have been moved, the constraints are deleted automatically. To remove all constraints created by the pcs resource relocate run command, you can enter the pcs resource relocate clear command. To display the current status of resources and their optimal node ignoring resource stickiness, enter the pcs resource relocate show command. | [
"pcs resource move resource_id [ destination_node ] [--master] [lifetime= lifetime ]",
"pcs property set cluster-recheck-interval= value",
"pcs resource move resource1 example-node2 lifetime=PT1H30M",
"pcs resource move resource1 example-node2 lifetime=PT30M",
"pcs resource relocate run [ resource1 ] [ resource2 ]"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-manageresource-haar |
Chapter 53. module | Chapter 53. module This chapter describes the commands under the module command. 53.1. module list List module versions Usage: Table 53.1. Optional Arguments Value Summary -h, --help Show this help message and exit --all Show all modules that have version information Table 53.2. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 53.3. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 53.4. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 53.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. | [
"openstack module list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--all]"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/module |
Chapter 7. KafkaListenerAuthenticationTls schema reference | Chapter 7. KafkaListenerAuthenticationTls schema reference Used in: GenericKafkaListener The type property is a discriminator that distinguishes use of the KafkaListenerAuthenticationTls type from KafkaListenerAuthenticationScramSha512 , KafkaListenerAuthenticationOAuth , KafkaListenerAuthenticationCustom . It must have the value tls for the type KafkaListenerAuthenticationTls . Property Property type Description type string Must be tls . | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaListenerAuthenticationTls-reference |
Chapter 4. Viewing change events | Chapter 4. Viewing change events After you deploy the Debezium MySQL connector, it starts capturing changes to the inventory database. When the connector starts, it writes events to a set of Apache Kafka topics, each of which represents one of the tables in the MySQL database. The name of each topic begins with the name of the database server, dbserver1 . The connector writes to the following Kafka topics: dbserver1 The schema change topic to which DDL statements that apply to the tables for which changes are being captured are written. dbserver1.inventory.products Receives change event records for the products table in the inventory database. dbserver1.inventory.products_on_hand Receives change event records for the products_on_hand table in the inventory database. dbserver1.inventory.customers Receives change event records for the customers table in the inventory database. dbserver1.inventory.orders Receives change event records for the orders table in the inventory database. The remainder of this tutorial examines the dbserver1.inventory.customers Kafka topic. As you look more closely at the topic, you'll see how it represents different types of change events, and find information about how the connector captured each event. The tutorial contains the following sections: Viewing a create event Updating the database and viewing the update event Deleting a record in the database and viewing the delete event Restarting Kafka Connect and changing the database 4.1. Viewing a create event By viewing the dbserver1.inventory.customers topic, you can see how the MySQL connector captured create events in the inventory database. In this case, the create events capture new customers being added to the database. Procedure Open a new terminal and use kafka-console-consumer to consume the dbserver1.inventory.customers topic from the beginning of the topic. This command runs a simple consumer ( kafka-console-consumer.sh ) in the Pod that is running Kafka ( my-cluster-kafka-0 ): USD oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh \ --bootstrap-server localhost:9092 \ --from-beginning \ --property print.key=true \ --topic dbserver1.inventory.customers The consumer returns four messages (in JSON format), one for each row in the customers table. Each message contains the event records for the corresponding table row. There are two JSON documents for each event: a key and a value . The key corresponds to the row's primary key, and the value shows the details of the row (the fields that the row contains, the value of each field, and the type of operation that was performed on the row). For the last event, review the details of the key . Here are the details of the key of the last event (formatted for readability): { "schema":{ "type":"struct", "fields":[ { "type":"int32", "optional":false, "field":"id" } ], "optional":false, "name":"dbserver1.inventory.customers.Key" }, "payload":{ "id":1004 } } The event has two parts: a schema and a payload . The schema contains a Kafka Connect schema describing what is in the payload. In this case, the payload is a struct named dbserver1.inventory.customers.Key that is not optional and has one required field ( id of type int32 ). The payload has a single id field, with a value of 1004 . By reviewing the key of the event, you can see that this event applies to the row in the inventory.customers table whose id primary key column had a value of 1004 . Review the details of the same event's value .
The event's value shows that the row was created, and describes what it contains (in this case, the id , first_name , last_name , and email of the inserted row). Here are the details of the value of the last event (formatted for readability): { "schema": { "type": "struct", "fields": [ { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "dbserver1.inventory.customers.Value", "field": "before" }, { "type": "struct", "fields": [ { "type": "int32", "optional": false, "field": "id" }, { "type": "string", "optional": false, "field": "first_name" }, { "type": "string", "optional": false, "field": "last_name" }, { "type": "string", "optional": false, "field": "email" } ], "optional": true, "name": "dbserver1.inventory.customers.Value", "field": "after" }, { "type": "struct", "fields": [ { "type": "string", "optional": true, "field": "version" }, { "type": "string", "optional": false, "field": "name" }, { "type": "int64", "optional": false, "field": "server_id" }, { "type": "int64", "optional": false, "field": "ts_sec" }, { "type": "string", "optional": true, "field": "gtid" }, { "type": "string", "optional": false, "field": "file" }, { "type": "int64", "optional": false, "field": "pos" }, { "type": "int32", "optional": false, "field": "row" }, { "type": "boolean", "optional": true, "field": "snapshot" }, { "type": "int64", "optional": true, "field": "thread" }, { "type": "string", "optional": true, "field": "db" }, { "type": "string", "optional": true, "field": "table" } ], "optional": false, "name": "io.debezium.connector.mysql.Source", "field": "source" }, { "type": "string", "optional": false, "field": "op" }, { "type": "int64", "optional": true, "field": "ts_ms" }, { "type": "int64", "optional": true, "field": "ts_us" }, { "type": "int64", "optional": true, "field": "ts_ns" } ], "optional": false, "name": "dbserver1.inventory.customers.Envelope", "version": 1 }, "payload": { "before": null, "after": { "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { "version": "2.7.3.Final", "name": "dbserver1", "server_id": 0, "ts_sec": 0, "gtid": null, "file": "mysql-bin.000003", "pos": 154, "row": 0, "snapshot": true, "thread": null, "db": "inventory", "table": "customers" }, "op": "r", "ts_ms": 1486500577691, "ts_us": 1486500577691547, "ts_ns": 1486500577691547930 } } This portion of the event is much longer, but like the event's key , it also has a schema and a payload . The schema contains a Kafka Connect schema named dbserver1.inventory.customers.Envelope (version 1) that can contain five fields: op A required field that contains a string value describing the type of operation. Values for the MySQL connector are c for create (or insert), u for update, d for delete, and r for read (in the case of a snapshot). before An optional field that, if present, contains the state of the row before the event occurred. The structure will be described by the dbserver1.inventory.customers.Value Kafka Connect schema, which the dbserver1 connector uses for all rows in the inventory.customers table. after An optional field that, if present, contains the state of the row after the event occurred. The structure is described by the same dbserver1.inventory.customers.Value Kafka Connect schema used in before . 
source A required field that contains a structure describing the source metadata for the event, which in the case of MySQL, contains several fields: the connector name, the name of the binlog file where the event was recorded, the position in that binlog file where the event appeared, the row within the event (if there is more than one), the names of the affected database and table, the MySQL thread ID that made the change, whether this event was part of a snapshot, and, if available, the MySQL server ID, and the timestamp in seconds. ts_ms An optional field that, if present, contains the time (using the system clock in the JVM running the Kafka Connect task) at which the connector processed the event. Note The JSON representations of the events are much longer than the rows they describe. This is because, with every event key and value, Kafka Connect ships the schema that describes the payload . Over time, this structure may change. However, having the schemas for the key and the value in the event itself makes it much easier for consuming applications to understand the messages, especially as they evolve over time. The Debezium MySQL connector constructs these schemas based upon the structure of the database tables. If you use DDL statements to alter the table definitions in the MySQL databases, the connector reads these DDL statements and updates its Kafka Connect schemas. This is the only way that each event is structured exactly like the table from where it originated at the time the event occurred. However, the Kafka topic containing all of the events for a single table might have events that correspond to each state of the table definition. The JSON converter includes the key and value schemas in every message, so it does produce very verbose events. Compare the event's key and value schemas to the state of the inventory database. In the terminal that is running the MySQL command line client, run the following statement: mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec) This shows that the event records you reviewed match the records in the database. 4.2. Updating the database and viewing the update event Now that you have seen how the Debezium MySQL connector captured the create events in the inventory database, you will now change one of the records and see how the connector captures it. By completing this procedure, you will learn how to find details about what changed in a database commit, and how you can compare change events to determine when the change occurred in relation to other changes. 
Procedure In the terminal that is running the MySQL command line client, run the following statement: mysql> UPDATE customers SET first_name='Anne Marie' WHERE id=1004; Query OK, 1 row affected (0.05 sec) Rows matched: 1 Changed: 1 Warnings: 0 View the updated customers table: mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne Marie | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec) Switch to the terminal running kafka-console-consumer to see a new fifth event. By changing a record in the customers table, the Debezium MySQL connector generated a new event. You should see two new JSON documents: one for the event's key , and one for the new event's value . Here are the details of the key for the update event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key" "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } This key is the same as the key for the events. Here is that new event's value . There are no changes in the schema section, so only the payload section is shown (formatted for readability): { "schema": {...}, "payload": { "before": { 1 "id": 1004, "first_name": "Anne", "last_name": "Kretchmar", "email": "[email protected]" }, "after": { 2 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "[email protected]" }, "source": { 3 "name": "2.7.3.Final", "name": "dbserver1", "server_id": 223344, "ts_sec": 1486501486, "gtid": null, "file": "mysql-bin.000003", "pos": 364, "row": 0, "snapshot": null, "thread": 3, "db": "inventory", "table": "customers" }, "op": "u", 4 "ts_ms": 1486501486308, 5 "ts_us": 1486501486308910, 6 "ts_ns": 1486501486308910814 7 } } Table 4.1. Descriptions of fields in the payload of an update event value Item Description 1 The before field shows the values present in the row before the database commit. The original first_name value is Anne . 2 The after field shows the state of the row after the change event. The first_name value is now Anne Marie . 3 The source field structure has many of the same values as before, except that the ts_sec and pos fields have changed (the file might have changed in other circumstances). 4 The op field value is now u , signifying that this row changed because of an update. 5 The ts_ms , ts_us , ts_ns field shows a timestamp that indicates when Debezium processed this event. By viewing the payload section, you can learn several important things about the update event: By comparing the before and after structures, you can determine what actually changed in the affected row because of the commit. By reviewing the source structure, you can find information about MySQL's record of the change (providing traceability). By comparing the payload section of an event to other events in the same topic (or a different topic), you can determine whether the event occurred before, after, or as part of the same MySQL commit as another event. 4.3. 
Deleting a record in the database and viewing the delete event Now that you have seen how the Debezium MySQL connector captured the create and update events in the inventory database, you will now delete one of the records and see how the connector captures it. By completing this procedure, you will learn how to find details about delete events, and how Kafka uses log compaction to reduce the number of delete events while still enabling consumers to get all of the events. Procedure In the terminal that is running the MySQL command line client, run the following statement: mysql> DELETE FROM customers WHERE id=1004; Query OK, 1 row affected (0.00 sec) Note If the above command fails with a foreign key constraint violation, then you must remove the reference of the customer address from the addresses table using the following statement: mysql> DELETE FROM addresses WHERE customer_id=1004; Switch to the terminal running kafka-console-consumer to see two new events. By deleting a row in the customers table, the Debezium MySQL connector generated two new events. Review the key and value for the first new event. Here are the details of the key for the first new event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key" "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } This key is the same as the key in the two events you looked at. Here is the value of the first new event (formatted for readability): { "schema": {...}, "payload": { "before": { 1 "id": 1004, "first_name": "Anne Marie", "last_name": "Kretchmar", "email": "[email protected]" }, "after": null, 2 "source": { 3 "name": "2.7.3.Final", "name": "dbserver1", "server_id": 223344, "ts_sec": 1486501558, "gtid": null, "file": "mysql-bin.000003", "pos": 725, "row": 0, "snapshot": null, "thread": 3, "db": "inventory", "table": "customers" }, "op": "d", 4 "ts_ms": 1486501558315, 5 "ts_us": 1486501558315901, 6 "ts_ns": 1486501558315901687 7 } } Table 4.2. Descriptions of fields in an event value Item Description 1 The before field now has the state of the row that was deleted with the database commit. 2 The after field is null because the row no longer exists. 3 The source field structure has many of the same values as before, except that the values of the ts_sec and pos fields have changed. In some circumstances, the file value might also change. 4 he op field value is now d , indicating that the record was deleted. 5 The ts_ms , ts_us , and ts_ns fields show timestamps that indicate when Debezium processed the event. Thus, this event provides a consumer with the information that it needs to process the removal of the row. The old values are also provided, because some consumers might require them to properly handle the removal. Review the key and value for the second new event. Here is the key for the second new event (formatted for readability): { "schema": { "type": "struct", "name": "dbserver1.inventory.customers.Key" "optional": false, "fields": [ { "field": "id", "type": "int32", "optional": false } ] }, "payload": { "id": 1004 } } Once again, this key is exactly the same key as in the three events you looked at. Here is the value of that same event (formatted for readability): { "schema": null, "payload": null } If Kafka is set up to be log compacted , it will remove older messages from the topic if there is at least one message later in the topic with same key. 
This last event is called a tombstone event, because it has a key and an empty value. This means that Kafka will remove all prior messages with the same key. Even though the prior messages will be removed, the tombstone event means that consumers can still read the topic from the beginning and not miss any events. 4.4. Restarting the Kafka Connect service Now that you have seen how the Debezium MySQL connector captures create, update, and delete events, you will now see how it can capture change events even when it is not running. The Kafka Connect service automatically manages tasks for its registered connectors. Therefore, if it goes offline, when it restarts, it will start any non-running tasks. This means that even if Debezium is not running, it can still report changes in a database. In this procedure, you will stop Kafka Connect, change some data in the database, and then restart Kafka Connect to see the change events. Procedure Stop the Kafka Connect service. Open the configuration for the Kafka Connect deployment: USD oc edit deployment/my-connect-cluster-connect The deployment configuration opens: apiVersion: apps.openshift.io/v1 kind: Deployment metadata: ... spec: replicas: 1 ... Change the spec.replicas value to 0 . Save the configuration. Verify that the Kafka Connect service has stopped. This command shows that the Kafka Connect service is completed, and that no pods are running: USD oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-1-dxcs9 0/1 Completed 0 7h While the Kafka Connect service is down, switch to the terminal running the MySQL client, and add a new record to the database. mysql> INSERT INTO customers VALUES (default, "Sarah", "Thompson", "[email protected]"); Restart the Kafka Connect service. Open the deployment configuration for the Kafka Connect service. USD oc edit deployment/my-connect-cluster-connect The deployment configuration opens: apiVersion: apps.openshift.io/v1 kind: Deployment metadata: ... spec: replicas: 0 ... Change the spec.replicas value to 1 . Save the deployment configuration. Verify that the Kafka Connect service has restarted. This command shows that the Kafka Connect service is running, and that the pod is ready: USD oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-2-q9kkl 1/1 Running 0 74s Switch to the terminal that is running kafka-console-consumer.sh . New events pop up as they arrive. Examine the record that you created when Kafka Connect was offline (formatted for readability): { ... "payload":{ "id":1005 } } { ... "payload":{ "before":null, "after":{ "id":1005, "first_name":"Sarah", "last_name":"Thompson", "email":"[email protected]" }, "source":{ "version":"2.7.3.Final", "connector":"mysql", "name":"dbserver1", "ts_ms":1582581502000, "snapshot":"false", "db":"inventory", "table":"customers", "server_id":223344, "gtid":null, "file":"mysql-bin.000004", "pos":364, "row":0, "thread":5, "query":null }, "op":"c", "ts_ms":1582581502317 } } | [
"oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --property print.key=true --topic dbserver1.inventory.customers",
"{ \"schema\":{ \"type\":\"struct\", \"fields\":[ { \"type\":\"int32\", \"optional\":false, \"field\":\"id\" } ], \"optional\":false, \"name\":\"dbserver1.inventory.customers.Key\" }, \"payload\":{ \"id\":1004 } }",
"{ \"schema\": { \"type\": \"struct\", \"fields\": [ { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"dbserver1.inventory.customers.Value\", \"field\": \"before\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"int32\", \"optional\": false, \"field\": \"id\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"first_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"last_name\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"email\" } ], \"optional\": true, \"name\": \"dbserver1.inventory.customers.Value\", \"field\": \"after\" }, { \"type\": \"struct\", \"fields\": [ { \"type\": \"string\", \"optional\": true, \"field\": \"version\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"name\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"server_id\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"ts_sec\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"gtid\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"file\" }, { \"type\": \"int64\", \"optional\": false, \"field\": \"pos\" }, { \"type\": \"int32\", \"optional\": false, \"field\": \"row\" }, { \"type\": \"boolean\", \"optional\": true, \"field\": \"snapshot\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"thread\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"db\" }, { \"type\": \"string\", \"optional\": true, \"field\": \"table\" } ], \"optional\": false, \"name\": \"io.debezium.connector.mysql.Source\", \"field\": \"source\" }, { \"type\": \"string\", \"optional\": false, \"field\": \"op\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ms\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_us\" }, { \"type\": \"int64\", \"optional\": true, \"field\": \"ts_ns\" } ], \"optional\": false, \"name\": \"dbserver1.inventory.customers.Envelope\", \"version\": 1 }, \"payload\": { \"before\": null, \"after\": { \"id\": 1004, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { \"version\": \"2.7.3.Final\", \"name\": \"dbserver1\", \"server_id\": 0, \"ts_sec\": 0, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 154, \"row\": 0, \"snapshot\": true, \"thread\": null, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"r\", \"ts_ms\": 1486500577691, \"ts_us\": 1486500577691547, \"ts_ns\": 1486500577691547930 } }",
"mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec)",
"mysql> UPDATE customers SET first_name='Anne Marie' WHERE id=1004; Query OK, 1 row affected (0.05 sec) Rows matched: 1 Changed: 1 Warnings: 0",
"mysql> SELECT * FROM customers; +------+------------+-----------+-----------------------+ | id | first_name | last_name | email | +------+------------+-----------+-----------------------+ | 1001 | Sally | Thomas | [email protected] | | 1002 | George | Bailey | [email protected] | | 1003 | Edward | Walker | [email protected] | | 1004 | Anne Marie | Kretchmar | [email protected] | +------+------------+-----------+-----------------------+ 4 rows in set (0.00 sec)",
"{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }",
"{ \"schema\": {...}, \"payload\": { \"before\": { 1 \"id\": 1004, \"first_name\": \"Anne\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"after\": { 2 \"id\": 1004, \"first_name\": \"Anne Marie\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"source\": { 3 \"name\": \"2.7.3.Final\", \"name\": \"dbserver1\", \"server_id\": 223344, \"ts_sec\": 1486501486, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 364, \"row\": 0, \"snapshot\": null, \"thread\": 3, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"u\", 4 \"ts_ms\": 1486501486308, 5 \"ts_us\": 1486501486308910, 6 \"ts_ns\": 1486501486308910814 7 } }",
"mysql> DELETE FROM customers WHERE id=1004; Query OK, 1 row affected (0.00 sec)",
"mysql> DELETE FROM addresses WHERE customer_id=1004;",
"{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }",
"{ \"schema\": {...}, \"payload\": { \"before\": { 1 \"id\": 1004, \"first_name\": \"Anne Marie\", \"last_name\": \"Kretchmar\", \"email\": \"[email protected]\" }, \"after\": null, 2 \"source\": { 3 \"name\": \"2.7.3.Final\", \"name\": \"dbserver1\", \"server_id\": 223344, \"ts_sec\": 1486501558, \"gtid\": null, \"file\": \"mysql-bin.000003\", \"pos\": 725, \"row\": 0, \"snapshot\": null, \"thread\": 3, \"db\": \"inventory\", \"table\": \"customers\" }, \"op\": \"d\", 4 \"ts_ms\": 1486501558315, 5 \"ts_us\": 1486501558315901, 6 \"ts_ns\": 1486501558315901687 7 } }",
"{ \"schema\": { \"type\": \"struct\", \"name\": \"dbserver1.inventory.customers.Key\" \"optional\": false, \"fields\": [ { \"field\": \"id\", \"type\": \"int32\", \"optional\": false } ] }, \"payload\": { \"id\": 1004 } }",
"{ \"schema\": null, \"payload\": null }",
"oc edit deployment/my-connect-cluster-connect",
"apiVersion: apps.openshift.io/v1 kind: Deployment metadata: spec: replicas: 1",
"oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-1-dxcs9 0/1 Completed 0 7h",
"mysql> INSERT INTO customers VALUES (default, \"Sarah\", \"Thompson\", \"[email protected]\");",
"oc edit deployment/my-connect-cluster-connect",
"apiVersion: apps.openshift.io/v1 kind: Deployment metadata: spec: replicas: 0",
"oc get pods -l strimzi.io/name=my-connect-cluster-connect NAME READY STATUS RESTARTS AGE my-connect-cluster-connect-2-q9kkl 1/1 Running 0 74s",
"{ \"payload\":{ \"id\":1005 } } { \"payload\":{ \"before\":null, \"after\":{ \"id\":1005, \"first_name\":\"Sarah\", \"last_name\":\"Thompson\", \"email\":\"[email protected]\" }, \"source\":{ \"version\":\"2.7.3.Final\", \"connector\":\"mysql\", \"name\":\"dbserver1\", \"ts_ms\":1582581502000, \"snapshot\":\"false\", \"db\":\"inventory\", \"table\":\"customers\", \"server_id\":223344, \"gtid\":null, \"file\":\"mysql-bin.000004\", \"pos\":364, \"row\":0, \"thread\":5, \"query\":null }, \"op\":\"c\", \"ts_ms\":1582581502317 } }"
] | https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/getting_started_with_debezium/viewing-change-events |
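Optional illustration for the Debezium tutorial above: the walkthrough reads change events with kafka-console-consumer.sh, but a Java application can consume the same dbserver1.inventory.customers topic with the standard Apache Kafka client and inspect the event envelope. This sketch is not part of the tutorial; the bootstrap address, consumer group ID, and class name are assumptions, and Jackson is used only as one convenient way to read the JSON payload.

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class CustomerChangeEventReader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address and group ID; replace with the values for your Kafka cluster.
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "customer-event-reader");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("dbserver1.inventory.customers"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value() == null) {
                        // Tombstone event that follows a delete; log compaction removes earlier events with this key.
                        continue;
                    }
                    JsonNode payload = mapper.readTree(record.value()).path("payload");
                    String op = payload.path("op").asText(); // "c", "u", "d", or "r" as described in the tutorial
                    JsonNode after = payload.path("after");  // null for delete events
                    System.out.printf("op=%s key=%s after=%s%n", op, record.key(), after);
                }
            }
        }
    }
}

As with the console consumer, each record value carries the schema and payload envelope, so the before, after, and source fields can be read in the same way as shown in the tutorial output.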
Chapter 113. Ganglia Component | Chapter 113. Ganglia Component Available as of Camel version 2.15 Provides a mechanism to send a value (the message body) as a metric to the Ganglia monitoring system. Uses the gmetric4j library. Can be used in conjunction with standard Ganglia and JMXetric for monitoring metrics from the OS, JVM and business processes through a single platform. You should have a Ganglia gmond agent running on the machine where your JVM runs. The gmond agent sends a heartbeat to the Ganglia infrastructure; camel-ganglia cannot currently send the heartbeat itself. On most Linux systems (Debian, Ubuntu, Fedora and RHEL/CentOS with EPEL) you can just install the Ganglia agent package and it runs automatically using multicast configuration. You can configure it to use regular UDP unicast if you prefer. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ganglia</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 113.1. URI format ganglia:address:port[?options] You can append query options to the URI in the following format, ?option=value&option=value&... 113.2. Ganglia component and endpoint URI options The Ganglia component supports 2 options, which are listed below. Name Description Default Type configuration (advanced) To use the shared configuration GangliaConfiguration resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Ganglia endpoint is configured using URI syntax: with the following path and query parameters: 113.2.1. Path Parameters (2 parameters): Name Description Default Type host Host name for Ganglia server 239.2.11.71 String port Port for Ganglia server 8649 int 113.2.2. Query Parameters (13 parameters): Name Description Default Type dmax (producer) Minimum time in seconds before Ganglia will purge the metric value if it expires. Set to 0 and the value will remain in Ganglia indefinitely until a gmond agent restart. 0 int groupName (producer) The group that the metric belongs to. java String metricName (producer) The name to use for the metric. metric String mode (producer) Send the UDP metric packets using MULTICAST or UNICAST MULTICAST UDPAddressingMode prefix (producer) Prefix the metric name with this string and an underscore. String slope (producer) The slope BOTH GMetricSlope spoofHostname (producer) Spoofing information IP:hostname String tmax (producer) Maximum time in seconds that the value can be considered current. After this, Ganglia considers the value to have expired. 60 int ttl (producer) If using multicast, set the TTL of the packets 5 int type (producer) The type of value STRING GMetricType units (producer) Any unit of measurement that qualifies the metric, e.g. widgets, litres, bytes. Do not include a prefix such as k (kilo) or m (milli), other tools may scale the units later. The value should be unscaled. String wireFormat31x (producer) Use the wire format of Ganglia 3.1.0 and later versions. Set this to false to use Ganglia 3.0.x or earlier. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 113.3. Spring Boot Auto-Configuration The component supports 16 options, which are listed below.
Name Description Default Type camel.component.ganglia.configuration.dmax Minimum time in seconds before Ganglia will purge the metric value if it expires. Set to 0 and the value will remain in Ganglia indefinitely until a gmond agent restart. 0 Integer camel.component.ganglia.configuration.group-name The group that the metric belongs to. java String camel.component.ganglia.configuration.host Host name for Ganglia server 239.2.11.71 String camel.component.ganglia.configuration.metric-name The name to use for the metric. metric String camel.component.ganglia.configuration.mode Send the UDP metric packets using MULTICAST or UNICAST GMetricUSDUDPAddressingMode camel.component.ganglia.configuration.port Port for Ganglia server 8649 Integer camel.component.ganglia.configuration.prefix Prefix the metric name with this string and an underscore. String camel.component.ganglia.configuration.slope The slope GMetricSlope camel.component.ganglia.configuration.spoof-hostname Spoofing information IP:hostname String camel.component.ganglia.configuration.tmax Maximum time in seconds that the value can be considered current. After this, Ganglia considers the value to have expired. 60 Integer camel.component.ganglia.configuration.ttl If using multicast, set the TTL of the packets 5 Integer camel.component.ganglia.configuration.type The type of value GMetricType camel.component.ganglia.configuration.units Any unit of measurement that qualifies the metric, e.g. widgets, litres, bytes. Do not include a prefix such as k (kilo) or m (milli), other tools may scale the units later. The value should be unscaled. String camel.component.ganglia.configuration.wire-format31x Use the wire format of Ganglia 3.1.0 and later versions. Set this to false to use Ganglia 3.0.x or earlier. true Boolean camel.component.ganglia.enabled Enable ganglia component true Boolean camel.component.ganglia.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 113.4. Message body Any value (such as a string or numeric type) in the body is sent to the Ganglia system. 113.5. Return value / response Ganglia sends metrics using unidirectional UDP or multicast. There is no response or change to the message body. 113.6. Examples 113.6.1. Sending a String metric The message body will be converted to a String and sent as a metric value. Unlike numeric metrics, String values can't be charted but Ganglia makes them available for reporting. The os_version string at the top of every Ganglia host page is an example of a String metric. from("direct:string.for.ganglia") .setHeader(GangliaConstants.METRIC_NAME, simple("my_string_metric")) .setHeader(GangliaConstants.METRIC_TYPE, GMetricType.STRING) .to("direct:ganglia.tx"); from("direct:ganglia.tx") .to("ganglia:239.2.11.71:8649?mode=MULTICAST&prefix=test"); 113.6.2. Sending a numeric metric from("direct:value.for.ganglia") .setHeader(GangliaConstants.METRIC_NAME, simple("widgets_in_stock")) .setHeader(GangliaConstants.METRIC_TYPE, GMetricType.UINT32) .setHeader(GangliaConstants.METRIC_UNITS, simple("widgets")) .to("direct:ganglia.tx"); from("direct:ganglia.tx") .to("ganglia:239.2.11.71:8649?mode=MULTICAST&prefix=test"); | [
"<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ganglia</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>",
"ganglia:address:port[?options]",
"ganglia:host:port",
"from(\"direct:string.for.ganglia\") .setHeader(GangliaConstants.METRIC_NAME, simple(\"my_string_metric\")) .setHeader(GangliaConstants.METRIC_TYPE, GMetricType.STRING) .to(\"direct:ganglia.tx\"); from(\"direct:ganglia.tx\") .to(\"ganglia:239.2.11.71:8649?mode=MULTICAST&prefix=test\");",
"from(\"direct:value.for.ganglia\") .setHeader(GangliaConstants.METRIC_NAME, simple(\"widgets_in_stock\")) .setHeader(GangliaConstants.METRIC_TYPE, GMetricType.UINT32) .setHeader(GangliaConstants.METRIC_UNITS, simple(\"widgets\")) .to(\"direct:ganglia.tx\"); from(\"direct:ganglia.tx\") .to(\"ganglia:239.2.11.71:8649?mode=MULTICAST&prefix=test\");"
] | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ganglia-component |
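A further sketch for the Ganglia chapter above: the tmax, dmax, and ttl query parameters from the endpoint options table can be combined on the producer URI. The route below follows the same pattern as the examples in the chapter; the timer schedule, the metric name, and the idea of reporting JVM free memory are assumptions for illustration only, not part of the component documentation.

from("timer:memory?period=60000")
    // Illustrative value: report free JVM heap in megabytes; any numeric body is sent as the metric value.
    .process(exchange -> exchange.getIn().setBody(Runtime.getRuntime().freeMemory() / (1024 * 1024)))
    .setHeader(GangliaConstants.METRIC_NAME, simple("jvm_free_megabytes"))
    .setHeader(GangliaConstants.METRIC_TYPE, GMetricType.UINT32)
    .setHeader(GangliaConstants.METRIC_UNITS, simple("MB"))
    // tmax=120: value considered current for 120s; dmax=600: purge after 600s; ttl=5: multicast hop limit.
    .to("ganglia:239.2.11.71:8649?mode=MULTICAST&prefix=test&tmax=120&dmax=600&ttl=5");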
function::qsq_utilization | function::qsq_utilization Name function::qsq_utilization - Fraction of time that any request was being serviced Synopsis Arguments qname queue name scale scale variable to take account for interval fraction Description This function returns the average time in microseconds that at least one request was being serviced. | [
"qsq_utilization:long(qname:string,scale:long)"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-qsq-utilization |
Chapter 1. Hot Rod Java Clients | Chapter 1. Hot Rod Java Clients Access Data Grid remotely through the Hot Rod Java client API. 1.1. Hot Rod Protocol Hot Rod is a binary TCP protocol that Data Grid offers for high-performance client-server interactions with the following capabilities: Load balancing. Hot Rod clients can send requests across Data Grid clusters using different strategies. Failover. Hot Rod clients can monitor Data Grid cluster topology changes and automatically switch to available nodes. Efficient data location. Hot Rod clients can find key owners and make requests directly to those nodes, which reduces latency. 1.2. Client Intelligence Hot Rod clients use intelligence mechanisms to efficiently send requests to Data Grid Server clusters. By default, the Hot Rod protocol has the HASH_DISTRIBUTION_AWARE intelligence mechanism enabled. BASIC intelligence Clients do not receive topology change events for Data Grid clusters, such as nodes joining or leaving, and use only the list of Data Grid Server network locations that you add to the client configuration. Note Enable BASIC intelligence to use the Hot Rod client configuration when a Data Grid Server does not send internal and hidden cluster topology to the Hot Rod client. TOPOLOGY_AWARE intelligence Clients receive and store topology change events for Data Grid clusters to dynamically keep track of Data Grid Servers on the network. To receive cluster topology, clients need the network location, either IP address or host name, of at least one Hot Rod server at startup. After the client connects, Data Grid Server transmits the topology to the client. When Data Grid Server nodes join or leave the cluster, Data Grid transmits an updated topology to the client. HASH_DISTRIBUTION_AWARE intelligence Clients receive and store topology change events for Data Grid clusters in addition to hashing information that enables clients to identify which nodes store specific keys. For example, consider a put(k,v) operation. The client calculates the hash value for the key so it can locate the exact Data Grid Server node on which the data resides. Clients can then connect directly to that node to perform read and write operations. The benefit of HASH_DISTRIBUTION_AWARE intelligence is that Data Grid Server does not need to look up values based on key hashes, which uses fewer server-side resources. Another benefit is that Data Grid Server responds to client requests more quickly because they do not need to make additional network roundtrips. Configuration By default, the Hot Rod client uses the intelligence that you configure globally for all the Data Grid clusters. ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clientIntelligence(ClientIntelligence.BASIC); hotrod-client.properties When you configure the Hot Rod client to use multiple Data Grid clusters, you can use different intelligence for each of the clusters. ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addCluster("NYC").clusterClientIntelligence(ClientIntelligence.BASIC); hotrod-client.properties Failed Server Timeout If a server does not report the topology as BASIC, or the client is unable to connect to a server due to network issues, the client will mark the server as failed. A client does not attempt to connect to a server marked as failed until the client receives an updated topology. Because BASIC topology never sends an update, the client will not re-attempt connection.
To avoid such a situation, you can use the serverFailureTimeout setting, which clears the failed server status after a defined period of time. Data Grid will try to reconnect to the server after the defined timeout. If the server is still unreachable, it is marked as failed again and the connection will be re-attempted after the defined timeout. You can disable reconnection attempts by setting the serverFailureTimeout value to -1 . ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.serverFailureTimeout(5000).clusterClientIntelligence(ClientIntelligence.BASIC); hotrod-client.properties Additional resources org.infinispan.client.hotrod.configuration.ClientIntelligence 1.3. Request Balancing Hot Rod Java clients balance requests to Data Grid Server clusters so that read and write operations are spread across nodes. Clients that use BASIC or TOPOLOGY_AWARE intelligence use request balancing for all requests. Clients that use HASH_DISTRIBUTION_AWARE intelligence send requests directly to the node that stores the desired key. If the node does not respond, the clients then fall back to request balancing. The default balancing strategy is round-robin, so Hot Rod clients perform request balancing as in the following example where s1 , s2 , s3 are nodes in a Data Grid cluster: // Connect to the Data Grid cluster RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); // Obtain the remote cache RemoteCache<String, String> cache = cacheManager.getCache("test"); //Hot Rod client sends a request to the "s1" node cache.put("key1", "aValue"); //Hot Rod client sends a request to the "s2" node cache.put("key2", "aValue"); //Hot Rod client sends a request to the "s3" node String value = cache.get("key1"); //Hot Rod client sends the request to the "s1" node again cache.remove("key2"); Custom balancing policies You can use custom FailoverRequestBalancingStrategy implementations if you add your class to the Hot Rod client configuration. ConfigurationBuilder ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host("127.0.0.1") .port(11222) .balancingStrategy(new MyCustomBalancingStrategy()); hotrod-client.properties Additional resources org.infinispan.client.hotrod.FailoverRequestBalancingStrategy 1.4. Client Failover Hot Rod clients can automatically fail over when Data Grid cluster topologies change. For instance, Hot Rod clients that are topology-aware can detect when one or more Data Grid servers fail. In addition to failover between clustered Data Grid servers, Hot Rod clients can fail over between Data Grid clusters. For example, you have a Data Grid cluster running in New York ( NYC ) and another cluster running in London ( LON ). Clients sending requests to NYC detect that no nodes are available, so they switch to the cluster in LON . Clients then maintain connections to LON until you manually switch clusters or failover happens again. Transactional Caches with Failover Conditional operations, such as putIfAbsent() , replace() , remove() , have strict method return guarantees. Likewise, some operations can require values to be returned. Even though Hot Rod clients can fail over, you should use transactional caches to ensure that operations do not partially complete and leave conflicting entries on different nodes. 1.5. Hot Rod client compatibility with Data Grid Server Data Grid Server allows you to connect Hot Rod clients with different versions.
For instance, during a migration or upgrade of your Data Grid cluster, the Hot Rod client might use a lower Data Grid version than the Data Grid Server. Tip Data Grid recommends using the latest Hot Rod client version to benefit from the most recent capabilities and security enhancements. Data Grid 8 and later Hot Rod protocol version 3.x automatically negotiates the highest version possible for clients with Data Grid Server. Data Grid 7.3 and earlier Clients that use a Hot Rod protocol version that is higher than the Data Grid Server version must set the infinispan.client.hotrod.protocol_version property. Additional resources Hot Rod protocol reference Connecting Hot Rod clients to servers with different versions (Red Hat Knowledgebase) | [
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.clientIntelligence(ClientIntelligence.BASIC);",
"infinispan.client.hotrod.client_intelligence=BASIC",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addCluster(\"NYC\").clusterClientIntelligence(ClientIntelligence.BASIC);",
"infinispan.client.hotrod.cluster.intelligence.NYC=BASIC",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.serverFailureTimeout(5000).clusterClientIntelligence(ClientIntelligence.BASIC);",
"infinispan.client.hotrod.server_failure_timeout=5000 infinispan.client.hotrod.client_intelligence=BASIC",
"// Connect to the Data Grid cluster RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build()); // Obtain the remote cache RemoteCache<String, String> cache = cacheManager.getCache(\"test\"); //Hot Rod client sends a request to the \"s1\" node cache.put(\"key1\", \"aValue\"); //Hot Rod client sends a request to the \"s2\" node cache.put(\"key2\", \"aValue\"); //Hot Rod client sends a request to the \"s3\" node String value = cache.get(\"key1\"); //Hot Rod client sends the next request to the \"s1\" node again cache.remove(\"key2\");",
"ConfigurationBuilder builder = new ConfigurationBuilder(); builder.addServer() .host(\"127.0.0.1\") .port(11222) .balancingStrategy(new MyCustomBalancingStrategy());",
"infinispan.client.hotrod.request_balancing_strategy=my.package.MyCustomBalancingStrategy"
] | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/hot_rod_java_client_guide/hotrod-java-client |
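A further illustration for the client failover section in the chapter above: failover between clusters such as NYC and LON relies on backup clusters being defined in the client configuration, and the RemoteCacheManager API also allows a manual switch. The host names, ports, and cache name below are assumptions for the sketch; it is not a definitive configuration for any particular deployment.

ConfigurationBuilder builder = new ConfigurationBuilder();
// Initial servers, for example the NYC cluster (example host names).
builder.addServer().host("nyc-node-1").port(11222);
// Backup cluster definition; the client fails over to these nodes if the initial servers become unavailable.
builder.addCluster("LON")
       .addClusterNode("lon-node-1", 11222)
       .addClusterNode("lon-node-2", 11222);

RemoteCacheManager cacheManager = new RemoteCacheManager(builder.build());
RemoteCache<String, String> cache = cacheManager.getCache("test");
cache.put("key1", "aValue");

// Manually switch to the named backup cluster instead of waiting for automatic failover.
cacheManager.switchToCluster("LON");
// Return to the servers defined with addServer() once they are reachable again.
cacheManager.switchToDefaultCluster();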
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal. Data Grid 8.4 Documentation Data Grid 8.4 Component Details Supported Configurations for Data Grid 8.4 Data Grid 8 Feature Support Data Grid Deprecated Features and Functionality | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/migrating_to_data_grid_8/rhdg-docs_datagrid |
Appendix A. Ceph subsystems default logging level values | Appendix A. Ceph subsystems default logging level values A table of the default logging level values for the various Ceph subsystems. Subsystem Log Level Memory Level asok 1 5 auth 1 5 buffer 0 0 client 0 5 context 0 5 crush 1 5 default 0 5 filer 0 5 bluestore 1 5 finisher 1 5 heartbeatmap 1 5 javaclient 1 5 journaler 0 5 journal 1 5 lockdep 0 5 mds balancer 1 5 mds locker 1 5 mds log expire 1 5 mds log 1 5 mds migrator 1 5 mds 1 5 monc 0 5 mon 1 5 ms 0 5 objclass 0 5 objectcacher 0 5 objecter 0 0 optracker 0 5 osd 0 5 paxos 0 5 perfcounter 1 5 rados 0 5 rbd 0 5 rgw 1 5 throttle 1 5 timer 0 5 tp 0 5 | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/troubleshooting_guide/ceph-subsystems-default-logging-level-values_diag |
Chapter 7. Subscriptions | Chapter 7. Subscriptions 7.1. Creating subscriptions After you have created a channel and an event sink, you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the sink (also known as a subscriber ) to deliver events to. 7.1.1. Creating a subscription by using the Administrator perspective After you have created a channel and an event sink, also known as a subscriber , you can create a subscription to enable event delivery. Subscriptions are created by configuring a Subscription object, which specifies the channel and the subscriber to deliver events to. You can also specify some subscriber-specific options, such as how to handle failures. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a Knative channel. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Channel tab, select the Options menu for the channel that you want to add a subscription to. Click Add Subscription in the list. In the Add Subscription dialogue box, select a Subscriber for the subscription. The subscriber is the Knative service that receives events from the channel. Click Add . 7.1.2. Creating a subscription by using the Developer perspective After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a subscription. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created an event sink, such as a Knative service, and a channel. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure In the Developer perspective, navigate to the Topology page. Create a subscription using one of the following methods: Hover over the channel that you want to create a subscription for, and drag the arrow. The Add Subscription option is displayed. Select your sink in the Subscriber list. Click Add . If the service is available in the Topology view under the same namespace or project as the channel, click on the channel that you want to create a subscription for, and drag the arrow directly to a service to immediately create a subscription from the channel to that service. Verification After the subscription has been created, you can see it represented as a line that connects the channel to the service in the Topology view: 7.1.3. Creating a subscription by using YAML After you have created a channel and an event sink, you can create a subscription to enable event delivery. Creating Knative resources by using YAML files uses a declarative API, which enables you to describe subscriptions declaratively and in a reproducible manner. 
To create a subscription by using YAML, you must create a YAML file that defines a Subscription object, then apply it by using the oc apply command. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on the cluster. Install the OpenShift CLI ( oc ). You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a Subscription object: Create a YAML file and copy the following sample code into it: apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display 1 Name of the subscription. 2 Configuration settings for the channel that the subscription connects to. 3 Configuration settings for event delivery. This tells the subscription what happens to events that cannot be delivered to the subscriber. When this is configured, events that failed to be consumed are sent to the deadLetterSink . The event is dropped, no re-delivery of the event is attempted, and an error is logged in the system. The deadLetterSink value must be a Destination . 4 Configuration settings for the subscriber. This is the event sink that events are delivered to from the channel. Apply the YAML file: USD oc apply -f <filename> 7.1.4. Creating a subscription by using the Knative CLI After you have created a channel and an event sink, you can create a subscription to enable event delivery. Using the Knative ( kn ) CLI to create subscriptions provides a more streamlined and intuitive user interface than modifying YAML files directly. You can use the kn subscription create command with the appropriate flags to create a subscription. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a subscription to connect a sink to a channel: USD kn subscription create <subscription_name> \ --channel <group:version:kind>:<channel_name> \ 1 --sink <sink_prefix>:<sink_name> \ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3 1 --channel specifies the source for cloud events that should be processed. You must provide the channel name. If you are not using the default InMemoryChannel channel that is backed by the Channel custom resource, you must prefix the channel name with the <group:version:kind> for the specified channel type. For example, this will be messaging.knative.dev:v1beta1:KafkaChannel for an Apache Kafka backed channel. 2 --sink specifies the target destination to which the event should be delivered. By default, the <sink_name> is interpreted as a Knative service of this name, in the same namespace as the subscription. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 
3 Optional: --sink-dead-letter is an optional flag that can be used to specify a sink which events should be sent to in cases where events fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription create mysubscription --channel mychannel --sink ksvc:event-display Example output Subscription 'mysubscription' created in namespace 'default'. Verification To confirm that the channel is connected to the event sink, or subscriber , by a subscription, list the existing subscriptions and inspect the output: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True Deleting a subscription Delete a subscription: USD kn subscription delete <subscription_name> 7.1.5. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 7.2. Managing subscriptions 7.2.1. Describing subscriptions by using the Knative CLI You can use the kn subscription describe command to print information about a subscription in the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to describe subscriptions provides a more streamlined and intuitive user interface than viewing YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription in your cluster. Procedure Describe a subscription: USD kn subscription describe <subscription_name> Example output Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min ... Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s 7.2.2. Listing subscriptions by using the Knative CLI You can use the kn subscription list command to list existing subscriptions on your cluster by using the Knative ( kn ) CLI. Using the Knative CLI to list subscriptions provides a streamlined and intuitive user interface. Prerequisites You have installed the Knative ( kn ) CLI. Procedure List subscriptions on your cluster: USD kn subscription list Example output NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True 7.2.3. Updating subscriptions by using the Knative CLI You can use the kn subscription update command as well as the appropriate flags to update a subscription from the terminal by using the Knative ( kn ) CLI. Using the Knative CLI to update subscriptions provides a more streamlined and intuitive user interface than updating YAML files directly. Prerequisites You have installed the Knative ( kn ) CLI. You have created a subscription. Procedure Update a subscription: USD kn subscription update <subscription_name> \ --sink <sink_prefix>:<sink_name> \ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2 1 --sink specifies the updated target destination to which the event should be delivered. You can specify the type of the sink by using one of the following prefixes: ksvc A Knative service. channel A channel that should be used as destination. Only default channel types can be referenced here. broker An Eventing broker. 
2 Optional: --sink-dead-letter specifies a sink to which events are sent if they fail to be delivered. For more information, see the OpenShift Serverless Event delivery documentation. Example command USD kn subscription update mysubscription --sink ksvc:event-display | [
"apiVersion: messaging.knative.dev/v1beta1 kind: Subscription metadata: name: my-subscription 1 namespace: default spec: channel: 2 apiVersion: messaging.knative.dev/v1beta1 kind: Channel name: example-channel delivery: 3 deadLetterSink: ref: apiVersion: serving.knative.dev/v1 kind: Service name: error-handler subscriber: 4 ref: apiVersion: serving.knative.dev/v1 kind: Service name: event-display",
"oc apply -f <filename>",
"kn subscription create <subscription_name> --channel <group:version:kind>:<channel_name> \\ 1 --sink <sink_prefix>:<sink_name> \\ 2 --sink-dead-letter <sink_prefix>:<sink_name> 3",
"kn subscription create mysubscription --channel mychannel --sink ksvc:event-display",
"Subscription 'mysubscription' created in namespace 'default'.",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription delete <subscription_name>",
"kn subscription describe <subscription_name>",
"Name: my-subscription Namespace: default Annotations: messaging.knative.dev/creator=openshift-user, messaging.knative.dev/lastModifier=min Age: 43s Channel: Channel:my-channel (messaging.knative.dev/v1) Subscriber: URI: http://edisplay.default.example.com Reply: Name: default Resource: Broker (eventing.knative.dev/v1) DeadLetterSink: Name: my-sink Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 43s ++ AddedToChannel 43s ++ ChannelReady 43s ++ ReferencesResolved 43s",
"kn subscription list",
"NAME CHANNEL SUBSCRIBER REPLY DEAD LETTER SINK READY REASON mysubscription Channel:mychannel ksvc:event-display True",
"kn subscription update <subscription_name> --sink <sink_prefix>:<sink_name> \\ 1 --sink-dead-letter <sink_prefix>:<sink_name> 2",
"kn subscription update mysubscription --sink ksvc:event-display"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/subscriptions |
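The "Next steps" note in the preceding entry points to configuring event delivery parameters for events that fail to reach the sink. As a hedged sketch only, assuming the retry, backoffPolicy, and backoffDelay fields of the Knative delivery specification are accepted on the Subscription's delivery block in this release (the values shown are illustrative, not taken from the original procedure), such a configuration might look like this:

apiVersion: messaging.knative.dev/v1beta1
kind: Subscription
metadata:
  name: my-subscription
  namespace: default
spec:
  channel:
    apiVersion: messaging.knative.dev/v1beta1
    kind: Channel
    name: example-channel
  delivery:
    deadLetterSink:
      ref:
        apiVersion: serving.knative.dev/v1
        kind: Service
        name: error-handler
    retry: 3                    # assumed: delivery attempts before falling back to the dead letter sink
    backoffPolicy: exponential  # assumed: exponential or linear backoff between retries
    backoffDelay: "PT0.5S"      # assumed: ISO-8601 duration used as the base delay
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display

Applying this file with oc apply -f, as in the YAML procedure above, would replace the existing Subscription definition.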
14.2.2. Starting and Stopping Samba | 14.2.2. Starting and Stopping Samba To start a Samba server, type the following command in a shell prompt while logged in as root: Important To set up a domain member server, you must first join the domain or Active Directory using the net join command before starting the smb service. To stop the server, type the following command in a shell prompt while logged in as root: The restart option is a quick way of stopping and then starting Samba. This is the most reliable way to make configuration changes take effect after editing the configuration file for Samba. Note that the restart option starts the daemon even if it was not running originally. To restart the server, type the following command in a shell prompt while logged in as root: The condrestart ( conditional restart ) option only starts smb on the condition that it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running. Note When the smb.conf file is changed, Samba automatically reloads it after a few minutes. Issuing a manual restart or reload is just as effective. To conditionally restart the server, type the following command as root: A manual reload of the smb.conf file can be useful in case of a failed automatic reload by the smb service. To ensure that the Samba server configuration file is reloaded without restarting the service, type the following command as root: By default, the smb service does not start automatically at boot time. To configure Samba to start at boot time, use an initscript utility, such as /sbin/chkconfig , /usr/sbin/ntsysv , or the Services Configuration Tool program. Refer to the chapter titled Controlling Access to Services in the System Administrators Guide for more information regarding these tools. | [
"service smb start",
"service smb stop",
"service smb restart",
"service smb condrestart",
"service smb reload"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-startstop |
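The preceding entry notes that the smb service does not start at boot by default and points to initscript utilities such as /sbin/chkconfig. A minimal sketch using chkconfig, assuming the usual multi-user runlevels 3, 4, and 5 (the runlevel choice is an assumption, not stated in the original text), might look like this:

# Enable the smb service in runlevels 3, 4, and 5 so Samba starts at boot
/sbin/chkconfig --level 345 smb on

# Verify the per-runlevel on/off state that chkconfig recorded
/sbin/chkconfig --list smb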
Installing on IBM Power | Installing on IBM Power OpenShift Container Platform 4.13 Installing OpenShift Container Platform on IBM Power Red Hat OpenShift Documentation Team | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir <installation_directory>",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture : ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 additionalTrustBundle: | 17 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 18 - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <local_registry>/<local_repository_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/installing_on_ibm_power/index |
Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton | Chapter 3. Migrating from Jenkins to OpenShift Pipelines or Tekton You can migrate your CI/CD workflows from Jenkins to Red Hat OpenShift Pipelines , a cloud-native CI/CD experience based on the Tekton project. 3.1. Comparison of Jenkins and OpenShift Pipelines concepts You can review and compare the following equivalent terms used in Jenkins and OpenShift Pipelines. 3.1.1. Jenkins terminology Jenkins offers declarative and scripted pipelines that are extensible using shared libraries and plugins. Some basic terms in Jenkins are as follows: Pipeline : Automates the entire process of building, testing, and deploying applications by using Groovy syntax. Node : A machine capable of either orchestrating or executing a scripted pipeline. Stage : A conceptually distinct subset of tasks performed in a pipeline. Plugins or user interfaces often use this block to display the status or progress of tasks. Step : A single task that specifies the exact action to be taken, either by using a command or a script. 3.1.2. OpenShift Pipelines terminology OpenShift Pipelines uses YAML syntax for declarative pipelines and consists of tasks. Some basic terms in OpenShift Pipelines are as follows: Pipeline : A set of tasks in a series, in parallel, or both. Task : A sequence of steps as commands, binaries, or scripts. PipelineRun : Execution of a pipeline with one or more tasks. TaskRun : Execution of a task with one or more steps. Note You can initiate a PipelineRun or a TaskRun with a set of inputs such as parameters and workspaces, and the execution results in a set of outputs and artifacts. Workspace : In OpenShift Pipelines, workspaces are conceptual blocks that serve the following purposes: Storage of inputs, outputs, and build artifacts. Common space to share data among tasks. Mount points for credentials held in secrets, configurations held in config maps, and common tools shared by an organization. Note In Jenkins, there is no direct equivalent of OpenShift Pipelines workspaces. You can think of the control node as a workspace, as it stores the cloned code repository, build history, and artifacts. When a job is assigned to a different node, the cloned code and the generated artifacts are stored in that node, but the control node maintains the build history. 3.1.3. Mapping of concepts The building blocks of Jenkins and OpenShift Pipelines are not equivalent, and a specific comparison does not provide a technically accurate mapping. The following terms and concepts in Jenkins and OpenShift Pipelines correlate in general: Table 3.1. Jenkins and OpenShift Pipelines - basic comparison Jenkins OpenShift Pipelines Pipeline Pipeline and PipelineRun Stage Task Step A step in a task 3.2. Migrating a sample pipeline from Jenkins to OpenShift Pipelines You can use the following equivalent examples to help migrate your build, test, and deploy pipelines from Jenkins to OpenShift Pipelines. 3.2.1. Jenkins pipeline Consider a Jenkins pipeline written in Groovy for building, testing, and deploying: pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } } 3.2.2. 
OpenShift Pipelines pipeline To create a pipeline in OpenShift Pipelines that is equivalent to the preceding Jenkins pipeline, you create the following three tasks: Example build task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: ["make"] workingDir: USD(workspaces.source.path) Example test task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: ["make", "check"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path) Example deploy task YAML definition file apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: ["make", "deploy"] workingDir: USD(workspaces.source.path) You can combine the three tasks sequentially to form a pipeline in OpenShift Pipelines: Example: OpenShift Pipelines pipeline for building, testing, and deployment apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir 3.3. Migrating from Jenkins plugins to Tekton Hub tasks You can extend the capability of Jenkins by using plugins . To achieve similar extensibility in OpenShift Pipelines, use any of the tasks available from Tekton Hub . For example, consider the git-clone task in Tekton Hub, which corresponds to the git plugin for Jenkins. Example: git-clone task from Tekton Hub apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source 3.4. Extending OpenShift Pipelines capabilities using custom tasks and scripts In OpenShift Pipelines, if you do not find the right task in Tekton Hub, or need greater control over tasks, you can create custom tasks and scripts to extend the capabilities of OpenShift Pipelines. Example: A custom task for running the maven test command apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: ["mvn", "test"] workingDir: USD(workspaces.source.path) Example: Run a custom shell script by providing its path ... steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh ... Example: Run a custom Python script by writing it in the YAML file ... steps: image: python script: | #!/usr/bin/env python3 print("hello from python!") ... 3.5. Comparison of Jenkins and OpenShift Pipelines execution models Jenkins and OpenShift Pipelines offer similar functions but are different in architecture and execution. Table 3.2. Comparison of execution models in Jenkins and OpenShift Pipelines Jenkins OpenShift Pipelines Jenkins has a controller node. Jenkins runs pipelines and steps centrally, or orchestrates jobs running in other nodes.
OpenShift Pipelines is serverless and distributed, and there is no central dependency for execution. Containers are launched by the Jenkins controller node through the pipeline. OpenShift Pipelines adopts a 'container-first' approach, where every step runs as a container in a pod (equivalent to nodes in Jenkins). Extensibility is achieved by using plugins. Extensibility is achieved by using tasks in Tekton Hub or by creating custom tasks and scripts. 3.6. Examples of common use cases Both Jenkins and OpenShift Pipelines offer capabilities for common CI/CD use cases, such as: Compiling, building, and deploying images using Apache Maven Extending the core capabilities by using plugins Reusing shareable libraries and custom scripts 3.6.1. Running a Maven pipeline in Jenkins and OpenShift Pipelines You can use Maven in both Jenkins and OpenShift Pipelines workflows for compiling, building, and deploying images. To map your existing Jenkins workflow to OpenShift Pipelines, consider the following examples: Example: Compile and build an image and deploy it to OpenShift using Maven in Jenkins #!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' } Example: Compile and build an image and deploy it to OpenShift using Maven in OpenShift Pipelines. 
apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: "USD(params.repo-url)" - name: subdirectory value: "" - name: deleteExisting value: "true" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["-DskipTests", "clean", "compile"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["test"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: "USD(params.context-path)" - name: GOALS value: ["package"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd "USD(params.context-path)" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json 3.6.2. Extending the core capabilities of Jenkins and OpenShift Pipelines by using plugins Jenkins has the advantage of a large ecosystem of numerous plugins developed over the years by its extensive user base. You can search and browse the plugins in the Jenkins Plugin Index . OpenShift Pipelines also has many tasks developed and contributed by the community and enterprise users. A publicly available catalog of reusable OpenShift Pipelines tasks is available in the Tekton Hub . In addition, OpenShift Pipelines incorporates many of the plugins of the Jenkins ecosystem within its core capabilities. For example, authorization is a critical function in both Jenkins and OpenShift Pipelines. While Jenkins ensures authorization using the Role-based Authorization Strategy plugin, OpenShift Pipelines uses OpenShift's built-in Role-based Access Control system. 3.6.3. Sharing reusable code in Jenkins and OpenShift Pipelines Jenkins shared libraries provide reusable code for parts of Jenkins pipelines. The libraries are shared between Jenkinsfiles to create highly modular pipelines without code repetition. Although there is no direct equivalent of Jenkins shared libraries in OpenShift Pipelines, you can achieve similar workflows by using tasks from the Tekton Hub in combination with custom tasks and scripts. 3.7. Additional resources Understanding OpenShift Pipelines Role-based Access Control
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/jenkins/migrating-from-jenkins-to-openshift-pipelines_images-other-jenkins-agent |
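A Pipeline such as the maven-pipeline above does not run on its own; it is started by a PipelineRun that supplies concrete parameter values and workspace volumes. The following PipelineRun is a sketch only, not part of the migrated example: the repository URL, revision, context path, and volume size are placeholder values, and the workspace bindings assume a cluster with a default storage class.

apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  generateName: maven-pipeline-run-   # let the cluster generate a unique name per run
spec:
  pipelineRef:
    name: maven-pipeline
  params:
  - name: repo-url
    value: https://github.com/example/helloworld.git   # placeholder repository URL
  - name: revision
    value: main                                        # placeholder branch or commit
  - name: context-path
    value: helloworld                                   # placeholder subdirectory in the repository
  workspaces:
  - name: shared-workspace
    volumeClaimTemplate:             # provision a fresh PVC for each run
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
  - name: maven-settings
    emptyDir: {}                     # or bind a ConfigMap that carries a settings.xml

Creating this object, for example with oc create -f, starts the tasks in the order defined by their runAfter clauses.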
Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] | Chapter 11. VolumeSnapshot [snapshot.storage.k8s.io/v1] Description VolumeSnapshot is a user's request for either creating a point-in-time snapshot of a persistent volume, or binding to a pre-existing snapshot. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. status object status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. 11.1.1. .spec Description spec defines the desired characteristics of a snapshot requested by a user. More info: https://kubernetes.io/docs/concepts/storage/volume-snapshots#volumesnapshots Required. Type object Required source Property Type Description source object source specifies where a snapshot will be created from. This field is immutable after creation. Required. volumeSnapshotClassName string VolumeSnapshotClassName is the name of the VolumeSnapshotClass requested by the VolumeSnapshot. VolumeSnapshotClassName may be left nil to indicate that the default SnapshotClass should be used. A given cluster may have multiple default Volume SnapshotClasses: one default per CSI Driver. If a VolumeSnapshot does not specify a SnapshotClass, VolumeSnapshotSource will be checked to figure out what the associated CSI Driver is, and the default VolumeSnapshotClass associated with that CSI Driver will be used. If more than one VolumeSnapshotClass exist for a given CSI Driver and more than one have been marked as default, CreateSnapshot will fail and generate an event. Empty string is not allowed for this field. 11.1.2. .spec.source Description source specifies where a snapshot will be created from. This field is immutable after creation. Required. Type object Property Type Description persistentVolumeClaimName string persistentVolumeClaimName specifies the name of the PersistentVolumeClaim object representing the volume from which a snapshot should be created. This PVC is assumed to be in the same namespace as the VolumeSnapshot object. This field should be set if the snapshot does not exists, and needs to be created. This field is immutable. volumeSnapshotContentName string volumeSnapshotContentName specifies the name of a pre-existing VolumeSnapshotContent object representing an existing volume snapshot. This field should be set if the snapshot already exists and only needs a representation in Kubernetes. This field is immutable. 11.1.3. 
.status Description status represents the current information of a snapshot. Consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. Type object Property Type Description boundVolumeSnapshotContentName string boundVolumeSnapshotContentName is the name of the VolumeSnapshotContent object to which this VolumeSnapshot object intends to bind to. If not specified, it indicates that the VolumeSnapshot object has not been successfully bound to a VolumeSnapshotContent object yet. NOTE: To avoid possible security issues, consumers must verify binding between VolumeSnapshot and VolumeSnapshotContent objects is successful (by validating that both VolumeSnapshot and VolumeSnapshotContent point at each other) before using this object. creationTime string creationTime is the timestamp when the point-in-time snapshot is taken by the underlying storage system. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "creation_time" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "creation_time" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. If not specified, it may indicate that the creation time of the snapshot is unknown. error object error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. readyToUse boolean readyToUse indicates if the snapshot is ready to be used to restore a volume. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "ready_to_use" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "ready_to_use" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it, otherwise, this field will be set to "True". If not specified, it means the readiness of a snapshot is unknown. restoreSize integer-or-string restoreSize represents the minimum size of volume required to create a volume from this snapshot. In dynamic snapshot creation case, this field will be filled in by the snapshot controller with the "size_bytes" value returned from CSI "CreateSnapshot" gRPC call. For a pre-existing snapshot, this field will be filled with the "size_bytes" value returned from the CSI "ListSnapshots" gRPC call if the driver supports it. When restoring a volume from this snapshot, the size of the volume MUST NOT be smaller than the restoreSize if it is specified, otherwise the restoration will fail. If not specified, it indicates that the size is unknown. volumeGroupSnapshotName string VolumeGroupSnapshotName is the name of the VolumeGroupSnapshot of which this VolumeSnapshot is a part of. 11.1.4. .status.error Description error is the last observed error during snapshot creation, if any. This field could be helpful to upper level controllers(i.e., application controller) to decide whether they should continue on waiting for the snapshot to be created based on the type of error reported. 
The snapshot controller will keep retrying when an error occurs during the snapshot creation. Upon success, this error field will be cleared. Type object Property Type Description message string message is a string detailing the encountered error during snapshot creation if specified. NOTE: message may be logged, and it should not contain sensitive information. time string time is the timestamp when the error was encountered. 11.2. API endpoints The following API endpoints are available: /apis/snapshot.storage.k8s.io/v1/volumesnapshots GET : list objects of kind VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots DELETE : delete collection of VolumeSnapshot GET : list objects of kind VolumeSnapshot POST : create a VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} DELETE : delete a VolumeSnapshot GET : read the specified VolumeSnapshot PATCH : partially update the specified VolumeSnapshot PUT : replace the specified VolumeSnapshot /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status GET : read status of the specified VolumeSnapshot PATCH : partially update status of the specified VolumeSnapshot PUT : replace status of the specified VolumeSnapshot 11.2.1. /apis/snapshot.storage.k8s.io/v1/volumesnapshots HTTP method GET Description list objects of kind VolumeSnapshot Table 11.1. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty 11.2.2. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots HTTP method DELETE Description delete collection of VolumeSnapshot Table 11.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind VolumeSnapshot Table 11.3. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshotList schema 401 - Unauthorized Empty HTTP method POST Description create a VolumeSnapshot Table 11.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.5. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.6. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 202 - Accepted VolumeSnapshot schema 401 - Unauthorized Empty 11.2.3. 
/apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name} Table 11.7. Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method DELETE Description delete a VolumeSnapshot Table 11.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 11.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified VolumeSnapshot Table 11.10. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified VolumeSnapshot Table 11.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.12. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified VolumeSnapshot Table 11.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.14. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.15. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty 11.2.4. /apis/snapshot.storage.k8s.io/v1/namespaces/{namespace}/volumesnapshots/{name}/status Table 11.16. Global path parameters Parameter Type Description name string name of the VolumeSnapshot HTTP method GET Description read status of the specified VolumeSnapshot Table 11.17. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified VolumeSnapshot Table 11.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.19. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified VolumeSnapshot Table 11.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 11.21. Body parameters Parameter Type Description body VolumeSnapshot schema Table 11.22. HTTP responses HTTP code Reponse body 200 - OK VolumeSnapshot schema 201 - Created VolumeSnapshot schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage_apis/volumesnapshot-snapshot-storage-k8s-io-v1 |
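As a concrete illustration of the schema above, the following minimal manifest requests a dynamically created snapshot of an existing persistent volume claim. This is a sketch rather than part of the API reference: the namespace, snapshot class, and claim names are placeholders that must match objects in your cluster, and the volumeSnapshotClassName line can be omitted to use the default class for the CSI driver.

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mydata-snapshot
  namespace: myproject                              # placeholder namespace
spec:
  volumeSnapshotClassName: csi-example-snapclass    # placeholder class name; omit to use the default
  source:
    persistentVolumeClaimName: mydata-pvc           # existing PVC in the same namespace

After creation, wait for status.readyToUse to report true and verify the binding between the VolumeSnapshot and its VolumeSnapshotContent object, as described above, before restoring a volume from the snapshot.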
7.8. Using Cloud-Init to Automate the Configuration of Virtual Machines | 7.8. Using Cloud-Init to Automate the Configuration of Virtual Machines Cloud-Init is a tool for automating the initial setup of virtual machines such as configuring the host name, network interfaces, and authorized keys. It can be used when provisioning virtual machines that have been deployed based on a template to avoid conflicts on the network. To use this tool, the cloud-init package must first be installed on the virtual machine. Once installed, the Cloud-Init service starts during the boot process to search for instructions on what to configure. You can then use options in the Run Once window to provide these instructions one time only, or options in the New Virtual Machine , Edit Virtual Machine and Edit Template windows to provide these instructions every time the virtual machine starts. Note Alternatively, you can configure Cloud-Init with Ansible , Python , Java , or Ruby . 7.8.1. Cloud-Init Use Case Scenarios Cloud-Init can be used to automate the configuration of virtual machines in a variety of scenarios. Several common scenarios are as follows: Virtual Machines Created Based on Templates You can use the Cloud-Init options in the Initial Run section of the Run Once window to initialize a virtual machine that was created based on a template. This allows you to customize the virtual machine the first time that virtual machine is started. Virtual Machine Templates You can use the Use Cloud-Init/Sysprep options in the Initial Run tab of the Edit Template window to specify options for customizing virtual machines created based on that template. Virtual Machine Pools You can use the Use Cloud-Init/Sysprep options in the Initial Run tab of the New Pool window to specify options for customizing virtual machines taken from that virtual machine pool. This allows you to specify a set of standard settings that will be applied every time a virtual machine is taken from that virtual machine pool. You can inherit or override the options specified for the template on which the virtual machine is based, or specify options for the virtual machine pool itself. 7.8.2. Installing Cloud-Init This procedure describes how to install Cloud-Init on a virtual machine. Once Cloud-Init is installed, you can create a template based on this virtual machine. Virtual machines created based on this template can leverage Cloud-Init functions, such as configuring the host name, time zone, root password, authorized keys, network interfaces, DNS service, etc on boot. Installing Cloud-Init Log in to the virtual machine. Enable the repositories: For Red Hat Enterprise Linux 6: For Red Hat Enterprise Linux 7: Install the cloud-init package and dependencies: 7.8.3. Using Cloud-Init to Prepare a Template As long as the cloud-init package is installed on a Linux virtual machine, you can use the virtual machine to make a cloud-init enabled template. Specify a set of standard settings to be included in a template as described in the following procedure or, alternatively, skip the Cloud-Init settings steps and configure them when creating a virtual machine based on this template. Note While the following procedure outlines how to use Cloud-Init when preparing a template, the same settings are also available in the New Virtual Machine , Edit Template , and Run Once windows. Using Cloud-Init to Prepare a Template Click Compute Templates and select a template. Click Edit . 
Click Show Advanced Options . Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box. Enter a host name in the VM Hostname text field. Select the Configure Time Zone check box and select a time zone from the Time Zone drop-down list. Expand the Authentication section. Select the Use already configured password check box to use the existing credentials, or clear that check box and enter a root password in the Password and Verify Password text fields to specify a new root password. Enter any SSH keys to be added to the authorized hosts file on the virtual machine in the SSH Authorized Keys text area. Select the Regenerate SSH Keys check box to regenerate SSH keys for the virtual machine. Expand the Networks section. Enter any DNS servers in the DNS Servers text field. Enter any DNS search domains in the DNS Search Domains text field. Select the In-guest Network Interface check box and use the + Add new and - Remove selected buttons to add or remove network interfaces to or from the virtual machine. Important You must specify the correct network interface name and number (for example, eth0 , eno3 , enp0s ). Otherwise, the virtual machine's interface connection will be up, but it will not have the cloud-init network configuration. Expand the Custom Script section and enter any custom scripts in the Custom Script text area. Click OK . You can now provision new virtual machines using this template. 7.8.4. Using Cloud-Init to Initialize a Virtual Machine Use Cloud-Init to automate the initial configuration of a Linux virtual machine. You can use the Cloud-Init fields to configure a virtual machine's host name, time zone, root password, authorized keys, network interfaces, and DNS service. You can also specify a custom script, a script in YAML format, to run on boot. The custom script allows for additional Cloud-Init configuration that is supported by Cloud-Init but not available in the Cloud-Init fields. For more information on custom script examples, see Cloud config examples . Using Cloud-Init to Initialize a Virtual Machine This procedure starts a virtual machine with a set of Cloud-Init settings. If the relevant settings are included in the template the virtual machine is based on, review the settings, make changes where appropriate, and click OK to start the virtual machine. Click Compute Virtual Machines and select a virtual machine. Click the Run drop-down button and select Run Once . Expand the Initial Run section and select the Cloud-Init check box. Enter a host name in the VM Hostname text field. Select the Configure Time Zone check box and select a time zone from the Time Zone drop-down menu. Select the Use already configured password check box to use the existing credentials, or clear that check box and enter a root password in the Password and Verify Password text fields to specify a new root password. Enter any SSH keys to be added to the authorized hosts file on the virtual machine in the SSH Authorized Keys text area. Select the Regenerate SSH Keys check box to regenerate SSH keys for the virtual machine. Enter any DNS servers in the DNS Servers text field. Enter any DNS search domains in the DNS Search Domains text field. Select the Network check box and use the + and - buttons to add or remove network interfaces to or from the virtual machine. Important You must specify the correct network interface name and number (for example, eth0 , eno3 , enp0s ). 
Otherwise, the virtual machine's interface connection will be up, but the cloud-init network configuration will not be defined in it. Enter a custom script in the Custom Script text area. Make sure the values specified in the script are appropriate; otherwise, the action fails. Click OK . Note To check if a virtual machine has Cloud-Init installed, select a virtual machine and click the Applications sub-tab. The Applications sub-tab is only shown if the guest agent is installed. | [
"subscription-manager repos --enable=rhel-6-server-rpms --enable=rhel-6-server-rh-common-rpms",
"subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-rh-common-rpms",
"yum install cloud-init"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/Using_Cloud-Init_to_Automate_the_Configuration_of_Virtual_Machines |
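The Custom Script text area described above accepts cloud-config YAML. The following sketch shows the kind of directives that might be entered there; it is an illustration rather than a recommended configuration, and the package names, file path, and service name are placeholders.

#cloud-config
# install additional packages on first boot (placeholder package names)
packages:
  - vim-enhanced
  - chrony
# write a marker file (placeholder path and content)
write_files:
  - path: /etc/motd
    content: |
      Provisioned by cloud-init
# run commands after the other cloud-init modules have completed
runcmd:
  - systemctl enable --now chronyd

As noted in the procedure, make sure any values in such a script are appropriate for the guest; otherwise the action fails.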
Using Two-Factor Authentication | Using Two-Factor Authentication Red Hat Customer Portal 1 Using two-factor authentication to access your Red Hat user account Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_hybrid_cloud_console/1-latest/html/using_two-factor_authentication/index |
5.4. Load Balancing Policy: Evenly_Distributed | 5.4. Load Balancing Policy: Evenly_Distributed Figure 5.1. Evenly Distributed Scheduling Policy An evenly distributed load balancing policy selects the host for a new virtual machine according to the lowest CPU load or the highest available memory. The maximum CPU load and minimum available memory that are allowed for hosts in a cluster for a set amount of time are defined by the evenly distributed scheduling policy's parameters. Beyond these limits, the environment's performance degrades. The evenly distributed policy allows an administrator to set these levels for running virtual machines. If a host reaches the defined maximum CPU load or minimum available memory and stays in that state for more than the set time, virtual machines on that host are migrated one by one to the host in the cluster that has the lowest CPU load or the highest available memory, depending on which parameter is being utilized. Host resources are checked once per minute, and one virtual machine is migrated at a time until the host CPU load falls below the defined limit or the host's available memory rises above the defined limit. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/load_balancing_policy_even_distribution |
Chapter 20. Access Control Lists | Chapter 20. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users on the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented. The Red Hat Enterprise Linux kernel provides ACL support for the ext3 file system and NFS-exported file systems. ACLs are also recognized on ext3 file systems accessed via Samba. Along with support in the kernel, the acl package is required to implement ACLs. It contains the utilities used to add, modify, remove, and retrieve ACL information. The cp and mv commands copy or move any ACLs associated with files and directories. 20.1. Mounting File Systems Before using ACLs for a file or directory, the partition for the file or directory must be mounted with ACL support. If it is a local ext3 file system, it can be mounted with the following command: mount -t ext3 -o acl device-name partition For example: mount -t ext3 -o acl /dev/VolGroup00/LogVol02 /work Alternatively, if the partition is listed in the /etc/fstab file, the entry for the partition can include the acl option: If an ext3 file system is accessed via Samba and ACLs have been enabled for it, the ACLs are recognized because Samba has been compiled with the --with-acl-support option. No special flags are required when accessing or mounting a Samba share. 20.1.1. NFS By default, if the file system being exported by an NFS server supports ACLs and the NFS client can read ACLs, then ACLs are utilized by the client system. To disable ACLs on an NFS share when mounting it on a client, mount it with the noacl option on the command line. | [
"LABEL=/work /work ext3 acl 1 2"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/ch-acls |
Chapter 7. Installation configuration parameters for Nutanix | Chapter 7. Installation configuration parameters for Nutanix Before you deploy an OpenShift Container Platform cluster on Nutanix, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 7.1. Available installation configuration parameters for Nutanix The following tables specify the required, optional, and Nutanix-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 7.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 7.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 7.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 7.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. 
An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 7.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 7.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. 
worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. 
Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 7.1.4. Additional Nutanix configuration parameters Additional Nutanix configuration parameters are described in the following table: Table 7.4. Additional Nutanix cluster parameters Parameter Description Values The name of a prism category key to apply to compute VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to compute VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The failure domains that apply to only compute machines. Failure domains are specified in platform.nutanix.failureDomains . List. The name of one or more failures domains. The type of identifier you use to select a project for compute VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid The name or UUID of a project with which compute VMs are associated. This parameter must be accompanied by the type parameter. String The boot type that the compute machines use. You must use the Legacy boot type in OpenShift Container Platform 4.16. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . The name of a prism category key to apply to control plane VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to control plane VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The failure domains that apply to only control plane machines. Failure domains are specified in platform.nutanix.failureDomains . List. The name of one or more failures domains. The type of identifier you use to select a project for control plane VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid The name or UUID of a project with which control plane VMs are associated. This parameter must be accompanied by the type parameter. 
String The name of a prism category key to apply to all VMs. This parameter must be accompanied by the value parameter, and both key and value parameters must exist in Prism Central. For more information on categories, see Category management . String The value of a prism category key-value pair to apply to all VMs. This parameter must be accompanied by the key parameter, and both key and value parameters must exist in Prism Central. String The failure domains that apply to both control plane and compute machines. Failure domains are specified in platform.nutanix.failureDomains . List. The name of one or more failures domains. The type of identifier you use to select a project for all VMs. Projects define logical groups of user roles for managing permissions, networks, and other parameters. For more information on projects, see Projects Overview . name or uuid . The name or UUID of a project with which all VMs are associated. This parameter must be accompanied by the type parameter. String The boot type for all machines. You must use the Legacy boot type in OpenShift Container Platform 4.16. For more information on boot types, see Understanding UEFI, Secure Boot, and TPM in the Virtualized Environment . Legacy , SecureBoot or UEFI . The default is Legacy . The virtual IP (VIP) address that you configured for control plane API access. IP address By default, the installation program installs cluster machines to a single Prism Element instance. You can specify additional Prism Element instances for fault tolerance, and then apply them to: The cluster's default machine configuration Only control plane or compute machine pools A list of configured failure domains. For more information on usage, see "Configuring a failure domain" in "Installing a cluster on Nutanix". The virtual IP (VIP) address that you configured for cluster ingress. IP address The Prism Central domain name or IP address. String The port that is used to log into Prism Central. String The password for the Prism Central user name. String The user name that is used to log into Prism Central. String The Prism Element domain name or IP address. [ 1 ] String The port that is used to log into Prism Element. String The universally unique identifier (UUID) for Prism Element. String The UUID of the Prism Element network that contains the virtual IP addresses and DNS records that you configured. [ 2 ] String Optional: By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image. If Prism Central does not have internet access, you can override the default behavior by hosting the RHCOS image on any HTTP server and pointing the installation program to the image. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, http://example.com/images/rhcos-47.83.202103221318-0-nutanix.x86_64.qcow2 The prismElements section holds a list of Prism Elements (clusters). A Prism Element encompasses all of the Nutanix resources, for example virtual machines and subnets, that are used to host the OpenShift Container Platform cluster. Only one subnet per Prism Element in an OpenShift Container Platform cluster is supported. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"publish:",
"sshKey:",
"compute: platform: nutanix: categories: key:",
"compute: platform: nutanix: categories: value:",
"compute: platform: nutanix: failureDomains:",
"compute: platform: nutanix: project: type:",
"compute: platform: nutanix: project: name: or uuid:",
"compute: platform: nutanix: bootType:",
"controlPlane: platform: nutanix: categories: key:",
"controlPlane: platform: nutanix: categories: value:",
"controlPlane: platform: nutanix: failureDomains:",
"controlPlane: platform: nutanix: project: type:",
"controlPlane: platform: nutanix: project: name: or uuid:",
"platform: nutanix: defaultMachinePlatform: categories: key:",
"platform: nutanix: defaultMachinePlatform: categories: value:",
"platform: nutanix: defaultMachinePlatform: failureDomains:",
"platform: nutanix: defaultMachinePlatform: project: type:",
"platform: nutanix: defaultMachinePlatform: project: name: or uuid:",
"platform: nutanix: defaultMachinePlatform: bootType:",
"platform: nutanix: apiVIP:",
"platform: nutanix: failureDomains: - name: prismElement: name: uuid: subnetUUIDs: -",
"platform: nutanix: ingressVIP:",
"platform: nutanix: prismCentral: endpoint: address:",
"platform: nutanix: prismCentral: endpoint: port:",
"platform: nutanix: prismCentral: password:",
"platform: nutanix: prismCentral: username:",
"platform: nutanix: prismElements: endpoint: address:",
"platform: nutanix: prismElements: endpoint: port:",
"platform: nutanix: prismElements: uuid:",
"platform: nutanix: subnetUUIDs:",
"platform: nutanix: clusterOSImage:"
] | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_nutanix/installation-config-parameters-nutanix |
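Taken together, the Nutanix-specific parameters above are expressed as a platform stanza in install-config.yaml. The following fragment is a sketch, not a complete installation configuration: every address, credential, and UUID is a placeholder that must be replaced with values from your own Prism Central and Prism Element environment.

platform:
  nutanix:
    apiVIP: 10.40.142.7                 # placeholder virtual IP for control plane API access
    ingressVIP: 10.40.142.8             # placeholder virtual IP for cluster ingress
    prismCentral:
      endpoint:
        address: prismcentral.example.com   # placeholder Prism Central address
        port: 9440
      username: admin                       # placeholder credentials
      password: <password>
    prismElements:
    - endpoint:
        address: prismelement.example.com   # placeholder Prism Element address
        port: 9440
      uuid: 0005b0f1-8f43-a0f2-02b7-3cecef193712   # placeholder Prism Element UUID
    subnetUUIDs:
    - c7938dc6-7659-453e-a688-e26020c68e43         # placeholder UUID of the subnet that holds the VIPs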
Chapter 2. Managing your cluster resources | Chapter 2. Managing your cluster resources You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster. 2.1. Interacting with your cluster resources You can interact with cluster resources by using the OpenShift CLI ( oc ) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or you have installed the oc CLI tool. Procedure To see which configuration Operators have been applied, run the following command: USD oc api-resources -o name | grep config.openshift.io To see what cluster resources you can configure, run the following command: USD oc explain <resource_name>.config.openshift.io To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command: USD oc get <resource_name>.config -o yaml To edit the cluster resource configuration, run the following command: USD oc edit <resource_name>.config -o yaml | [
"oc api-resources -o name | grep config.openshift.io",
"oc explain <resource_name>.config.openshift.io",
"oc get <resource_name>.config -o yaml",
"oc edit <resource_name>.config -o yaml"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/support/managing-cluster-resources |
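A cluster resource retrieved or edited this way is an ordinary YAML object owned by a configuration Operator. As a hypothetical illustration, a cluster-wide Proxy configuration resource might look roughly like the following; the proxy URLs and exclusions are placeholders, and the fields that are present depend on how your cluster is configured.

apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster                                 # the cluster-scoped singleton instance
spec:
  httpProxy: http://proxy.example.com:3128      # placeholder proxy endpoint
  httpsProxy: http://proxy.example.com:3128     # placeholder proxy endpoint
  noProxy: .cluster.local,.svc,localhost        # placeholder exclusion list

For example, oc edit proxy.config cluster opens this object for modification, and the owning Operator rolls the change out across the cluster.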
Chapter 4. Configuring IntelliJ to use Dependency Analytics | Chapter 4. Configuring IntelliJ to use Dependency Analytics You can gain access to Red Hat's Trusted Profile Analyzer service by using the Dependency Analytics plugin for Jet Brains' IntelliJ IDEA application. With this plugin, you get access to the latest open source vulnerability information and insights about your application's dependent packages. The Red Hat Dependency Analytics plugin uses the following data sources for the most up-to-date vulnerability information available: The ONGuard service integrates the Open Source Vulnerability (OSV) and the National Vulnerability Database (NVD) data sources. When given a set of packages, the ONGuard service queries OSV to retrieve the associated vulnerability information, and then queries NVD for public Common Vulnerabilities and Exposures (CVE) information. Dependency Analytics supports the following programming languages: Maven Node Python Go Note The Dependency Analytics extension is an online service maintained by Red Hat. Dependency Analytics only accesses your manifest files to analyze your application dependencies before displaying the results. Prerequisites Install IntelliJ IDEA on your workstation. For Maven projects, analyzing a pom.xml file, you must have the mvn binary in your system's PATH environment. For Node projects, analyzing a package.json file, you must have the npm binary in your system's PATH environment. For Go projects, analyzing a go.mod file, you must have the go binary in your system's PATH environment. For Python projects, analyzing a requirements.txt file, you must have the python3/pip3 or python/pip binaries in your system's PATH environment. Procedure Open the IntelliJ application. From the file menu, click Settings , and click Plugins . Search the Marketplace for Red Hat Dependency Analytics . Click the INSTALL button to install the plug-in. To start scanning your application for security vulnerabilities and view the vulnerability report, you can do one of the following: Open a manifest file, hover over a dependency marked by the inline Component Analysis, indicated by the wavy red line under a dependency, and click Detailed Vulnerability Report . Right-click a manifest file in the Project window, and click Dependency Analytics Report . Additional resources Red Hat's Dependency Analytics Jet Brains marketplace page . The GitHub project . | null | https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/quick_start_guide/configuring-intellij-to-use-dependency-analytics_qsg |
Chapter 1. Provisioning APIs | Chapter 1. Provisioning APIs 1.1. BMCEventSubscription [metal3.io/v1alpha1] Description BMCEventSubscription is the Schema for the fast eventing API Type object 1.2. BareMetalHost [metal3.io/v1alpha1] Description BareMetalHost is the Schema for the baremetalhosts API Type object 1.3. DataImage [metal3.io/v1alpha1] Description DataImage is the Schema for the dataimages API. Type object 1.4. FirmwareSchema [metal3.io/v1alpha1] Description FirmwareSchema is the Schema for the firmwareschemas API. Type object 1.5. HardwareData [metal3.io/v1alpha1] Description HardwareData is the Schema for the hardwaredata API. Type object 1.6. HostFirmwareComponents [metal3.io/v1alpha1] Description HostFirmwareComponents is the Schema for the hostfirmwarecomponents API. Type object 1.7. HostFirmwareSettings [metal3.io/v1alpha1] Description HostFirmwareSettings is the Schema for the hostfirmwaresettings API. Type object 1.8. Metal3Remediation [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3Remediation is the Schema for the metal3remediations API. Type object 1.9. Metal3RemediationTemplate [infrastructure.cluster.x-k8s.io/v1beta1] Description Metal3RemediationTemplate is the Schema for the metal3remediationtemplates API. Type object 1.10. PreprovisioningImage [metal3.io/v1alpha1] Description PreprovisioningImage is the Schema for the preprovisioningimages API. Type object 1.11. Provisioning [metal3.io/v1alpha1] Description Provisioning contains configuration used by the Provisioning service (Ironic) to provision baremetal hosts. Provisioning is created by the OpenShift installer using admin or user provided information about the provisioning network and the NIC on the server that can be used to PXE boot it. This CR is a singleton, created by the installer and currently only consumed by the cluster-baremetal-operator to bring up and update containers in a metal3 cluster. Type object | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/provisioning_apis/provisioning-apis |
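Of the resources listed above, BareMetalHost is the one most often created directly by an administrator. The following minimal manifest is a hypothetical sketch, not part of the API listing: the host name, BMC address, boot MAC address, and Secret name are placeholders, and the exact BMC address format depends on the management protocol your hardware exposes.

apiVersion: metal3.io/v1alpha1
kind: BareMetalHost
metadata:
  name: worker-0
  namespace: openshift-machine-api
spec:
  online: true                                    # power the host on once it is registered
  bootMACAddress: 00:11:22:33:44:55               # placeholder MAC of the provisioning NIC
  bmc:
    address: redfish://10.0.0.10/redfish/v1/Systems/1   # placeholder BMC endpoint
    credentialsName: worker-0-bmc-secret                # Secret holding the BMC username and password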
Chapter 3. Identity [user.openshift.io/v1] | Chapter 3. Identity [user.openshift.io/v1] Description Identity records a successful authentication of a user with an identity provider. The information about the source of authentication is stored on the identity, and the identity is then associated with a single user object. Multiple identities can reference a single user. Information retrieved from the authentication provider is stored in the extra field using a schema determined by the provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required providerName providerUserName user 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources extra object (string) Extra holds extra information about this identity kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata providerName string ProviderName is the source of identity information providerUserName string ProviderUserName uniquely represents this identity in the scope of the provider user ObjectReference User is a reference to the user this identity is associated with Both Name and UID must be set 3.2. API endpoints The following API endpoints are available: /apis/user.openshift.io/v1/identities DELETE : delete collection of Identity GET : list or watch objects of kind Identity POST : create an Identity /apis/user.openshift.io/v1/watch/identities GET : watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead. /apis/user.openshift.io/v1/identities/{name} DELETE : delete an Identity GET : read the specified Identity PATCH : partially update the specified Identity PUT : replace the specified Identity /apis/user.openshift.io/v1/watch/identities/{name} GET : watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 3.2.1. /apis/user.openshift.io/v1/identities HTTP method DELETE Description delete collection of Identity Table 3.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Identity Table 3.3. HTTP responses HTTP code Reponse body 200 - OK IdentityList schema 401 - Unauthorized Empty HTTP method POST Description create an Identity Table 3.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.5. Body parameters Parameter Type Description body Identity schema Table 3.6. HTTP responses HTTP code Response body 200 - OK Identity schema 201 - Created Identity schema 202 - Accepted Identity schema 401 - Unauthorized Empty 3.2.2. /apis/user.openshift.io/v1/watch/identities HTTP method GET Description watch individual changes to a list of Identity. deprecated: use the 'watch' parameter with a list operation instead. Table 3.7. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/user.openshift.io/v1/identities/{name} Table 3.8. Global path parameters Parameter Type Description name string name of the Identity HTTP method DELETE Description delete an Identity Table 3.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 3.10. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Identity Table 3.11. HTTP responses HTTP code Response body 200 - OK Identity schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Identity Table 3.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.13. HTTP responses HTTP code Response body 200 - OK Identity schema 201 - Created Identity schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Identity Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.15. Body parameters Parameter Type Description body Identity schema Table 3.16. HTTP responses HTTP code Response body 200 - OK Identity schema 201 - Created Identity schema 401 - Unauthorized Empty 3.2.4. /apis/user.openshift.io/v1/watch/identities/{name} Table 3.17. Global path parameters Parameter Type Description name string name of the Identity HTTP method GET Description watch changes to an object of kind Identity. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.18. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/user_and_group_apis/identity-user-openshift-io-v1
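In practice these endpoints are usually reached through the oc client rather than called directly. A short sketch follows; my_ldap_provider and jdoe are placeholder names, and cluster-admin privileges are assumed.
# Identities are cluster scoped; list them together with the users they map to
oc get identities
oc get users
# An Identity object is named <providerName>:<providerUserName>
oc get identity my_ldap_provider:jdoe -o yaml
# Removing a person cleanly usually means deleting both the identity and the user
oc delete identity my_ldap_provider:jdoe
oc delete user jdoe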
7.3. Configuring Identity and Authentication Providers for SSSD | 7.3. Configuring Identity and Authentication Providers for SSSD 7.3.1. Introduction to Identity and Authentication Providers for SSSD SSSD Domains. Identity and Authentication Providers Identity and authentication providers are configured as domains in the SSSD configuration file. A single domain can be used as: An identity provider (for user information) An authentication provider (for authentication requests) An access control provider (for authorization requests) A combination of these providers (if all the corresponding operations are performed within a single server) You can configure multiple domains for SSSD. At least one domain must be configured, otherwise SSSD will not start. The access_provider option in the /etc/sssd/sssd.conf file sets the access control provider used for the domain. By default, the option is set to permit , which always allows all access. See the sssd.conf (5) man page for details. Proxy Providers A proxy provider works as an intermediary relay between SSSD and resources that SSSD would otherwise not be able to use. When using a proxy provider, SSSD connects to the proxy service, and the proxy loads the specified libraries. Using a proxy provider, you can configure SSSD to use: Alternative authentication methods, such as a fingerprint scanner Legacy systems, such as NIS A local system account defined in /etc/passwd and remote authentication Available Combinations of Identity and Authentication Providers Table 7.1. Available Combinations of Identity and Authentication Providers Identity Provider Authentication Provider Identity Management [a] Identity Management [a] Active Directory [a] Active Directory [a] LDAP LDAP LDAP Kerberos proxy proxy proxy LDAP proxy Kerberos [a] An extension of the LDAP provider type. Note that this guide does not describe all provider types. See the following additional resources for more information: To configure an SSSD client for Identity Management, Red Hat recommends using the ipa-client-install utility. See Installing and Uninstalling Identity Management Clients in the Linux Domain Identity, Authentication, and Policy Guide . To configure an SSSD client for Identity Management manually without ipa-client-install , see Installing and Uninstalling an Identity Management Client Manually in Red Hat Knowledgebase. To configure Active Directory to be used with SSSD, see Using Active Directory as an Identity Provider for SSSD in the Windows Integration Guide . 7.3.2. Configuring an LDAP Domain for SSSD Prerequisites Install SSSD. Configure SSSD to Discover the LDAP Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the LDAP domain: Specify if you want to use the LDAP server as an identity provider, an authentication provider, or both. To use the LDAP server as an identity provider, set the id_provider option to ldap . To use the LDAP server as an authentication provider, set the auth_provider option to ldap . For example, to use the LDAP server as both: Specify the LDAP server. Choose one of the following: To explicitly define the server, specify the server's URI with the ldap_uri option: The ldap_uri option also accepts the IP address of the server. However, using an IP address instead of the server name might cause TLS/SSL connections to fail. See Configuring an SSSD Provider to Use an IP Address in the Certificate Subject Name in Red Hat Knowledgebase. 
To configure SSSD to discover the server dynamically using DNS service discovery, see Section 7.4.3, "Configuring DNS Service Discovery" . Optionally, specify backup servers in the ldap_backup_uri option as well. Specify the LDAP server's search base in the ldap_search_base option: Specify a way to establish a secure connection to the LDAP server. The recommended method is to use a TLS connection. To do this, enable the ldap_id_use_start_tls option, and use these CA certificate-related options: ldap_tls_reqcert specifies if the client requests a server certificate and what checks are performed on the certificate ldap_tls_cacert specifies the file containing the certificate Note SSSD always uses an encrypted channel for authentication, which ensures that passwords are never sent over the network unencrypted. With ldap_id_use_start_tls = true , identity lookups (such as commands based on the id or getent utilities) are also encrypted. Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries. For example: Additional Resources The above procedure shows the basic options for an LDAP provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-ldap (5) man page, which describes options specific to LDAP 7.3.3. Configuring the Files Provider for SSSD The files provider mirrors the content of the /etc/passwd and /etc/group files to make users and groups from these files available through SSSD. This enables you to set the sss database as the first source for users and groups in the /etc/nsswitch.conf file: With this setting, and if the files provider is configured in /etc/sssd/sssd.conf , Red Hat Enterprise Linux sends all queries for users and groups first to SSSD. If SSSD is not running or SSSD cannot find the requested entry, the system falls back to look up users and groups in the local files. If you store most users and groups in a central database, such as an LDAP directory, this setting increases the speed of user and group lookups. Prerequisites Install SSSD. Configure SSSD to Discover the Files Domain Add the following section to the /etc/sssd/sssd.conf file: Optionally, set the sss database as the first source for user and group lookups in the /etc/nsswitch.conf file: Configure the system so that the sssd service starts when the system boots: Restart the sssd service: Additional Resources The above procedure shows the basic options for the files provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-files (5) man page, which describes options specific to the files provider 7.3.4. Configuring a Proxy Provider for SSSD Prerequisites Install SSSD. Configure SSSD to Discover the Proxy Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the proxy provider: To specify an authentication provider: Set the auth_provider option to proxy . Use the proxy_pam_target option to specify a PAM service as the authentication proxy. For example: Important Ensure that the proxy PAM stack does not recursively include pam_sss.so . To specify an identity provider: Set the id_provider option to proxy . Use the proxy_lib_name option to specify an NSS library as the identity proxy. For example: Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries.
For example: Additional Resources The above procedure shows the basic options for a proxy provider. For more details, see the sssd.conf (5) man page, which describes global options available for all types of domains and other proxy-related options. 7.3.5. Configuring a Kerberos Authentication Provider Prerequisites Install SSSD. Configure SSSD to Discover the Kerberos Domain Open the /etc/sssd/sssd.conf file. Create a [domain] section for the SSSD domain. Specify an identity provider. For example, for details on configuring an LDAP identity provider, see Section 7.3.2, "Configuring an LDAP Domain for SSSD" . If the Kerberos principal names are not available in the specified identity provider, SSSD constructs the principals using the format username@REALM . Specify the Kerberos authentication provider details: Set the auth_provider option to krb5 . Specify the Kerberos server: To explicitly define the server, use the krb5_server option. The option accepts the host name or IP address of the server: To configure SSSD to discover the server dynamically using DNS service discovery, see Section 7.4.3, "Configuring DNS Service Discovery" . Optionally, specify backup servers in the krb5_backup_server option as well. If the Change Password service is not running on the KDC specified in krb5_server or krb5_backup_server , use the krb5_passwd option to specify the server where the service is running. If krb5_passwd is not used, SSSD uses the KDC specified in krb5_server or krb5_backup_server . Use the krb5_realm option to specify the name of the Kerberos realm. Add the new domain to the domains option in the [sssd] section. The option lists the domains that SSSD queries. For example: Additional Resources The above procedure shows the basic options for a Kerberos provider. For more details, see: the sssd.conf (5) man page, which describes global options available for all types of domains the sssd-krb5 (5) man page, which describes options specific to Kerberos | [
"yum install sssd",
"[domain/ LDAP_domain_name ]",
"[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap",
"[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com",
"[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc= example ,dc= com",
"[domain/ LDAP_domain_name ] id_provider = ldap auth_provider = ldap ldap_uri = ldap://ldap.example.com ldap_search_base = dc=example,dc=com ldap_id_use_start_tls = true ldap_tls_reqcert = demand ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt",
"domains = LDAP_domain_name , domain2",
"passwd: sss files group: sss files",
"yum install sssd",
"[domain/files] id_provider = files",
"passwd: sss files group: sss files",
"systemctl enable sssd",
"systemctl restart sssd",
"yum install sssd",
"[domain/ proxy_name ]",
"[domain/ proxy_name ] auth_provider = proxy proxy_pam_target = sssdpamproxy",
"[domain/ proxy_name ] id_provider = proxy proxy_lib_name = nis",
"domains = proxy_name , domain2",
"yum install sssd",
"[domain/ Kerberos_domain_name ]",
"[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5",
"[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kdc.example.com",
"[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kdc.example.com krb5_backup_server = kerberos.example.com krb5_passwd = kerberos.admin.example.com",
"[domain/ Kerberos_domain_name ] id_provider = ldap auth_provider = krb5 krb5_server = kerberos.example.com krb5_backup_server = kerberos2.example.com krb5_passwd = kerberos.admin.example.com krb5_realm = EXAMPLE.COM",
"domains = Kerberos_domain_name , domain2"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system-level_authentication_guide/configuring_domains |
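The snippets above each show an individual [domain] block. A working /etc/sssd/sssd.conf also needs an [sssd] section that names the services and domains, and the file must be readable only by root or SSSD refuses to start. The following is a minimal sketch, not taken from the original guide, that combines an LDAP identity provider with Kerberos authentication; all host names, the search base, and the realm are placeholders.
[sssd]
config_file_version = 2
services = nss, pam
domains = LDAP_domain_name

[domain/LDAP_domain_name]
id_provider = ldap
auth_provider = krb5
ldap_uri = ldap://ldap.example.com
ldap_search_base = dc=example,dc=com
ldap_id_use_start_tls = true
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/pki/tls/certs/ca-bundle.crt
krb5_server = kdc.example.com
krb5_realm = EXAMPLE.COM
After saving the file, tighten its permissions and start the service:
chown root:root /etc/sssd/sssd.conf
chmod 600 /etc/sssd/sssd.conf
systemctl enable sssd
systemctl restart sssd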
Chapter 8. OpenShift SDN network plugin | Chapter 8. OpenShift SDN network plugin 8.1. Enabling multicast for a project 8.1.1. About multicast With IP multicast, data is broadcast to many IP addresses simultaneously. Important At this time, multicast is best used for low-bandwidth coordination or service discovery and not a high-bandwidth solution. By default, network policies affect all connections in a namespace. However, multicast is unaffected by network policies. If multicast is enabled in the same namespace as your network policies, it is always allowed, even if there is a deny-all network policy. Cluster administrators should consider the implications to the exemption of multicast from network policies before enabling it. Multicast traffic between OpenShift Dedicated pods is disabled by default. If you are using the OVN-Kubernetes network plugin, you can enable multicast on a per-project basis. 8.1.2. Enabling multicast between pods You can enable multicast between pods for your project. Prerequisites Install the OpenShift CLI ( oc ). You must log in to the cluster with a user that has the cluster-admin or the dedicated-admin role. Procedure Run the following command to enable multicast for a project. Replace <namespace> with the namespace for the project you want to enable multicast for. USD oc annotate namespace <namespace> \ k8s.ovn.org/multicast-enabled=true Tip You can alternatively apply the following YAML to add the annotation: apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: "true" Verification To verify that multicast is enabled for a project, complete the following procedure: Change your current project to the project that you enabled multicast for. Replace <project> with the project name. USD oc project <project> Create a pod to act as a multicast receiver: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat hostname && sleep inf"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF Create a pod to act as a multicast sender: USD cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: ["/bin/sh", "-c"] args: ["dnf -y install socat && sleep inf"] EOF In a new terminal window or tab, start the multicast listener. Get the IP address for the Pod: USD POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}') Start the multicast listener by entering the following command: USD oc exec mlistener -i -t -- \ socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname Start the multicast transmitter. Get the pod network IP address range: USD CIDR=USD(oc get Network.config.openshift.io cluster \ -o jsonpath='{.status.clusterNetwork[0].cidr}') To send a multicast message, enter the following command: USD oc exec msender -i -t -- \ /bin/bash -c "echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64" If multicast is working, the command returns the following output: mlistener | [
"oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true",
"apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: \"true\"",
"oc project <project>",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF",
"cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi9 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF",
"POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')",
"oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname",
"CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')",
"oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"",
"mlistener"
] | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/openshift-sdn-network-plugin |
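To undo the change after testing, removing the annotation disables multicast for the project again; the short sketch below also cleans up the verification pods and assumes the project from the procedure is still the current one.
# The trailing dash removes the k8s.ovn.org/multicast-enabled annotation
oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled-
# Delete the sender and listener pods created during verification
oc delete pod mlistener msender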
Chapter 1. Installing a cluster on any platform | Chapter 1. Installing a cluster on any platform In OpenShift Container Platform version 4.13, you can install a cluster on any infrastructure that you provision, including virtualization and cloud environments. Important Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you attempt to install an OpenShift Container Platform cluster in virtualized or cloud environments. 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 1.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 1.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 1.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can choose between Red Hat Enterprise Linux CoreOS (RHCOS), Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 1.3.2. 
Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 1.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 1.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 1.3.4. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. 
Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 1.3.4.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 1.3.4.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 1.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 1.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 1.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. 
If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 1.3.5. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 1.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. 
DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 1.3.5.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 1.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 1.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. 
root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 1.3.6. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 1.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. 
Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 1.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 1.3.6.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 1.3. 
Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 1.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. 
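If the API and application Ingress load balancer runs on a Red Hat Enterprise Linux host with firewalld, the front-end ports listed in the tables above must also be opened on that host. A sketch under that assumption, not a required procedure; adjust it if a different firewall is in use.
# Open the load balancer front-end ports: Kubernetes API, machine config server, HTTPS, and HTTP
firewall-cmd --permanent --add-port=6443/tcp
firewall-cmd --permanent --add-port=22623/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --reload
# With SELinux in enforcing mode, allow HAProxy to bind to these non-default ports
setsebool -P haproxy_connect_any=1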
Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. 
See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 1.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 
1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 1.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. 
If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 1.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 1.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
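For example, if /usr/local/bin is already on your PATH and you unpacked the archive into the current directory, you can place the binary as follows. The target directory is only an example; any directory on your PATH works, and the client archive typically also contains a kubectl binary that you can move in the same way:

USD chmod +x oc
USD sudo mv oc /usr/local/bin/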
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 1.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 1.9.1. Sample install-config.yaml file for other platforms You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. 
If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for your platform. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 1.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 1.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. 
When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 1.10. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. 
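A quick listing of the installation directory confirms that the expected assets were generated. The following is a sketch; the file names shown assume the default assets that the installation program produces:

USD ls <installation_directory>
# Typical contents: auth/  bootstrap.ign  master.ign  metadata.json  worker.ign
USD ls <installation_directory>/auth
# Typical contents: kubeadmin-password  kubeconfig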
The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 1.11. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on bare metal infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. To install RHCOS on the machines, follow either the steps to use an ISO image or network PXE booting. Note The compute node deployment steps included in this installation document are RHCOS-specific. If you choose instead to deploy RHEL-based compute nodes, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Only RHEL 8 compute machines are supported. You can configure RHCOS during ISO and PXE installations by using the following methods: Kernel arguments: You can use kernel arguments to provide installation-specific information. For example, you can specify the locations of the RHCOS installation files that you uploaded to your HTTP server and the location of the Ignition config file for the type of node you are installing. For a PXE installation, you can use the APPEND parameter to pass the arguments to the kernel of the live installer. For an ISO installation, you can interrupt the live installation boot process to add the kernel arguments. In both installation cases, you can use special coreos.inst.* arguments to direct the live installer, as well as standard installation boot arguments for turning standard kernel services on or off. Ignition configs: OpenShift Container Platform Ignition config files ( *.ign ) are specific to the type of node you are installing. You pass the location of a bootstrap, control plane, or compute node Ignition config file during the RHCOS installation so that it takes effect on first boot. In special cases, you can create a separate, limited Ignition config to pass to the live system. That Ignition config could do a certain set of tasks, such as reporting success to a provisioning system after completing installation. This special Ignition config is consumed by the coreos-installer to be applied on first boot of the installed system. Do not provide the standard control plane and compute node Ignition configs to the live ISO directly. coreos-installer : You can boot the live ISO installer to a shell prompt, which allows you to prepare the permanent system in a variety of ways before first boot. In particular, you can run the coreos-installer command to identify various artifacts to include, work with disk partitions, and set up networking. In some cases, you can configure features on the live system and copy them to the installed system. Whether to use an ISO or PXE install depends on your situation. A PXE install requires an available DHCP service and more preparation, but can make the installation process more automated. An ISO install is a more manual process and can be inconvenient if you are setting up more than a few machines. 1.11.1. 
Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface.
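If you write the image to a USB drive instead of optical media, a utility such as dd can copy it directly to the device. This is only one possible approach; /dev/sdX is a placeholder for your unmounted USB device, and all existing data on that device is destroyed:

# Write the live ISO to a USB device (destructive to the target device)
USD sudo dd if=rhcos-<version>-live.x86_64.iso of=/dev/sdX bs=4M status=progress conv=fsync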
Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended.
However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.2. Installing RHCOS by using PXE or iPXE booting You can use PXE or iPXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE or iPXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
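As a convenience, you can check all three Ignition config files in one pass. The following loop is a sketch that assumes the files were uploaded to the web root of <HTTP_server> under the default names; adjust the URLs to match your layout. Each request should return an HTTP 200 status code:

USD for ign in bootstrap.ign master.ign worker.ign; do
      curl -k -s -o /dev/null -w "%{http_code}  ${ign}\n" "http://<HTTP_server>/${ign}"
  done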
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files is from the output of the openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE or iPXE installation for the RHCOS images and begin the installation. Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible: For PXE ( x86_64 ): 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server.
The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. For iPXE ( x86_64 + aarch64 ): 1 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The kernel parameter value is the location of the kernel file, the initrd=main argument is needed for booting on UEFI systems, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your HTTP server. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the kernel line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Note To network boot the CoreOS kernel on aarch64 architecture, you need to use a version of iPXE built with the IMAGE_GZIP option enabled. See IMAGE_GZIP option in iPXE . For PXE (with UEFI and Grub as second stage) on aarch64 : 1 Specify the locations of the RHCOS files that you uploaded to your HTTP/TFTP server. The kernel parameter value is the location of the kernel file on your TFTP server. The coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file on your HTTP server. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the location of the initramfs file that you uploaded to your TFTP server. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran.
Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 1.11.3. Advanced RHCOS installation configuration A key benefit of manually provisioning the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for OpenShift Container Platform is to be able to do configuration that is not available through default OpenShift Container Platform installation methods. This section describes some of the configurations that you can do using techniques that include: Passing kernel arguments to the live installer Running coreos-installer manually from the live system Customizing a live ISO or PXE boot image The advanced configuration topics for manual Red Hat Enterprise Linux CoreOS (RHCOS) installations detailed in this section relate to disk partitioning, networking, and using Ignition configs in different ways. 1.11.3.1. Using advanced networking options for PXE and ISO installations Networking for OpenShift Container Platform nodes uses DHCP by default to gather all necessary configuration settings. To set up static IP addresses or configure special settings, such as bonding, you can do one of the following: Pass special kernel parameters when you boot the live installer. Use a machine config to copy networking files to the installed system. Configure networking from a live installer shell prompt, then copy those settings to the installed system so that they take effect when the installed system first boots. To configure a PXE or iPXE installation, use one of the following options: See the "Advanced RHCOS installation reference" tables. Use a machine config to copy networking files to the installed system. To configure an ISO installation, use the following procedure. Procedure Boot the ISO installer. From the live system shell prompt, configure networking for the live system using available RHEL tools, such as nmcli or nmtui . Run the coreos-installer command to install the system, adding the --copy-network option to copy networking configuration. For example: USD sudo coreos-installer install --copy-network \ --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number> Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. Reboot into the installed system.
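For example, a static configuration created with nmcli in the live environment is carried over when you pass the --copy-network option. The connection name, interface addresses, Ignition URL, and target device below are illustrative only; substitute the values for your environment:

# From the live system shell prompt (example values)
USD sudo nmcli con mod "Wired connection 1" ipv4.method manual \
    ipv4.addresses 10.10.10.2/24 ipv4.gateway 10.10.10.254 ipv4.dns 4.4.4.41
USD sudo nmcli con up "Wired connection 1"
USD sudo coreos-installer install --copy-network \
    --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>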
Additional resources See Getting started with nmcli and Getting started with nmtui in the RHEL 8 documentation for more information about the nmcli and nmtui tools. 1.11.3.2. Disk partitioning Disk partitions are created on OpenShift Container Platform cluster nodes during the Red Hat Enterprise Linux CoreOS (RHCOS) installation. Each RHCOS node of a particular architecture uses the same partition layout, unless you override the default partitioning configuration. During the RHCOS installation, the size of the root file system is increased to use any remaining available space on the target device. Important The use of a custom partition scheme on your node might result in OpenShift Container Platform not monitoring or alerting on some node partitions. If you override the default partitioning, see Understanding OpenShift File System Monitoring (eviction conditions) for more information about how OpenShift Container Platform monitors your host file systems. OpenShift Container Platform monitors the following two filesystem identifiers: nodefs , which is the filesystem that contains /var/lib/kubelet imagefs , which is the filesystem that contains /var/lib/containers For the default partition scheme, nodefs and imagefs monitor the same root filesystem, / . To override the default partitioning when installing RHCOS on an OpenShift Container Platform cluster node, you must create separate partitions. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Consider a situation where you want to add a separate storage partition for your containers and container images. For example, by mounting /var/lib/containers in a separate partition, the kubelet separately monitors /var/lib/containers as the imagefs directory and the root file system as the nodefs directory. Important If you have resized your disk size to host a larger file system, consider creating a separate /var/lib/containers partition. Consider resizing a disk that has an xfs format to reduce CPU time issues caused by a high number of allocation groups. 1.11.3.2.1. Creating a separate /var partition In general, you should use the default disk partitioning that is created during the RHCOS installation. However, there are cases where you might want to create a separate partition for a directory that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var directory or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
The use of a separate partition for the /var directory or a subdirectory of /var also prevents data growth in the partitioned directory from filling up the root file system. The following procedure sets up a separate /var partition by adding a machine config manifest that is wrapped into the Ignition config file for a node type during the preparation phase of an installation. Procedure On your installation host, change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD openshift-install create manifests --dir <installation_directory> Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum offset value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no offset value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for compute nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Create the Ignition config files: USD openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory: The files in the <installation_directory>/manifest and <installation_directory>/openshift directories are wrapped into the Ignition config files, including the file that contains the 98-var-partition custom MachineConfig object. steps You can apply the custom disk partitioning by referencing the Ignition config files during the RHCOS installations. 1.11.3.2.2. Retaining existing partitions For an ISO installation, you can add options to the coreos-installer command that cause the installer to maintain one or more existing partitions. For a PXE installation, you can add coreos.inst.* options to the APPEND parameter to preserve partitions. Saved partitions might be data partitions from an existing OpenShift Container Platform system. 
You can identify the disk partitions you want to keep either by partition label or by number. Note If you save existing partitions, and those partitions do not leave enough space for RHCOS, the installation will fail without damaging the saved partitions. Retaining existing partitions during an ISO installation This example preserves any partition in which the partition label begins with data ( data* ): # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number> The following example illustrates running the coreos-installer in a way that preserves the sixth (6) partition on the disk: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign \ --save-partindex 6 /dev/disk/by-id/scsi-<serial_number> This example preserves partitions 5 and higher: # coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number> In the examples where partition saving is used, coreos-installer recreates the partition immediately. Retaining existing partitions during a PXE installation This APPEND option preserves any partition in which the partition label begins with 'data' ('data*'): coreos.inst.save_partlabel=data* This APPEND option preserves partitions 5 and higher: coreos.inst.save_partindex=5- This APPEND option preserves partition 6: coreos.inst.save_partindex=6 1.11.3.3. Identifying Ignition configs When doing an RHCOS manual installation, there are two types of Ignition configs that you can provide, with different reasons for providing each one: Permanent install Ignition config : Every manual RHCOS installation needs to pass one of the Ignition config files generated by openshift-installer , such as bootstrap.ign , master.ign and worker.ign , to carry out the installation. Important It is not recommended to modify these Ignition config files directly. You can update the manifest files that are wrapped into the Ignition config files, as outlined in examples in the preceding sections. For PXE installations, you pass the Ignition configs on the APPEND line using the coreos.inst.ignition_url= option. For ISO installations, after the ISO boots to the shell prompt, you identify the Ignition config on the coreos-installer command line with the --ignition-url= option. In both cases, only HTTP and HTTPS protocols are supported. Live install Ignition config : This type can be created by using the coreos-installer customize subcommand and its various options. With this method, the Ignition config passes to the live install medium, runs immediately upon booting, and performs setup tasks before or after the RHCOS system installs to disk. This method should only be used for performing tasks that must be done once and not applied again later, such as with advanced partitioning that cannot be done using a machine config. For PXE or ISO boots, you can create the Ignition config and APPEND the ignition.config.url= option to identify the location of the Ignition config. You also need to append ignition.firstboot ignition.platform.id=metal or the ignition.config.url option will be ignored. 1.11.3.4. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 
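Before the reference tables, the following sketch shows one way to supply a live install Ignition config of the kind described in "Identifying Ignition configs": embedding it into a copy of the live ISO with the coreos-installer iso ignition subcommands. The file name custom.ign is a placeholder for your live install Ignition config:

# Work on a copy so that the original live ISO stays unmodified
USD cp rhcos-<version>-live.x86_64.iso rhcos-custom.iso
USD coreos-installer iso ignition embed -i custom.ign rhcos-custom.iso
# Optionally inspect or remove the embedded config later
USD coreos-installer iso ignition show rhcos-custom.iso
USD coreos-installer iso ignition remove rhcos-custom.iso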
1.11.3.4.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. 
Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . 
Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Using network teaming Optional: You can use a network teaming as an alternative to bonding by using the team= parameter: The syntax for configuring a team interface is: team=name[:network_interfaces] name is the team device name ( team0 ) and network_interfaces represents a comma-separated list of physical (ethernet) interfaces ( em1, em2 ). Note Teaming is planned to be deprecated when RHCOS switches to an upcoming version of RHEL. For more information, see this Red Hat Knowledgebase Article . Use the following example to configure a network team: team=team0:em1,em2 ip=team0:dhcp 1.11.3.4.2. coreos-installer options for ISO and PXE installations You can install RHCOS by running coreos-installer install <options> <device> at the command prompt, after booting into the RHCOS live environment from an ISO image. The following table shows the subcommands, options, and arguments you can pass to the coreos-installer command. Table 1.9. coreos-installer subcommands, command-line options, and arguments coreos-installer install subcommand Subcommand Description USD coreos-installer install <options> <device> Embed an Ignition config in an ISO image. coreos-installer install subcommand options Option Description -u , --image-url <url> Specify the image URL manually. -f , --image-file <path> Specify a local image file manually. Used for debugging. -i, --ignition-file <path> Embed an Ignition config from a file. -I , --ignition-url <URL> Embed an Ignition config from a URL. --ignition-hash <digest> Digest type-value of the Ignition config. -p , --platform <name> Override the Ignition platform ID for the installed system. --console <spec> Set the kernel and bootloader console for the installed system. For more information about the format of <spec> , see the Linux kernel serial console documentation. --append-karg <arg>... Append a default kernel argument to the installed system. --delete-karg <arg>... Delete a default kernel argument from the installed system. -n , --copy-network Copy the network configuration from the install environment. Important The --copy-network option only copies networking configuration found under /etc/NetworkManager/system-connections . In particular, it does not copy the system hostname. 
--network-dir <path> For use with -n . Default is /etc/NetworkManager/system-connections/ . --save-partlabel <lx>.. Save partitions with this label glob. --save-partindex <id>... Save partitions with this number or range. --insecure Skip RHCOS image signature verification. --insecure-ignition Allow Ignition URL without HTTPS or hash. --architecture <name> Target CPU architecture. Valid values are x86_64 and aarch64 . --preserve-on-error Do not clear partition table on error. -h , --help Print help information. coreos-installer install subcommand argument Argument Description <device> The destination device. coreos-installer ISO subcommands Subcommand Description USD coreos-installer iso customize <options> <ISO_image> Customize a RHCOS live ISO image. coreos-installer iso reset <options> <ISO_image> Restore a RHCOS live ISO image to default settings. coreos-installer iso ignition remove <options> <ISO_image> Remove the embedded Ignition config from an ISO image. coreos-installer ISO customize subcommand options Option Description --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --dest-karg-append <arg> Add a kernel argument to each boot of the destination system. --dest-karg-delete <arg> Delete a kernel argument from each boot of the destination system. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. --ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. --post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. --live-karg-append <arg> Add a kernel argument to each boot of the live environment. --live-karg-delete <arg> Delete a kernel argument from each boot of the live environment. --live-karg-replace <k=o=n> Replace a kernel argument in each boot of the live environment, in the form key=old=new . -f , --force Overwrite an existing Ignition config. -o , --output <path> Write the ISO to a new output file. -h , --help Print help information. coreos-installer PXE subcommands Subcommand Description Note that not all of these options are accepted by all subcommands. coreos-installer pxe customize <options> <path> Customize a RHCOS live PXE boot config. coreos-installer pxe ignition wrap <options> Wrap an Ignition config in an image. coreos-installer pxe ignition unwrap <options> <image_name> Show the wrapped Ignition config in an image. coreos-installer PXE customize subcommand options Option Description Note that not all of these options are accepted by all subcommands. --dest-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the destination system. --dest-console <spec> Specify the kernel and bootloader console for the destination system. --dest-device <path> Install and overwrite the specified destination device. --network-keyfile <path> Configure networking by using the specified NetworkManager keyfile for live and destination systems. 
--ignition-ca <path> Specify an additional TLS certificate authority to be trusted by Ignition. --pre-install <path> Run the specified script before installation. post-install <path> Run the specified script after installation. --installer-config <path> Apply the specified installer configuration file. --live-ignition <path> Merge the specified Ignition config file into a new configuration fragment for the live environment. -o, --output <path> Write the initramfs to a new output file. Note This option is required for PXE environments. -h , --help Print help information. 1.11.3.4.3. coreos.inst boot options for ISO or PXE installations You can automatically invoke coreos-installer options at boot time by passing coreos.inst boot arguments to the RHCOS live installer. These are provided in addition to the standard boot arguments. For ISO installations, the coreos.inst options can be added by interrupting the automatic boot at the bootloader menu. You can interrupt the automatic boot by pressing TAB while the RHEL CoreOS (Live) menu option is highlighted. For PXE or iPXE installations, the coreos.inst options must be added to the APPEND line before the RHCOS live installer is booted. The following table shows the RHCOS live installer coreos.inst boot options for ISO and PXE installations. Table 1.10. coreos.inst boot options Argument Description coreos.inst.install_dev Required. The block device on the system to install to. It is recommended to use the full path, such as /dev/sda , although sda is allowed. coreos.inst.ignition_url Optional: The URL of the Ignition config to embed into the installed system. If no URL is specified, no Ignition config is embedded. Only HTTP and HTTPS protocols are supported. coreos.inst.save_partlabel Optional: Comma-separated labels of partitions to preserve during the install. Glob-style wildcards are permitted. The specified partitions do not need to exist. coreos.inst.save_partindex Optional: Comma-separated indexes of partitions to preserve during the install. Ranges m-n are permitted, and either m or n can be omitted. The specified partitions do not need to exist. coreos.inst.insecure Optional: Permits the OS image that is specified by coreos.inst.image_url to be unsigned. coreos.inst.image_url Optional: Download and install the specified RHCOS image. This argument should not be used in production environments and is intended for debugging purposes only. While this argument can be used to install a version of RHCOS that does not match the live media, it is recommended that you instead use the media that matches the version you want to install. If you are using coreos.inst.image_url , you must also use coreos.inst.insecure . This is because the bare-metal media are not GPG-signed for OpenShift Container Platform. Only HTTP and HTTPS protocols are supported. coreos.inst.skip_reboot Optional: The system will not reboot after installing. After the install finishes, you will receive a prompt that allows you to inspect what is happening during installation. This argument should not be used in production environments and is intended for debugging purposes only. coreos.inst.platform_id Optional: The Ignition platform ID of the platform the RHCOS image is being installed on. Default is metal . This option determines whether or not to request an Ignition config from the cloud provider, such as VMware. For example: coreos.inst.platform_id=vmware . ignition.config.url Optional: The URL of the Ignition config for the live boot. 
For example, this can be used to customize how coreos-installer is invoked, or to run code before or after the installation. This is different from coreos.inst.ignition_url , which is the Ignition config for the installed system. 1.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 1.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 1.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. 
Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 1.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Configure the Operators that are not available. 1.15.1. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 1.15.2. 
Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 1.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 1.15.3.1. Configuring registry storage for bare metal and other manual installations As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster that uses manually-provisioned Red Hat Enterprise Linux CoreOS (RHCOS) nodes, such as bare metal. You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed . 1.15.3.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 1.15.3.3. Configuring block registry storage for bare metal To allow the image registry to use block storage types during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes, or block persistent volumes, are supported but not recommended for use with the image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. If you choose to use a block storage volume with the image registry, you must use a filesystem persistent volume claim (PVC). Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only one ( 1 ) replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 1.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Post-installation machine configuration tasks documentation for more information. 1.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 1.18. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.13/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd --name main http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img 3 boot",
"menuentry 'Install CoreOS' { linux rhcos-<version>-live-kernel-<architecture> coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 1 2 initrd rhcos-<version>-live-initramfs.<architecture>.img 3 }",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"sudo coreos-installer install --copy-network --ignition-url=http://host/worker.ign /dev/disk/by-id/scsi-<serial_number>",
"openshift-install create manifests --dir <installation_directory>",
"variant: openshift version: 4.13.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partlabel 'data*' /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 6 /dev/disk/by-id/scsi-<serial_number>",
"coreos-installer install --ignition-url http://10.0.2.2:8080/user.ign --save-partindex 5- /dev/disk/by-id/scsi-<serial_number>",
"coreos.inst.save_partlabel=data*",
"coreos.inst.save_partindex=5-",
"coreos.inst.save_partindex=6",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"team=team0:em1,em2 ip=team0:dhcp",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.26.0 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.26.0 master-1 Ready master 63m v1.26.0 master-2 Ready master 64m v1.26.0",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.26.0 master-1 Ready master 73m v1.26.0 master-2 Ready master 74m v1.26.0 worker-0 Ready worker 11m v1.26.0 worker-1 Ready worker 11m v1.26.0",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.13 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.13.0 True False False 19m baremetal 4.13.0 True False False 37m cloud-credential 4.13.0 True False False 40m cluster-autoscaler 4.13.0 True False False 37m config-operator 4.13.0 True False False 38m console 4.13.0 True False False 26m csi-snapshot-controller 4.13.0 True False False 37m dns 4.13.0 True False False 37m etcd 4.13.0 True False False 36m image-registry 4.13.0 True False False 31m ingress 4.13.0 True False False 30m insights 4.13.0 True False False 31m kube-apiserver 4.13.0 True False False 26m kube-controller-manager 4.13.0 True False False 36m kube-scheduler 4.13.0 True False False 36m kube-storage-version-migrator 4.13.0 True False False 37m machine-api 4.13.0 True False False 29m machine-approver 4.13.0 True False False 37m machine-config 4.13.0 True False False 36m marketplace 4.13.0 True False False 37m monitoring 4.13.0 True False False 29m network 4.13.0 True False False 38m node-tuning 4.13.0 True False False 37m openshift-apiserver 4.13.0 True False False 32m openshift-controller-manager 4.13.0 True False False 30m openshift-samples 4.13.0 True False False 32m operator-lifecycle-manager 4.13.0 True False False 37m operator-lifecycle-manager-catalog 4.13.0 True False False 37m operator-lifecycle-manager-packageserver 4.13.0 True False False 32m service-ca 4.13.0 True False False 38m storage 4.13.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_any_platform/installing-platform-agnostic |
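The coreos-installer iso customize options tabulated in the entry above are listed individually but never combined into a full invocation. The following is a minimal sketch only; the input and output ISO names, the worker.ign path, the target device, and the console argument are assumed values for illustration and do not come from the original reference:

$ coreos-installer iso customize \
    --dest-ignition ./worker.ign \
    --dest-device /dev/sda \
    --dest-karg-append console=ttyS0,115200n8 \
    -o rhcos-live-custom.x86_64.iso \
    rhcos-live.x86_64.iso

Every flag shown here appears in the customize options table above. Writing the result to a new file with -o keeps the downloaded live ISO unmodified, which makes it easier to produce differently customized images from the same base.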
Chapter 15. Managing Certificate System Users and Groups | Chapter 15. Managing Certificate System Users and Groups This chapter explains how to set up authorization for access to the administrative, agent services, and end-entities pages. 15.1. About Authorization Authorization is the process of allowing access to certain tasks associated with the Certificate System. Access can be limited to allow certain tasks to certain areas of the subsystem for certain users or groups and different tasks to different users and groups. Users are specific to the subsystem in which they are created. Each subsystem has its own set of users independent of any other subsystem installed. The users are placed in groups, which can be predefined or user-created. Privileges are assigned to a group through access control lists (ACLs). There are ACLs associated with areas in the administrative console, agent services interface, and end-entities page that perform an authorization check before allowing an operation to proceed. Access control instructions (ACIs) in each of the ACLs are created that specifically allow or deny possible operations for that ACL to specified users, groups, or IP addresses. The ACLs contain a default set of ACIs for the default groups that are created. These ACIs can be modified to change the privileges of predefined groups or to assign privileges to newly-created groups. Authorization goes through the following process: The users authenticate to the interface using either the Certificate System user ID and password or a certificate. The server authenticates the user either by matching the user ID and password with the one stored in the database or by checking the certificate against one stored in the database. With certificate-based authentication, the server also checks that the certificate is valid and finds the group membership of the user by associating the DN of the certificate with a user and checking the user entry. With password-based authentication, the server checks the password against the user ID and then finds the group membership of the user by associating that user ID with the user ID contained in the group. When the user tries to perform an operation, the authorization mechanism compares the user ID of the user, the group in which the user belongs, or the IP address of the user to the ACLs set for that user, group, or IP address. If an ACL exists that allows that operation, then the operation proceeds. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/user_and_group_authorization |
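To make the ACL and ACI relationship described above concrete, the following line is a purely illustrative sketch of the shape such an entry takes. The resource name, operations, group, and description are invented for this example, and the authoritative syntax is the one used by the ACL entries shipped with each subsystem, not this sketch:

certServer.ca.certificates:list,revoke:allow (list,revoke) group="Certificate Manager Agents":Agents may list and revoke certificates

Read in the terms used above: the ACL names a protected resource and its possible operations, and the embedded ACI allows those operations only to members of the named group. If an ACI in the matching ACL allows the operation for the requesting user, group, or IP address, the operation proceeds.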
5.9.2. Moving Files and Directories | 5.9.2. Moving Files and Directories Files and directories keep their current SELinux context when they are moved. In many cases, this is incorrect for the location they are being moved to. The following example demonstrates moving a file from a user's home directory to /var/www/html/ , which is used by the Apache HTTP Server. Since the file is moved, it does not inherit the correct SELinux context: Run the cd command without any arguments to change into your home directory. Once in your home directory, run the touch file1 command to create a file. This file is labeled with the user_home_t type: Run the ls -dZ /var/www/html/ command to view the SELinux context of the /var/www/html/ directory: By default, the /var/www/html/ directory is labeled with the httpd_sys_content_t type. Files and directories created under the /var/www/html/ directory inherit this type, and as such, they are labeled with this type. As the Linux root user, run the mv file1 /var/www/html/ command to move file1 to the /var/www/html/ directory. Since this file is moved, it keeps its current user_home_t type: By default, the Apache HTTP Server cannot read files that are labeled with the user_home_t type. If all files comprising a web page are labeled with the user_home_t type, or another type that the Apache HTTP Server cannot read, permission is denied when attempting to access them via web browsers, such as Firefox. Important Moving files and directories with the mv command may result in the incorrect SELinux context, preventing processes, such as the Apache HTTP Server and Samba, from accessing such files and directories. | [
"~]USD ls -Z file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 file1",
"~]USD ls -dZ /var/www/html/ drwxr-xr-x root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/",
"~]# mv file1 /var/www/html/ ~]# ls -Z /var/www/html/file1 -rw-rw-r-- user1 group1 unconfined_u:object_r:user_home_t:s0 /var/www/html/file1"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/sect-security-enhanced_linux-maintaining_selinux_labels_-moving_files_and_directories |
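The excerpt above ends with the moved file still labeled with the user_home_t type. As a short follow-on sketch that is not part of the original excerpt: the conventional way to reset the file to the default type for its new location (httpd_sys_content_t under /var/www/html/, as shown earlier) is the restorecon utility, run as the Linux root user:

~]# restorecon -v /var/www/html/file1

After this command, ls -Z /var/www/html/file1 shows the httpd_sys_content_t type, and the Apache HTTP Server can read the file.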
Preface | Preface As a developer of business decisions, you can use Red Hat Decision Manager to develop decision services using Decision Model and Notation (DMN) models, Drools Rule Language (DRL) rules, guided decision tables, and other rule-authoring assets. | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/pr01 |
probe::stap.pass4.end | probe::stap.pass4.end Name probe::stap.pass4.end - Finished stap pass4 (compile C code into kernel module) Synopsis stap.pass4.end Values session the systemtap_session variable s Description pass4.end fires just before the jump to cleanup if s.last_pass = 4 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass4-end |
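As a minimal usage sketch for the entry above, and not part of the reference text itself: because this probe point is a marker inside the stap translator, a second stap session can attach to it and report when a monitored stap run finishes pass 4. The message text is arbitrary, and the session value is not printed because it is the internal systemtap_session object rather than a plain string:

$ stap -e 'probe stap.pass4.end { printf("stap pass 4 (C code compiled into kernel module) finished\n") }'

Leave this command running while another stap invocation compiles a module; a line is printed each time that invocation completes pass 4.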
Chapter 9. Scheduler Tapset | Chapter 9. Scheduler Tapset This family of probe points is used to probe the task scheduler activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/sched-dot-stp |
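For example, the following one-liner counts context switches for five seconds using the ctxswitch probe point from this family; exact probe point names can vary slightly between tapset versions, so treat this as a sketch:
stap -e 'global n; probe scheduler.ctxswitch { n++ } probe timer.s(5) { printf("%d context switches in 5 seconds\n", n); exit() }'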
Chapter 8. Assigning roles to hosts | Chapter 8. Assigning roles to hosts You can assign roles to your discovered hosts. These roles define the function of the host within the cluster. The roles can be one of the standard Kubernetes types: control plane (master) or worker . The host must meet the minimum requirements for the role you selected. You can find the hardware requirements by referring to the Prerequisites section of this document or using the preflight requirement API. If you do not select a role, the system selects one for you. You can change the role at any time before installation starts. 8.1. Selecting a role by using the web console You can select a role after the host finishes its discovery. Procedure Go to the Host Discovery tab and scroll down to the Host Inventory table. Select the Auto-assign drop-down for the required host. Select Control plane node to assign this host a control plane role. Select Worker to assign this host a worker role. Check the validation status. 8.2. Selecting a role by using the API You can select a role for the host by using the /v2/infra-envs/{infra_env_id}/hosts/{host_id} endpoint. A host can have one of the following roles: master : A host with the master role operates as a control plane node. worker : A host with the worker role operates as a worker node. By default, the Assisted Installer sets a host to auto-assign , which means the Assisted Installer determines automatically whether the host has a master or worker role. Use this procedure to set the host's role. Prerequisites You have added hosts to the cluster. Procedure Refresh the API token: Get the host IDs: Example output Modify the host_role setting: Replace <host_id> with the ID of the host. 8.3. Auto-assigning roles Assisted Installer selects a role automatically for hosts if you do not assign a role yourself. The role selection mechanism factors in the host's memory, CPU, and disk space. It aims to assign a control plane role to the weakest hosts that meet the minimum requirements for control plane nodes. The number of control planes you specify in the cluster definition determines the number of control plane nodes that the Assisted Installer assigns. For details, see Setting the cluster details . All other hosts default to worker nodes. The goal is to provide enough resources to run the control plane and reserve the more capacity-intensive hosts for running the actual workloads. You can override the auto-assign decision at any time before installation. The validations make sure that the automatic selection is a valid one. 8.4. Additional resources Prerequisites | [
"source refresh-token",
"curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'",
"[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{INFRA_ENV_ID}/hosts/<host_id> -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"host_role\":\"worker\" } ' | jq"
] | https://docs.redhat.com/en/documentation/assisted_installer_for_openshift_container_platform/2025/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_role-assignment |
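To return a host to automatic role selection later, you can send the same PATCH request with the auto-assign value, assuming your API version accepts it as an explicit host_role; <host_id> is again a placeholder for the ID of the host:
curl https://api.openshift.com/api/assisted-install/v2/infra-envs/${INFRA_ENV_ID}/hosts/<host_id> \
  -X PATCH \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -H "Content-Type: application/json" \
  -d ' { "host_role":"auto-assign" } ' | jq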
Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 | Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 The steps to upgrade to the latest Red Hat JBoss Core Services (JBCS) release differ depending on whether you previously installed JBCS from RPM packages or from an archive file. Upgrading JBCS when installed from RPM packages If you installed an earlier release of the JBCS Apache HTTP Server from RPM packages on RHEL 7 or RHEL 8 by using the yum groupinstall command, you can upgrade to the latest release. You can use the yum groupupdate command to upgrade to the 2.4.57 release on RHEL 7 or RHEL 8. Note JBCS does not provide an RPM distribution of the Apache HTTP Server on RHEL 9. Upgrading JBCS when installed from an archive file If you installed an earlier release of the JBCS Apache HTTP Server from an archive file, you must perform the following steps to upgrade to the Apache HTTP Server 2.4.57: Install the Apache HTTP Server 2.4.57. Set up the Apache HTTP Server 2.4.57. Remove the earlier version of Apache HTTP Server. The following procedure describes the recommended steps for upgrading a JBCS Apache HTTP Server 2.4.51 release that you installed from archive files to the latest 2.4.57 release. Prerequisites If you are using Red Hat Enterprise Linux, you have root user access. If you are using Windows Server, you have administrative access. The Red Hat JBoss Core Services Apache HTTP Server 2.4.51 or earlier was previously installed in your system from an archive file. Procedure Shut down any running instances of Red Hat JBoss Core Services Apache HTTP Server 2.4.51. Back up the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 installation and configuration files. Install the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 using the .zip installation method for the current system (see Additional Resources below). Migrate your configuration from the Red Hat JBoss Core Services Apache HTTP Server version 2.4.51 to version 2.4.57. Note The Apache HTTP Server configuration files might have changed since the Apache HTTP Server 2.4.51 release. Consider updating the 2.4.57 version configuration files rather than overwrite them with the configuration files from a different version, such as the Apache HTTP Server 2.4.51. Remove the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 root directory. Additional Resources Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_service_pack_6_release_notes/upgrading-to-the-jbcs-http-2.4.57-release-notes |
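The following sketch shows the two upgrade paths in shell form. The package group name jbcs-httpd24 and the installation path under /opt are assumptions based on common JBCS defaults and should be checked against your own installation:
# RPM installation on RHEL 7 or RHEL 8
sudo yum groupupdate jbcs-httpd24
# Archive installation: back up the old installation before installing 2.4.57 (example path)
sudo tar czf /opt/jbcs-httpd-2.4.51-backup.tar.gz /opt/jbcs-httpd-2.4.51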
Chapter 1. Overview | Chapter 1. Overview This guide describes how to setup federation in a high availability Red Hat OpenStack Platform director environment, using a Red Hat Single Sign-On (RH-SSO) server for authentication services. 1.1. Operational Goals By following this guide, your OpenStack deployment's authentication service will be federated with RH-SSO, and will include the following characteristics: Federation will be based on Security Assertion Markup Language (SAML). The Identity Provider (IdP) is RH-SSO, and will be situated externally to the Red Hat OpenStack Platform deployment. The RH-SSO IdP uses Red Hat Identity Management (IdM) as the federated user backing store. As a result, users and groups are managed in IdM, and RH-SSO will reference the user and group information that is stored in IdM. Your IdM users will be authorized to access OpenStack when they are added to the IdM group: openstack-users . OpenStack Keystone will have a group named federated_users . Members of the federated_users group will have the Member role, which grants them permission to access the project. During the federated authentication process, members of the IdM group openstack-users are mapped into the OpenStack group federated_users . As a result, an IdM user will need to be a member of the openstack-users group in order to access OpenStack; if the user is not a member of the IdM group openstack-users , then authentication will fail. 1.2. Assumptions This guide makes the following assumptions about your deployment: A RH-SSO server is present, and you either have administrative privileges on the server, or the RH-SSO administrator has created a realm for you and given you administrative privileges on that realm. Since federated IdPs are external by definition, the RH-SSO server is assumed to be external to the Red Hat OpenStack Platform director overcloud. An IdM server is present, and also external to the Red Hat OpenStack Platform director overcloud where users and groups are managed. RH-SSO will use IdM as its User Federation backing store. The OpenStack deployment is based on Red Hat OpenStack Platform director. The Red Hat OpenStack Platform director overcloud installation uses high availability (HA) features. Only the Red Hat OpenStack Platform director overcloud will have federation enabled; the undercloud is not federated. TLS encryption is used for all external communication. All nodes have a Fully Qualified Domain Name (FQDN). HAProxy terminates TLS front-end connections, and servers running behind HAProxy do not use TLS. Pacemaker is used to manage some of the overcloud services, including HAProxy. Red Hat OpenStack Platform director has an overcloud deployed. You are able to SSH into the undercloud and overcloud nodes. The examples described in the Keystone Federation Configuration Guide will be followed. On the undercloud-0 node, you will install the helper files into the home directory of the stack user, and work in the stack user home directory. On the controller-0 node, you will install the helper files into the home directory of the heat-admin user, and work in the heat-admin user home directory. 1.3. Prerequisites The RH-SSO server has been configured and is external to the Red Hat OpenStack Platform director overcloud. The IdM deployment is external to the Red Hat OpenStack Platform director overcloud. Red Hat OpenStack Platform director has an overcloud deployed. 
Reinstall mod_auth_mellon If mod_auth_mellon was previously installed on your controller nodes (perhaps because it was included in a base image used to instantiate the controller nodes) you might need to reinstall it. This is a consequence of the way in which Puppet manages Apache modules, where the Puppet Apache class will remove any Apache configuration files not under Puppet's control. Note that Apache will not start if these files have been removed, and it will raise errors about unknown Mellon files. At the time of this writing, mod_auth_mellon remains outside of Puppet's control. See Section 4.14, "Prevent Puppet From Deleting Unmanaged HTTPD Files" for information on how to prevent Puppet from removing Apache configuration files. To check if Puppet removed any of the files belonging to the mod_auth_mellon RPM, you can perform a query to validate the mod_auth_mellon package, for example: If RPM indicates these configuration files are absent, then Puppet has removed them. You can then restore the files: For more information, see BZ#1434875 and BZ#1497718 1.4. Accessing the OpenStack Nodes As the root user, SSH into the node hosting the OpenStack deployment. For example: SSH into the undercloud node: Become the stack user: Source the overcloud configuration to enable the required OpenStack environment variables: Note Currently, Red Hat OpenStack Platform director sets up Keystone to use the Keystone v2 API but you will be using the Keystone v3 API. Later on in the guide you will create an overcloudrc.v3 file. From that point on you should use the v3 version of the overcloudrc file. See Section 4.8, "Use the Keystone Version 3 API" for more information. After sourcing overcloudrc , you can issue commands using the openstack command line tool, which will operate against the overcloud (even though you are still logged into an undercloud node). If you need to directly access one of the overcloud nodes, you can SSH to it as the heat-admin user. For example: 1.5. Understanding High Availability Detailed information on high availability can be found in the High Availability Deployment and Usage guide. Red Hat OpenStack Platform director distributes redundant copies of various OpenStack services across the overcloud deployment. These redundant services are deployed on the overcloud controller nodes, with director naming these nodes controller-0 , controller-1 , controller-2 , and so on, depending on how many controller nodes Red Hat OpenStack Platform director has configured. The IP addresses of the controller nodes are private to the overcloud and are not externally visible. This is because the services running on the controller nodes are HAProxy back-end servers. There is one publicly visible IP address for the set of controller nodes; this is HAProxy's front end. When a request arrives for a service on the public IP address, HAProxy will select a back-end server to service the request. The overcloud is organized as a high availability cluster. Pacemaker manages the cluster, performs health checks, and can fail over to another cluster resource if the resource stops functioning. Pacemaker is also aware of how to correctly start and stop resources. 1.5.1. HAProxy Overview HAProxy serves a similar role to Pacemaker, as it also performs health checks on the back-end servers and only forwards requests to functioning back-end servers. There is a copy of HAProxy running on all controller nodes.
Although there are N copies of HAProxy running, only one is actually fielding requests at any given time; this active HAProxy instance is managed by Pacemaker. This approach helps prevent conflicts from occurring, and allows multiple copies of HAProxy to coordinate the distribution of requests across multiple back-ends. If Pacemaker detects HAProxy has failed, it reassigns the front-end IP address to a different HAProxy instance which then becomes the controlling HAProxy instance. You might think of it as high availability for high availability. The instances of HAProxy that are kept in reserve by Pacemaker are running, but they never see an incoming connection because Pacemaker has configured the networking so that connections only route to the active HAProxy instance. 1.5.2. Managing Pacemaker Services Services that are managed by Pacemaker must not be managed by systemctl on a controller node. Use the Pacemaker pcs command instead, for example: sudo pcs resource restart haproxy-clone . You can determine the resource name using the Pacemaker status command: sudo pcs status . This will print a result similar to this: 1.5.3. Using the Configuration Script Many of the steps in this guide require the execution of complicated commands, so to make that task easier (and to allow for repeatability) all the commands have been gathered into a master shell script called configure-federation . Each individual step can be executed by passing the name of the step to configure-federation . The list of possible commands can be seen by using the help option ( -h or --help ). Note You can find the script here: Chapter 6, The configure-federation file It can be useful to know exactly what the command will be after variable substitution occurs, when the configure-federation script executes: -n is dry-run mode: nothing will be modified, the exact operation will instead be written to stdout. -v is verbose mode: the exact operation will be written to stdout just prior to executing it. This is useful for logging. 1.5.4. Site-specific Values Certain values used in this guide are site-specific; it may otherwise have been confusing to include these site-specific values directly into this guide, and may have been a source of errors for someone attempting to replicate these steps. To address this, any site-specific values referenced in this guide are in the form of a variable. The variable name starts with a dollar-sign ( USD ) and is all-caps with a prefix of FED_ . For example, the URL used to access the RH-SSO server would be: USDFED_RHSSO_URL Note You can find the variables file here: Chapter 7, The fed_variables file Site-specific values can always be identified by searching for USDFED_ Site-specific values used by the configure-federation script are gathered into the file fed_variables . You will need to edit this file to suit your deployment. 1.6. Using a Proxy or SSL terminator When a server is behind a proxy, the environment it sees is different to what the client sees as the public identity of the server. A back-end server may have a different hostname, listen on a different port, or use a different protocol than what a client sees on the front side of the proxy. For many web apps this is not a major problem. Typically most of the problems occur when a server has to generate a self-referential URL (perhaps because it will redirect the client to a different URL on the same server). The URL the server generates must match the public address and port as seen by the client. 
Authentication protocols are especially sensitive to the host, port and protocol (for example, HTTP/HTTPS) because they often need to assure a request was targeted at a specific server, on a specific port and on a secure transport. Proxies can interfere with this vital information, because by definition a proxy transforms a request received on its public front-end before dispatching it to a non-public server in the back-end. Similarly, responses from the non-public back-end server sometimes need adjustment so that it appears as if the response came from the public front-end of the proxy. There are various approaches to solving this problem. Because SAML is sensitive to host, port, and protocol information, and because you are configuring SAML behind a high availability proxy (HAProxy), you must deal with these issues or your configuration will likely fail (often in cryptic ways). 1.6.1. Hostname and Port Considerations The host and port details are used in multiple contexts: The host and port in the URL used by the client. The host HTTP header inserted into the HTTP request (as derived from the client URL host). The host name of the front-facing proxy the client connects to. This is actually the FQDN of the IP address that the proxy is listening on. The host and port of the back-end server which actually handled the client request. The virtual host and port of the server that actually handled the client request. It is important to understand how each of these values are used, otherwise there is a risk that the wrong host and port are used, with the result that the authentication protocols may fail because they cannot validate the parties involved in the transaction. You can begin by considering the back-end server handling the request, because this is where the host and port are evaluated, and where most of the problems can occur: The back-end server needs to know: The URL of the request (including host and port). Its own host and port. Apache supports virtual name hosting, which allows a single server to host multiple domains. For example, a server running on example.com might service requests for both example.com and example-2.com , with these being virtual host names. Virtual hosts in Apache are configured inside a server configuration block, for example: When Apache receives a request, it gathers the host information from the HOST HTTP header, and then tries to match the host to the ServerName in its collection of virtual hosts. The ServerName directive defines the request scheme, hostname, and port that the server uses to identify itself. The behavior of the ServerName directive is modified by the UseCanonicalName directive. When UseCanonicalName is enabled, Apache will use the hostname and port specified in the ServerName directive to construct the canonical name for the server. This name is used in all self-referential URLs, and for the values of SERVER_NAME and SERVER_PORT in CGIs. If UseCanonicalName is Off , Apache will form self-referential URLs using the hostname and port supplied by the client, if any are supplied. If no port is specified in the ServerName , then the server will use the port from the incoming request. For optimal reliability and predictability, you should specify an explicit hostname and port using the ServerName directive. If no ServerName is specified, the server attempts to deduce the host by first asking the operating system for the system host name, and if that fails, performing a reverse lookup for an IP address present on the system. 
Consequently, this will produce the wrong host information when the server is behind a proxy; therefore, the use of the ServerName directive is essential. The Apache ServerName documentation is clear concerning the need to fully specify the scheme, host, and port in the ServerName directive when the server is behind a proxy, where it states: Sometimes, the server runs behind a device that processes SSL, such as a reverse proxy, load balancer or SSL offload appliance. When this is the case, specify the https:// scheme and the port number to which the clients connect in the ServerName directive to make sure that the server generates the correct self-referential URLs. When proxies are in effect, they use X-Forwarded-* HTTP headers to allow the entity processing the request to recognize that the request was forwarded, and what the original values were before they were forwarded. The Red Hat OpenStack Platform director HAProxy configuration sets the X-Forwarded-Proto HTTP header based on whether the front connection used SSL/TLS or not, using this configuration: In addition, Apache does not interpret this header, so responsibility falls to another component to process it properly. In the situation where HAProxy terminates SSL prior to the back-end server processing the request, it is irrelevant that the X-Forwarded-Proto HTTP header is set to HTTPS, because Apache does not use the header when an extension module (such as mellon ) asks for the protocol scheme of the request. This is why it is essential to have the ServerName directive include the scheme://host:port and that UseCanonicalName is enabled; otherwise, Apache extension modules such as mod_auth_mellon will not function properly behind a proxy. With regard to web apps hosted by Apache behind a proxy, it is the web app's (or rather the web app framework's) responsibility to process the forwarded header. Consequently, apps handle the protocol scheme of a forwarded request differently than Apache extension modules do. Since Dashboard (horizon) is a Django web app, it is Django's responsibility to process the X-Forwarded-Proto header. This issue arises with the origin query parameter used by horizon during authentication. Horizon adds an origin query parameter to the keystone URL it invokes to perform authentication. The origin parameter is used by horizon to redirect back to the original resource. The origin parameter generated by horizon may incorrectly specify HTTP as the scheme instead of https despite the fact that horizon is running with HTTPS enabled. This occurs because Horizon calls the function build_absolute_uri() to form the origin parameter. It is entirely up to Django to identify the scheme because build_absolute_uri() is ultimately implemented by Django. You can force Django to process the X-Forwarded-Proto header using a special configuration directive. This is covered in the Django secure-proxy-ssl-header documentation. You can enable this setting by uncommenting this line in /var/lib/config-data/puppet-generated/horizon/etc/openstack-dashboard/local_settings : Note Note that Django prefixes the header with "HTTP_" , and converts hyphens to underscores. After uncommenting, the Origin parameter will correctly use the HTTPS scheme. However, even when the ServerName directive includes the HTTPS scheme, the Django call build_absolute_uri() will not use the HTTPS scheme. So for Django you must use the SECURE_PROXY_SSL_HEADER override; simply specifying the scheme in the ServerName directive will not work.
It is important to note that the Apache extension modules and web apps process the request scheme of a forwarded request differently, requiring that both the ServerName and X-Forwarded-Proto HTTP header techniques be used. | [
"rpm -qV mod_auth_mellon missing c /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.d/auth_mellon.conf missing c /var/lib/config-data/puppet-generated/keystone/etc/httpd/conf.modules.d/10-auth_mellon.conf",
"sudo dnf reinstall mod_auth_mellon",
"ssh root@xxx",
"ssh undercloud-0",
"su - stack",
"source overcloudrc",
"ssh heat-admin@controller-0",
"Clone Set: haproxy-clone [haproxy] Started: [ controller-1 ] Stopped: [ controller-0 ]",
"<VirtualHost> ServerName example.com </VirtualHost>",
"http-request set-header X-Forwarded-Proto https if { ssl_fc } http-request set-header X-Forwarded-Proto http if !{ ssl_fc }",
"#SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')"
] | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/federate_with_identity_service/introduction |
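As a minimal illustration of the ServerName guidance above, the Keystone virtual host on each controller would carry an explicit https scheme, the public FQDN of the HAProxy front end, and the front-end port. The host name and port below are placeholders for your deployment's values, not defaults taken from director:
<VirtualHost>
    ServerName https://overcloud.example.com:13000
    UseCanonicalName On
</VirtualHost>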
7.63. glusterfs | 7.63. glusterfs 7.63.1. RHBA-2015:0683 - glusterfs bug fix update Updated glusterfs packages that fix one bug are now available for Red Hat Enterprise Linux 6. GlusterFS is a key building block of Red Hat Storage. It is based on a stackable user-space design and can deliver exceptional performance for diverse workloads. GlusterFS aggregates various storage servers over network interconnections into one large, parallel network file system. Bug Fix BZ# 1204589 Previously, the qemu-kvm utility could terminate unexpectedly with a segmentation fault after the user attempted to create an image on GlusterFS using the "qemu-img create" command. The glusterfs packages' source code has been modified to fix this bug, and qemu-kvm no longer crashes in the described situation. Users of glusterfs are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-glusterfs
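The operation affected by this fix is image creation over the gluster protocol. A representative invocation is shown below; the server name, volume name, and image path are placeholders:
qemu-img create -f qcow2 gluster://gluster-server.example.com/myvolume/disk01.qcow2 10G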
Chapter 20. Managing local storage using RHEL System Roles | Chapter 20. Managing local storage using RHEL System Roles To manage LVM and local file systems (FS) using Ansible, you can use the storage role, which is one of the RHEL System Roles available in RHEL 8. Using the storage role enables you to automate administration of file systems on disks and logical volumes on multiple machines and across all versions of RHEL starting with RHEL 7.7. For more information about RHEL System Roles and how to apply them, see Introduction to RHEL System Roles . 20.1. Introduction to the storage RHEL System Role The storage role can manage: File systems on disks which have not been partitioned Complete LVM volume groups including their logical volumes and file systems MD RAID volumes and their file systems With the storage role, you can perform the following tasks: Create a file system Remove a file system Mount a file system Unmount a file system Create LVM volume groups Remove LVM volume groups Create logical volumes Remove logical volumes Create RAID volumes Remove RAID volumes Create LVM volume groups with RAID Remove LVM volume groups with RAID Create encrypted LVM volume groups Create LVM logical volumes with RAID 20.2. Parameters that identify a storage device in the storage RHEL System Role Your storage role configuration affects only the file systems, volumes, and pools that you list in the following variables. storage_volumes List of file systems on all unpartitioned disks to be managed. storage_volumes can also include raid volumes. Partitions are currently unsupported. storage_pools List of pools to be managed. Currently the only supported pool type is LVM. With LVM, pools represent volume groups (VGs). Under each pool there is a list of volumes to be managed by the role. With LVM, each volume corresponds to a logical volume (LV) with a file system. 20.3. Example Ansible playbook to create an XFS file system on a block device This section provides an example Ansible playbook. This playbook applies the storage role to create an XFS file system on a block device using the default parameters. Warning The storage role can create a file system only on an unpartitioned, whole disk or a logical volume (LV). It cannot create the file system on a partition. Example 20.1. A playbook that creates XFS on /dev/sdb The volume name ( barefs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. You can omit the fs_type: xfs line because XFS is the default file system in RHEL 8. To create the file system on an LV, provide the LVM setup under the disks: attribute, including the enclosing volume group. For details, see Example Ansible playbook to manage logical volumes . Do not provide the path to the LV device. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.4. Example Ansible playbook to persistently mount a file system This section provides an example Ansible playbook. This playbook applies the storage role to immediately and persistently mount an XFS file system. Example 20.2. A playbook that mounts a file system on /dev/sdb to /mnt/data This playbook adds the file system to the /etc/fstab file, and mounts the file system immediately. If the file system on the /dev/sdb device or the mount point directory do not exist, the playbook creates them. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.5. 
Example Ansible playbook to manage logical volumes This section provides an example Ansible playbook. This playbook applies the storage role to create an LVM logical volume in a volume group. Example 20.3. A playbook that creates a mylv logical volume in the myvg volume group The myvg volume group consists of the following disks: /dev/sda /dev/sdb /dev/sdc If the myvg volume group already exists, the playbook adds the logical volume to the volume group. If the myvg volume group does not exist, the playbook creates it. The playbook creates an Ext4 file system on the mylv logical volume, and persistently mounts the file system at /mnt . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.6. Example Ansible playbook to enable online block discard This section provides an example Ansible playbook. This playbook applies the storage role to mount an XFS file system with online block discard enabled. Example 20.4. A playbook that enables online block discard on /mnt/data/ Additional resources Example Ansible playbook to persistently mount a file system The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.7. Example Ansible playbook to create and mount an Ext4 file system This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext4 file system. Example 20.5. A playbook that creates Ext4 on /dev/sdb and mounts it at /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.8. Example Ansible playbook to create and mount an ext3 file system This section provides an example Ansible playbook. This playbook applies the storage role to create and mount an Ext3 file system. Example 20.6. A playbook that creates Ext3 on /dev/sdb and mounts it at /mnt/data The playbook creates the file system on the /dev/sdb disk. The playbook persistently mounts the file system at the /mnt/data directory. The label of the file system is label-name . Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.9. Example Ansible playbook to resize an existing Ext4 or Ext3 file system using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage role to resize an existing Ext4 or Ext3 file system on a block device. Example 20.7. A playbook that sets up a single volume on a disk If the volume in the example already exists, you can resize it by running the same playbook with a different value for the size parameter. For example: Example 20.8. A playbook that resizes ext4 on /dev/sdb The volume name (barefs in the example) is currently arbitrary. The Storage role identifies the volume by the disk device listed under the disks: attribute. Note Using the Resizing action in other file systems can destroy the data on the device you are working on. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.10. Example Ansible playbook to resize an existing file system on LVM using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to resize an LVM logical volume with a file system.
Warning Using the Resizing action in other file systems can destroy the data on the device you are working on. Example 20.9. A playbook that resizes existing mylv1 and mylv2 logical volumes in the myvg volume group This playbook resizes the following existing file systems: The Ext4 file system on the mylv1 volume, which is mounted at /opt/mount1 , resizes to 10 GiB. The Ext4 file system on the mylv2 volume, which is mounted at /opt/mount2 , resizes to 50 GiB. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.11. Example Ansible playbook to create a swap volume using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage role to create a swap volume, if it does not exist, or to modify the swap volume, if it already exists, on a block device using the default parameters. Example 20.10. A playbook that creates or modifies an existing swap volume on /dev/sdb The volume name ( swap_fs in the example) is currently arbitrary. The storage role identifies the volume by the disk device listed under the disks: attribute. Additional resources The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.12. Configuring a RAID volume using the storage System Role With the storage System Role, you can configure a RAID volume on RHEL using Red Hat Ansible Automation Platform and Ansible-Core. Create an Ansible playbook with the parameters to configure a RAID volume to suit your requirements. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file detailing the systems on which you want to deploy a RAID volume using the storage System Role. Procedure Create a new playbook.yml file with the following content: --- - name: Configure the storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present Warning Device names might change in certain circumstances, for example, when you add a new disk to a system. Therefore, to prevent data loss, do not use specific disk names in the playbook. Optional: Verify the playbook syntax: Run the playbook: Additional resources Managing RAID The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file Preparing a control node and managed nodes to use RHEL System Roles . 20.13. Configuring an LVM pool with RAID using the storage RHEL System Role With the storage System Role, you can configure an LVM pool with RAID on RHEL using Red Hat Ansible Automation Platform. In this section you will learn how to set up an Ansible playbook with the available parameters to configure an LVM pool with RAID. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file detailing the systems on which you want to configure an LVM pool with RAID using the storage System Role.
Procedure Create a new playbook.yml file with the following content: - hosts: all vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_pool size: "1 GiB" mount_point: "/mnt/app/shared" fs_type: xfs state: present roles: - name: rhel-system-roles.storage Note To create an LVM pool with RAID, you must specify the RAID type using the raid_level parameter. Optional. Verify playbook syntax. Run the playbook on your inventory file: Additional resources Managing RAID . The /usr/share/ansible/roles/rhel-system-roles.storage/README.md file. 20.14. Example Ansible playbook to compress and deduplicate a VDO volume on LVM using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage RHEL System Role to enable compression and deduplication of Logical Volumes (LVM) using Virtual Data Optimizer (VDO). Example 20.11. A playbook that creates a mylv1 LVM VDO volume in the myvg volume group In this example, the compression and deduplication options are set to true, which specifies that VDO is used. The following describes the usage of these parameters: The deduplication is used to deduplicate the duplicated data stored on the storage volume. The compression is used to compress the data stored on the storage volume, which results in more storage capacity. The vdo_pool_size specifies the actual size the volume takes on the device. The virtual size of the VDO volume is set by the size parameter. NOTE: Because the storage role uses LVM VDO, only one volume per pool can use compression and deduplication. 20.15. Creating a LUKS2 encrypted volume using the storage RHEL System Role You can use the storage role to create and configure a volume encrypted with LUKS by running an Ansible playbook. Prerequisites Access and permissions to one or more managed nodes, which are systems you want to configure with the storage System Role. An inventory file, which lists the managed nodes. Access and permissions to a control node, which is a system from which Red Hat Ansible Core configures other systems. On the control node, the ansible-core and rhel-system-roles packages are installed. Important RHEL 8.0-8.5 provided access to a separate Ansible repository that contains Ansible Engine 2.9 for automation based on Ansible. Ansible Engine contains command-line utilities such as ansible , ansible-playbook , connectors such as docker and podman , and many plugins and modules. For information about how to obtain and install Ansible Engine, see the How to download and install Red Hat Ansible Engine Knowledgebase article. RHEL 8.6 and 9.0 have introduced Ansible Core (provided as the ansible-core package), which contains the Ansible command-line utilities, commands, and a small set of built-in Ansible plugins. RHEL provides this package through the AppStream repository, and it has a limited scope of support. For more information, see the Scope of support for the Ansible Core package included in the RHEL 9 and RHEL 8.6 and later AppStream repositories Knowledgebase article. Procedure Create a new playbook.yml file with the following content: You can also add the other encryption parameters such as encryption_key , encryption_cipher , encryption_key_size , and encryption_luks_version in the playbook.yml file.
Optional: Verify playbook syntax: Run the playbook on your inventory file: Verification View the encryption status: Verify the created LUKS encrypted volume: View the cryptsetup parameters in the playbook.yml file, which the storage role supports: Additional resources Encrypting block devices using LUKS /usr/share/ansible/roles/rhel-system-roles.storage/README.md file 20.16. Example Ansible playbook to express pool volume sizes as percentage using the storage RHEL System Role This section provides an example Ansible playbook. This playbook applies the storage System Role to enable you to express Logical Volume Manager (LVM) volume sizes as a percentage of the pool's total size. Example 20.12. A playbook that expresses volume sizes as a percentage of the pool's total size This example specifies the size of LVM volumes as a percentage of the pool size, for example: "60%". Additionally, you can also specify the size of LVM volumes in human-readable units of the file system, for example, "10g" or "50 GiB". 20.17. Additional resources /usr/share/doc/rhel-system-roles/storage/ /usr/share/ansible/roles/rhel-system-roles.storage/ | [
"--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs roles: - rhel-system-roles.storage",
"--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data roles: - rhel-system-roles.storage",
"- hosts: all vars: storage_pools: - name: myvg disks: - sda - sdb - sdc volumes: - name: mylv size: 2G fs_type: ext4 mount_point: /mnt/data roles: - rhel-system-roles.storage",
"--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs mount_point: /mnt/data mount_options: discard roles: - rhel-system-roles.storage",
"--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext4 fs_label: label-name mount_point: /mnt/data roles: - rhel-system-roles.storage",
"--- - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: ext3 fs_label: label-name mount_point: /mnt/data roles: - rhel-system-roles.storage",
"--- - name: Create a disk device mounted on /opt/barefs - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - /dev/sdb size: 12 GiB fs_type: ext4 mount_point: /opt/barefs roles: - rhel-system-roles.storage",
"--- - name: Create a disk device mounted on /opt/barefs - hosts: all vars: storage_volumes: - name: barefs type: disk disks: - /dev/sdb size: 10 GiB fs_type: ext4 mount_point: /opt/barefs roles: - rhel-system-roles.storage",
"--- - hosts: all vars: storage_pools: - name: myvg disks: - /dev/sda - /dev/sdb - /dev/sdc volumes: - name: mylv1 size: 10 GiB fs_type: ext4 mount_point: /opt/mount1 - name: mylv2 size: 50 GiB fs_type: ext4 mount_point: /opt/mount2 - name: Create LVM pool over three disks include_role: name: rhel-system-roles.storage",
"--- - name: Create a disk device with swap - hosts: all vars: storage_volumes: - name: swap_fs type: disk disks: - /dev/sdb size: 15 GiB fs_type: swap roles: - rhel-system-roles.storage",
"--- - name: Configure the storage hosts: managed-node-01.example.com tasks: - name: Create a RAID on sdd, sde, sdf, and sdg include_role: name: rhel-system-roles.storage vars: storage_safe_mode: false storage_volumes: - name: data type: raid disks: [sdd, sde, sdf, sdg] raid_level: raid0 raid_chunk_size: 32 KiB mount_point: /mnt/data state: present",
"ansible-playbook --syntax-check playbook.yml",
"ansible-playbook -i inventory.file /path/to/file/playbook.yml",
"- hosts: all vars: storage_safe_mode: false storage_pools: - name: my_pool type: lvm disks: [sdh, sdi] raid_level: raid1 volumes: - name: my_pool size: \"1 GiB\" mount_point: \"/mnt/app/shared\" fs_type: xfs state: present roles: - name: rhel-system-roles.storage",
"ansible-playbook --syntax-check playbook.yml",
"ansible-playbook -i inventory.file /path/to/file/playbook.yml",
"--- - name: Create LVM VDO volume under volume group 'myvg' hosts: all roles: -rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: mylv1 compression: true deduplication: true vdo_pool_size: 10 GiB size: 30 GiB mount_point: /mnt/app/shared",
"- hosts: all vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: label-name mount_point: /mnt/data encryption: true encryption_password: your-password roles: - rhel-system-roles.storage",
"ansible-playbook --syntax-check playbook.yml",
"ansible-playbook -i inventory.file /path/to/file/playbook.yml",
"cryptsetup status sdb /dev/mapper/ sdb is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ sdb [...]",
"cryptsetup luksDump /dev/ sdb Version: 2 Epoch: 6 Metadata area: 16384 [bytes] Keyslots area: 33521664 [bytes] UUID: a4c6be82-7347-4a91-a8ad-9479b72c9426 Label: (no label) Subsystem: (no subsystem) Flags: allow-discards Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 4096 [bytes] [...]",
"cat ~/playbook.yml - hosts: all vars: storage_volumes: - name: foo type: disk disks: - nvme0n1 fs_type: xfs fs_label: label-name mount_point: /mnt/data encryption: true #encryption_password: passwdpasswd encryption_key: /home/passwd_key encryption_cipher: aes-xts-plain64 encryption_key_size: 512 encryption_luks_version: luks2 roles: - rhel-system-roles.storage",
"--- - name: Express volume sizes as a percentage of the pool's total size hosts: all roles - rhel-system-roles.storage vars: storage_pools: - name: myvg disks: - /dev/sdb volumes: - name: data size: 60% mount_point: /opt/mount/data - name: web size: 30% mount_point: /opt/mount/web - name: cache size: 10% mount_point: /opt/cache/mount"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/managing-local-storage-using-rhel-system-roles_automating-system-administration-by-using-rhel-system-roles |
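Before applying any of the playbooks above to production systems, you can combine the syntax check shown earlier with Ansible's check mode to preview changes without making them. The playbook and inventory file names below are examples, and note that check mode does not model every storage operation exactly:
ansible-playbook --syntax-check playbook.yml
ansible-playbook -i inventory.file --check --diff playbook.yml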
6.7 Technical Notes | 6.7 Technical Notes Red Hat Enterprise Linux 6 Detailed notes on the changes implemented in Red Hat Enterprise Linux 6.7 Edition 7 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/index |
Chapter 6. Managing storage devices in the web console | Chapter 6. Managing storage devices in the web console You can use the web console to configure physical and virtual storage devices. This chapter provides instructions for these devices: Mounted NFS Logical Volumes RAID VDO 6.1. Prerequisites The web console has been installed. For details, see Installing the web console . 6.2. Managing NFS mounts in the web console The web console enables you to mount remote directories using the Network File System (NFS) protocol. NFS makes it possible to reach and mount remote directories located on the network and work with the files as if the directory was located on your physical drive. Prerequisites NFS server name or IP address. Path to the directory on the remote server. 6.2.1. Connecting NFS mounts in the web console The following steps aim to help you with connecting a remote directory to your file system using NFS. Prerequisites NFS server name or IP address. Path to the directory on the remote server. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click + in the NFS mounts section. In the New NFS Mount dialog box, enter the server name or IP address of the remote server. In the Path on Server field, enter the path to the directory you want to mount. In the Local Mount Point field, enter the path where you want to find the directory in your local system. Select Mount at boot . This ensures that the directory remains reachable after the local system restarts. Optionally, select Mount read only if you do not want to change the content. Click Add . At this point, you can open the mounted directory and verify that the content is accessible. To troubleshoot the connection, you can adjust it with the Custom Mount Options . 6.2.2. Customizing NFS mount options in the web console The following section provides you with information on how to edit an existing NFS mount and shows you where to add custom mount options. Custom mount options can help you to troubleshoot the connection or change parameters of the NFS mount such as changing timeout limits or configuring authentication. Prerequisites NFS mount added. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click on the NFS mount you want to adjust. If the remote directory is mounted, click Unmount . The directory must not be mounted during the custom mount options configuration. Otherwise, the web console does not save the configuration and this will cause an error. Click Edit . In the NFS Mount dialog box, select Custom mount option . Enter mount options separated by a comma. For example: nfsvers=4 - the NFS protocol version number soft - type of recovery after an NFS request times out sec=krb5 - files on the NFS server can be secured by Kerberos authentication. Both the NFS client and server have to support Kerberos authentication. For a complete list of the NFS mount options, enter man nfs in the command line. Click Apply . Click Mount . Now you can open the mounted directory and verify that the content is accessible. 6.2.3. Related information For more details on NFS, see the Network File System (NFS) . 6.3. Managing Redundant Arrays of Independent Disks in the web console Redundant Arrays of Independent Disks (RAID) represents a way to arrange multiple disks into one storage unit.
RAID protects data stored in the disks against disk failure with the following data distribution strategies: Mirroring - data are copied to two different locations. If one disk fails, you have a copy and your data is not lost. Striping - data are evenly distributed among disks. The level of protection depends on the RAID level. The RHEL web console supports the following RAID levels: RAID 0 (Stripe) RAID 1 (Mirror) RAID 4 (Dedicated parity) RAID 5 (Distributed parity) RAID 6 (Double Distributed Parity) RAID 10 (Stripe of Mirrors) For more details, see RAID Levels and Linear Support . Before you can use disks in RAID, you need to: Create a RAID. Format it with a file system. Mount the RAID to the server. 6.3.1. Prerequisites The web console is running and accessible. For details, see Installing the web console . 6.3.2. Creating RAID in the web console This procedure aims to help you with configuring RAID in the web console. Prerequisites Physical disks connected to the system. Each RAID level requires a different number of disks. Procedure Open the web console. Click Storage . Click the + icon in the RAID Devices box. In the Create RAID Device dialog box, enter a name for the new RAID. In the RAID Level drop-down list, select the RAID level you want to use. For a detailed description of the RAID levels supported on the RHEL 7 system, see RAID Levels and Linear Support . In the Chunk Size drop-down list, leave the predefined value as it is. The Chunk Size value specifies how large each block is for data writing. If the chunk size is 512 KiB, the system writes the first 512 KiB to the first disk, the second 512 KiB is written to the second disk, and the third chunk will be written to the third disk. If you have three disks in your RAID, the fourth 512 KiB will be written to the first disk again. Select the disks you want to use for RAID. Click Create . In the Storage section, you can see the new RAID in the RAID devices box and format it. Now you have the following options for how to format and mount the new RAID in the web console: Formatting RAID Creating partitions on partition table Creating a volume group on top of RAID 6.3.3. Formatting RAID in the web console This section describes the formatting procedure for the new software RAID device created in the RHEL web interface. Prerequisites Physical disks are connected and visible to RHEL 7. RAID is created. Consider the file system which will be used for the RAID. Consider creating a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, choose the RAID you want to format by clicking on it. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click the Format button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the RAID includes any data and you need to rewrite it. In the Type drop-down list, select the XFS file system, if you do not have another strong preference. Enter a name for the file system. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on boot. In the Mount Point field, add the mount path. Select Mount at boot . Click the Format button.
Formatting can take several minutes depending on the formatting options used and the size of the RAID. After it finishes successfully, you can see the details of the formatted RAID on the Filesystem tab. To use the RAID, click Mount . At this point, the system uses the mounted and formatted RAID. 6.3.4. Using the web console for creating a partition table on RAID RAID requires formatting like any other storage device. You have two options: Format the RAID device without partitions Create a partition table with partitions This section describes formatting RAID with a partition table on the new software RAID device created in the RHEL web interface. Prerequisites Physical disks are connected and visible to RHEL 7. RAID is created. Consider the file system used for the RAID. Consider creating a partitioning table. Procedure Open the RHEL web console. Click Storage . In the RAID devices box, select the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click the Create partition table button. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program has to go through the whole RAID. Use this option if the RAID includes any data and you need to rewrite it. In the Partitioning drop-down list, select: Compatible with modern system and hard disks > 2TB (GPT) - GUID Partition Table is a modern recommended partitioning system for large RAIDs with more than four partitions. Compatible with all systems and devices (MBR) - Master Boot Record works with disks up to 2 TB in size. MBR also supports a maximum of four primary partitions. Click Format . At this point, the partitioning table has been created and you can create partitions. For creating partitions, see Using the web console for creating partitions on RAID . 6.3.5. Using the web console for creating partitions on RAID This section describes creating a partition in the existing partition table. Prerequisites Partition table is created. For details, see Using the web console for creating a partition table on RAID . Procedure Open the web console. Click Storage . In the RAID devices box, click the RAID you want to edit. In the RAID details screen, scroll down to the Content part. Click the newly created RAID. Click Create Partition . In the Create partition dialog box, set up the size of the first partition. In the Erase drop-down list, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole RAID with zeros. This option is slower because the program has to go through the whole RAID. Use this option if the RAID includes any data and you need to rewrite it. In the Type drop-down list, select the XFS file system, if you do not have another strong preference. For details about the XFS file system, see The XFS file system . Enter any name for the file system. Do not use spaces in the name. In the Mounting drop down list, select Custom . The Default option does not ensure that the file system will be mounted on boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Create partition . Formatting can take several minutes depending on the formatting options used and the size of the RAID.
After the formatting finishes successfully, you can continue with creating other partitions. At this point, the system uses the mounted and formatted RAID. 6.3.6. Using the web console for creating a volume group on top of RAID This section shows you how to build a volume group from software RAID. Prerequisites A RAID device that is not formatted or mounted. Procedure Open the RHEL web console. Click Storage . Click the + icon in the Volume Groups box. In the Create Volume Group dialog box, enter a name for the new volume group. In the Disks list, select a RAID device. If you do not see the RAID in the list, unmount the RAID from the system. The RAID device must not be in use by the RHEL system. Click Create . The new volume group has been created and you can continue with creating a logical volume. For details, see Creating logical volumes in the web console . 6.4. Using the web console for configuring LVM logical volumes Red Hat Enterprise Linux 7 supports the LVM logical volume manager. When you install Red Hat Enterprise Linux 7, it is installed on LVM volumes that are automatically created during the installation. The screenshot shows a clean installation of the RHEL system with two logical volumes in the web console, automatically created during the installation. To find out more about logical volumes, follow the sections describing: What the logical volume manager is and when to use it. What volume groups are and how to create them. What logical volumes are and how to create them. How to format logical volumes. How to resize logical volumes. 6.4.1. Prerequisites Physical drives, RAID devices, or any other type of block device from which you can create the logical volume. 6.4.2. Logical Volume Manager in the web console The web console provides a graphical interface to create LVM volume groups and logical volumes. Volume groups create a layer between physical and logical volumes. This makes it possible to add or remove physical volumes without influencing the logical volume itself. Volume groups appear as one drive with a capacity consisting of the capacities of all physical drives included in the group. You can join physical drives into volume groups in the web console. Logical volumes act as a single physical drive and are built on top of a volume group in your system. The main advantages of logical volumes are: Better flexibility than the partitioning system used on your physical drive. Ability to connect more physical drives into one volume. Possibility of expanding (growing) or reducing (shrinking) the capacity of the volume online, without a restart. Ability to create snapshots. Additional resources For details, see Logical volume manager administration . 6.4.3. Creating volume groups in the web console The following describes creating volume groups from one or more physical drives or other storage devices. Logical volumes are created from volume groups. Each volume group can include multiple logical volumes. For details, see Volume groups . Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. Procedure Log in to the web console. Click Storage . Click the + icon in the Volume Groups box. In the Name field, enter a name for the group without spaces. Select the drives you want to combine to create the volume group. You might not see devices as you expect. The RHEL web console displays only unused block devices.
Used devices means, for example: Devices formatted with a file system Physical volumes in another volume group Physical volumes being a member of another software RAID device If you do not see the device, format it to be empty and unused. Click Create . The web console adds the volume group in the Volume Groups section. After clicking the group, you can create logical volumes that are allocated from that volume group. 6.4.4. Creating logical volumes in the web console The following steps describe how to create LVM logical volumes. Prerequisites Volume group created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create logical volumes. Click Create new Logical Volume . In the Name field, enter a name for the new logical volume without spaces. In the Purpose drop down menu, select Block device for filesystems . This configuration enables you to create a logical volume with the maximum volume size which is equal to the sum of the capacities of all drives included in the volume group. Define the size of the logical volume. Consider: How much space the system using this logical volume will need. How many logical volumes you want to create. You do not have to use the whole space. If necessary, you can grow the logical volume later. Click Create . To verify the settings, click your logical volume and check the details. At this stage, the logical volume has been created and you need to create and mount a file system with the formatting process. 6.4.5. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, some the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites Logical volume created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. Advantage of this option is speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program have to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . 
The Default option does not ensure that the file system is mounted at boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. After the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use the mounted and formatted logical volume. 6.4.6. Resizing logical volumes in the web console This section describes how to resize logical volumes. You can extend or even reduce logical volumes. Whether you can resize a logical volume depends on which file system you are using. Most file systems enable you to extend (grow) the volume online (without outage). You can also reduce (shrink) the size of a logical volume if the logical volume contains a file system that supports shrinking. This is available, for example, in the ext3/ext4 file systems. Warning You cannot reduce volumes that contain a GFS2 or XFS file system. Prerequisites An existing logical volume containing a file system that supports resizing logical volumes. Procedure The following steps provide the procedure for growing a logical volume without taking the volume offline: Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. On the Volume tab, click Grow . In the Grow Logical Volume dialog box, adjust the volume space. Click Grow . LVM grows the logical volume without a system outage. 6.4.7. Related information For more details on creating logical volumes, see Configuring and managing logical volumes . 6.5. Using the web console for configuring thin logical volumes Thinly-provisioned logical volumes enable you to allocate more space to designated applications or servers than the logical volumes actually contain. For details, see Thinly-provisioned logical volumes (thin volumes) . The following sections describe: Creating pools for the thinly provisioned logical volumes. Creating thin logical volumes. Formatting thin logical volumes. 6.5.1. Prerequisites Physical drives or other types of storage devices from which you want to create volume groups. 6.5.2. Creating pools for thin logical volumes in the web console The following steps show you how to create a pool for thinly provisioned volumes: Prerequisites Volume group created. Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click Create new Logical Volume . In the Name field, enter a name for the new pool of thin volumes without spaces. In the Purpose drop down menu, select Pool for thinly provisioned volumes . This configuration enables you to create the thin volume. Define the size of the pool of thin volumes. Consider: How many thin volumes will you need in this pool? What is the expected size of each thin volume? You do not have to use the whole space. If necessary, you can grow the pool later. Click Create . The pool for thin volumes has been created and you can add thin volumes. 6.5.3. Creating thin logical volumes in the web console The following text describes creating a thin logical volume in the pool. The pool can include multiple thin volumes and each thin volume can be as large as the pool for thin volumes itself. Important Using thin volumes requires a regular check of the actual free physical space of the logical volume. You can check it from the command line as shown in the sketch below.
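A minimal sketch of such a check, assuming a volume group named vg00 that contains a thin pool named pool00 (both names are illustrative):
# lvs -a -o lv_name,lv_size,data_percent,metadata_percent vg00    (show how full the thin pool data and metadata are)
# lvextend -L +10G vg00/pool00                                     (extend the pool if the Data% value approaches 100)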
Prerequisites Pool for thin volumes created. For details, see Creating volume groups in the web console . Procedure Log in to the web console. Click Storage . Click the volume group in which you want to create thin volumes. Click the desired pool. Click Create Thin Volume . In the Create Thin Volume dialog box, enter a name for the thin volume without spaces. Define the size of the thin volume. Click Create . At this stage, the thin logical volume has been created and you need to format it. 6.5.4. Formatting logical volumes in the web console Logical volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting logical volumes will erase all data on the volume. The file system you select determines the configuration parameters you can use for logical volumes. For example, the XFS file system does not support shrinking volumes. For details, see Resizing logical volumes in the web console . The following steps describe the procedure to format logical volumes. Prerequisites Logical volume created. For details, see Creating volume groups in the web console . Procedure Log in to the RHEL web console. Click Storage . Click the volume group in which the logical volume is placed. Click the logical volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data - the RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros - the RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to overwrite it. In the Type drop down menu, select a file system: The XFS file system supports large logical volumes, switching physical drives online without outage, and growing an existing file system. Leave this file system selected if you do not have a different strong preference. XFS does not support reducing the size of a volume formatted with an XFS file system. The ext4 file system supports: Logical volumes Switching physical drives online without outage Growing a file system Shrinking a file system You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system is mounted at boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the volume size and which formatting options are selected. After the formatting has completed successfully, you can see the details of the formatted logical volume on the Filesystem tab. To use the logical volume, click Mount . At this point, the system can use the mounted and formatted logical volume. 6.6. Using the web console for changing physical drives in volume groups The following text describes how to change a drive in a volume group using the web console. The change of physical drives consists of the following procedures: Adding physical drives to volume groups. Removing physical drives from volume groups. 6.6.1. Prerequisites A new physical drive for replacing the old or broken one. The configuration expects that physical drives are organized in a volume group. 6.6.2.
Adding physical drives to volume groups in the web console The web console enables you to add a new physical drive or another type of volume to an existing logical volume. Prerequisites A volume group must be created. A new drive connected to the machine. Procedure Log in to the web console. Click Storage . In the Volume Groups box, click the volume group to which you want to add a physical volume. In the Physical Volumes box, click the + icon. In the Add Disks dialog box, select the preferred drive and click Add . As a result, the web console adds the physical volume. You can see it in the Physical Volumes section, and the logical volume can immediately start to write to the drive. 6.6.3. Removing physical drives from volume groups in the web console If a logical volume includes multiple physical drives, you can remove one of the physical drives online. The system automatically moves all data from the drive to be removed to other drives during the removal process. Note that this can take some time. The web console also verifies whether there is enough space for removing the physical drive. Prerequisites A volume group with more than one physical drive connected. Procedure The following steps describe how to remove a drive from the volume group without causing an outage in the RHEL web console. Log in to the RHEL web console. Click Storage . Click the volume group in which you have the logical volume. In the Physical Volumes section, locate the preferred volume. Click the - icon. The RHEL web console verifies whether the logical volume has enough free space for removing the disk. If not, you cannot remove the disk and it is necessary to add another disk first. For details, see Adding physical drives to volume groups in the web console . As a result, the RHEL web console removes the physical volume from the created logical volume without causing an outage. 6.7. Using the web console for managing Virtual Data Optimizer volumes This chapter describes the Virtual Data Optimizer (VDO) configuration using the web console. After reading it, you will be able to: Create VDO volumes Format VDO volumes Extend VDO volumes 6.7.1. Prerequisites The web console is installed and accessible. For details, see Installing the web console . 6.7.2. VDO volumes in the web console Red Hat Enterprise Linux 7 supports Virtual Data Optimizer (VDO). VDO is a block virtualization technology that combines: Compression For details, see Using Compression . Deduplication For details, see Disabling and Re-enabling deduplication . Thin provisioning For details, see Thinly-provisioned logical volumes (thin volumes) . Using these technologies, VDO: Saves storage space inline Compresses files Eliminates duplications Enables you to allocate more virtual space than the physical or logical storage provides Enables you to extend the virtual storage by growing it VDO can be created on top of many types of storage. In the web console, you can configure VDO on top of: LVM Note It is not possible to configure VDO on top of thinly-provisioned volumes. Physical volume Software RAID For details about the placement of VDO in the storage stack, see System Requirements . Additional resources For details about VDO, see Deduplication and compression with VDO . 6.7.3. Creating VDO volumes in the web console This section helps you to create a VDO volume in the RHEL web console. Prerequisites Physical drives, LVMs, or RAID from which you want to create VDO. Procedure Log in to the web console. For details, see Logging in to the web console .
Click Storage . Click the + icon in the VDO Devices box. In the Name field, enter a name of a VDO volume without spaces. Select the drive that you want to use. In the Logical Size bar, set up the size of the VDO volume. You can extend it more than ten times, but consider for what purpose you are creating the VDO volume: For active VMs or container storage, use logical size that is ten times the physical size of the volume. For object storage, use logical size that is three times the physical size of the volume. For details, see Getting started with VDO . In the Index Memory bar, allocate memory for the VDO volume. For details about VDO system requirements, see System Requirements . Select the Compression option. This option can efficiently reduce various file formats. For details, see Using Compression . Select the Deduplication option. This option reduces the consumption of storage resources by eliminating multiple copies of duplicate blocks. For details, see Disabling and Re-enabling deduplication . [Optional] If you want to use the VDO volume with applications that need a 512 bytes block size, select Use 512 Byte emulation . This reduces the performance of the VDO volume, but should be very rarely needed. If in doubt, leave it off. Click Create . If the process of creating the VDO volume succeeds, you can see the new VDO volume in the Storage section and format it with a file system. 6.7.4. Formatting VDO volumes in the web console VDO volumes act as physical drives. To use them, you need to format them with a file system. Warning Formatting VDO will erase all data on the volume. The following steps describe the procedure to format VDO volumes. Prerequisites A VDO volume is created. For details, see Section 6.7.3, "Creating VDO volumes in the web console" . Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click the VDO volume. Click on the Unrecognized Data tab. Click Format . In the Erase drop down menu, select: Don't overwrite existing data The RHEL web console rewrites only the disk header. The advantage of this option is the speed of formatting. Overwrite existing data with zeros The RHEL web console rewrites the whole disk with zeros. This option is slower because the program has to go through the whole disk. Use this option if the disk includes any data and you need to rewrite them. In the Type drop down menu, select a filesystem: The XFS file system supports large logical volumes, switching physical drives online without outage, and growing. Leave this file system selected if you do not have a different strong preference. XFS does not support shrinking volumes. Therefore, you will not be able to reduce volume formatted with XFS. The ext4 file system supports logical volumes, switching physical drives online without outage, growing, and shrinking. You can also select a version with the LUKS (Linux Unified Key Setup) encryption, which allows you to encrypt the volume with a passphrase. In the Name field, enter the logical volume name. In the Mounting drop down menu, select Custom . The Default option does not ensure that the file system will be mounted on the boot. In the Mount Point field, add the mount path. Select Mount at boot . Click Format . Formatting can take several minutes depending on the used formatting options and the volume size. After a successful finish, you can see the details of the formatted VDO volume on the Filesystem tab. To use the VDO volume, click Mount . 
At this point, the system uses the mounted and formatted VDO volume. 6.7.5. Extending VDO volumes in the web console This section describes extending VDO volumes in the web console. Prerequisites The VDO volume created. Procedure Log in to the web console. For details, see Logging in to the web console . Click Storage . Click your VDO volume in the VDO Devices box. In the VDO volume details, click the Grow button. In the Grow logical size of VDO dialog box, extend the logical size of the VDO volume. Original size of the logical volume from the screenshot was 6 GB. As you can see, the RHEL web console enables you to grow the volume to more than ten times the size and it works correctly because of the compression and deduplication. Click Grow . If the process of growing VDO succeeds, you can see the new size in the VDO volume details. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/managing_systems_using_the_rhel_7_web_console/managing-storage-devices-in-the-web-console_system-management-using-the-RHEL-7-web-console |
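For readers who also work from a shell, the web console operations in the VDO sections above have rough command-line counterparts in the vdo and vdostats utilities shipped with RHEL 7. The following is a minimal sketch only; the backing device /dev/sdd, the volume name vdo1, and the sizes are illustrative assumptions and do not come from the procedures above.
# vdo create --name=vdo1 --device=/dev/sdd --vdoLogicalSize=100G    (create a VDO volume with a 100 GB logical size)
# vdostats --human-readable                                         (check physical usage and space savings)
# vdo growLogical --name=vdo1 --vdoLogicalSize=200G                  (grow the logical size, similar to the Grow button in the web console)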
Chapter 5. AlertingRule [monitoring.openshift.io/v1] | Chapter 5. AlertingRule [monitoring.openshift.io/v1] Description AlertingRule represents a set of user-defined Prometheus rule groups containing alerting rules. This resource is the supported method for cluster admins to create alerts based on metrics recorded by the platform monitoring stack in OpenShift, i.e. the Prometheus instance deployed to the openshift-monitoring namespace. You might use this to create custom alerting rules not shipped with OpenShift based on metrics from components such as the node_exporter, which provides machine-level metrics such as CPU usage, or kube-state-metrics, which provides metrics on Kubernetes usage. The API is mostly compatible with the upstream PrometheusRule type from the prometheus-operator. The primary difference being that recording rules are not allowed here - only alerting rules. For each AlertingRule resource created, a corresponding PrometheusRule will be created in the openshift-monitoring namespace. OpenShift requires admins to use the AlertingRule resource rather than the upstream type in order to allow better OpenShift specific defaulting and validation, while not modifying the upstream APIs directly. You can find upstream API documentation for PrometheusRule resources here: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec describes the desired state of this AlertingRule object. status object status describes the current state of this AlertOverrides object. 5.1.1. .spec Description spec describes the desired state of this AlertingRule object. Type object Required groups Property Type Description groups array groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. 
groups[] object RuleGroup is a list of sequentially evaluated alerting rules. 5.1.2. .spec.groups Description groups is a list of grouped alerting rules. Rule groups are the unit at which Prometheus parallelizes rule processing. All rules in a single group share a configured evaluation interval. All rules in the group will be processed together on this interval, sequentially, and all rules will be processed. It's common to group related alerting rules into a single AlertingRule resources, and within that resource, closely related alerts, or simply alerts with the same interval, into individual groups. You are also free to create AlertingRule resources with only a single rule group, but be aware that this can have a performance impact on Prometheus if the group is extremely large or has very complex query expressions to evaluate. Spreading very complex rules across multiple groups to allow them to be processed in parallel is also a common use-case. Type array 5.1.3. .spec.groups[] Description RuleGroup is a list of sequentially evaluated alerting rules. Type object Required name rules Property Type Description interval string interval is how often rules in the group are evaluated. If not specified, it defaults to the global.evaluation_interval configured in Prometheus, which itself defaults to 30 seconds. You can check if this value has been modified from the default on your cluster by inspecting the platform Prometheus configuration: The relevant field in that resource is: spec.evaluationInterval name string name is the name of the group. rules array rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. rules[] object Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules 5.1.4. .spec.groups[].rules Description rules is a list of sequentially evaluated alerting rules. Prometheus may process rule groups in parallel, but rules within a single group are always processed sequentially, and all rules are processed. Type array 5.1.5. .spec.groups[].rules[] Description Rule describes an alerting rule. See Prometheus documentation: - https://www.prometheus.io/docs/prometheus/latest/configuration/alerting_rules Type object Required alert expr Property Type Description alert string alert is the name of the alert. Must be a valid label value, i.e. may contain any Unicode character. annotations object (string) annotations to add to each alert. These are values that can be used to store longer additional information that you won't query on, such as alert descriptions or runbook links. expr integer-or-string expr is the PromQL expression to evaluate. Every evaluation cycle this is evaluated at the current time, and all resultant time series become pending or firing alerts. This is most often a string representing a PromQL expression, e.g.: mapi_current_pending_csr > mapi_max_pending_csr In rare cases this could be a simple integer, e.g. a simple "1" if the intent is to create an alert that is always firing. This is sometimes used to create an always-firing "Watchdog" alert in order to ensure the alerting pipeline is functional. for string for is the time period after which alerts are considered firing after first returning results. Alerts which have not yet fired for long enough are considered pending. labels object (string) labels to add or overwrite for each alert. 
The results of the PromQL expression for the alert will result in an existing set of labels for the alert, after evaluating the expression, for any label specified here with the same name as a label in that set, the label here wins and overwrites the value. These should typically be short identifying values that may be useful to query against. A common example is the alert severity, where one sets severity: warning under the labels key: 5.1.6. .status Description status describes the current state of this AlertOverrides object. Type object Property Type Description observedGeneration integer observedGeneration is the last generation change you've dealt with. prometheusRule object prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. 5.1.7. .status.prometheusRule Description prometheusRule is the generated PrometheusRule for this AlertingRule. Each AlertingRule instance results in a generated PrometheusRule object in the same namespace, which is always the openshift-monitoring namespace. Type object Required name Property Type Description name string name of the referenced PrometheusRule. 5.2. API endpoints The following API endpoints are available: /apis/monitoring.openshift.io/v1/alertingrules GET : list objects of kind AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules DELETE : delete collection of AlertingRule GET : list objects of kind AlertingRule POST : create an AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} DELETE : delete an AlertingRule GET : read the specified AlertingRule PATCH : partially update the specified AlertingRule PUT : replace the specified AlertingRule /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status GET : read status of the specified AlertingRule PATCH : partially update status of the specified AlertingRule PUT : replace status of the specified AlertingRule 5.2.1. /apis/monitoring.openshift.io/v1/alertingrules HTTP method GET Description list objects of kind AlertingRule Table 5.1. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty 5.2.2. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules HTTP method DELETE Description delete collection of AlertingRule Table 5.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind AlertingRule Table 5.3. HTTP responses HTTP code Reponse body 200 - OK AlertingRuleList schema 401 - Unauthorized Empty HTTP method POST Description create an AlertingRule Table 5.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.5. Body parameters Parameter Type Description body AlertingRule schema Table 5.6. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 202 - Accepted AlertingRule schema 401 - Unauthorized Empty 5.2.3. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name} Table 5.7. Global path parameters Parameter Type Description name string name of the AlertingRule HTTP method DELETE Description delete an AlertingRule Table 5.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified AlertingRule Table 5.10. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified AlertingRule Table 5.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.12. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified AlertingRule Table 5.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.14. Body parameters Parameter Type Description body AlertingRule schema Table 5.15. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty 5.2.4. /apis/monitoring.openshift.io/v1/namespaces/{namespace}/alertingrules/{name}/status Table 5.16. Global path parameters Parameter Type Description name string name of the AlertingRule HTTP method GET Description read status of the specified AlertingRule Table 5.17. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified AlertingRule Table 5.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.19. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified AlertingRule Table 5.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.21. Body parameters Parameter Type Description body AlertingRule schema Table 5.22. HTTP responses HTTP code Reponse body 200 - OK AlertingRule schema 201 - Created AlertingRule schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/monitoring_apis/alertingrule-monitoring-openshift-io-v1 |
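A concrete resource can make the schema above easier to follow. The following is a minimal sketch of an AlertingRule applied with the oc client; the rule name, expression, threshold, and duration are illustrative placeholders, and the metric used in expr must actually exist on your cluster before the alert can fire.
# cat <<'EOF' | oc apply -f -
apiVersion: monitoring.openshift.io/v1
kind: AlertingRule
metadata:
  name: example-cpu-alerts
  namespace: openshift-monitoring
spec:
  groups:
  - name: cpu
    interval: 30s
    rules:
    - alert: ExampleHighNodeCPU
      expr: instance:node_cpu_utilisation:rate1m > 0.9
      for: 15m
      labels:
        severity: warning
      annotations:
        description: Node CPU utilisation has stayed above 90% for 15 minutes.
EOF
The generated PrometheusRule described in the status section can then be inspected, for example with oc get prometheusrules -n openshift-monitoring.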
Appendix A. Consistent Network Device Naming | Appendix A. Consistent Network Device Naming Red Hat Enterprise Linux 6 provides consistent network device naming for network interfaces. This feature changes the name of network interfaces on a system in order to make locating and differentiating the interfaces easier. Traditionally, network interfaces in Linux are enumerated as eth[0123...] , but these names do not necessarily correspond to actual labels on the chassis. Modern server platforms with multiple network adapters can encounter non-deterministic and counter-intuitive naming of these interfaces. This affects both network adapters embedded on the motherboard ( Lan-on-Motherboard , or LOM ) and add-in (single and multiport) adapters. The new naming convention assigns names to network interfaces based on their physical location, whether embedded or in PCI slots. By converting to this naming convention, system administrators will no longer have to guess at the physical location of a network port, or modify each system to rename them into some consistent order. This feature, implemented via the biosdevname program, will change the name of all embedded network interfaces, PCI card network interfaces, and virtual function network interfaces from the existing eth[0123...] to the new naming convention as shown in Table A.1, "The new naming convention" . Table A.1. The new naming convention Device Old Name New Name Embedded network interface (LOM) eth[0123...] em[1234...] [a] PCI card network interface eth[0123...] p< slot >p< ethernet port > [b] Virtual function eth[0123...] p< slot >p< ethernet port >_< virtual interface > [c] [a] New enumeration starts at 1 . [b] For example: p3p4 [c] For example: p3p4_1 System administrators may continue to write rules in /etc/udev/rules.d/70-persistent-net.rules to change the device names to anything desired; those will take precedence over this physical location naming convention. A.1. Affected Systems Consistent network device naming is enabled by default for a set of Dell PowerEdge , C Series , and Precision Workstation systems. For more details regarding the impact on Dell systems, visit https://access.redhat.com/kb/docs/DOC-47318 . For all other systems, it will be disabled by default; see Section A.2, "System Requirements" and Section A.3, "Enabling and Disabling the Feature" for more details. Regardless of the type of system, Red Hat Enterprise Linux 6 guests running under Red Hat Enterprise Linux 5 hosts will not have devices renamed, since the virtual machine BIOS does not provide SMBIOS information. Upgrades from Red Hat Enterprise Linux 6.0 to Red Hat Enterprise Linux 6.1 are unaffected, and the old eth[0123...] naming convention will continue to be used. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/appe-consistent_network_device_naming |
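As a hedged illustration of the appendix above, the name that the naming policy would assign to an existing interface can be checked with the biosdevname utility; the interface name eth0 is a placeholder.
# biosdevname -i eth0      (print the suggested name, for example em1 or p3p4)
# rpm -q biosdevname        (confirm the biosdevname package is installed)
To opt out, the feature can be disabled as described in Section A.3, for example by booting with the biosdevname=0 kernel parameter, or names can be pinned manually in /etc/udev/rules.d/70-persistent-net.rules as noted above.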
24.4. Certificate Profiles | 24.4. Certificate Profiles A certificate profile defines the content of certificates belonging to the particular profile, as well as constraints for issuing the certificates, enrollment method, and input and output forms for enrollment. A single certificate profile is associated with issuing a particular type of certificate. Different certificate profiles can be defined for users, services, and hosts in IdM. The CA uses certificate profiles in signing of certificates to determine: whether the CA can accept a certificate signing request (CSR) what features and extensions should be present on the certificate IdM includes the following two certificate profiles by default: caIPAserviceCert and IECUserRoles . In addition, custom profiles can be imported. Custom certificate profiles allow issuing certificates for specific, unrelated purposes. For example, it is possible to restrict use of a particular profile to only one user or one group, preventing other users and groups from using that profile to issue a certificate for authentication. For details on supported certificate profile configuration, see Defaults Reference and Constraints Reference in the Red Hat Certificate System Administration Guide . Note By combining certificate profiles and CA ACLs, Section 24.5, "Certificate Authority ACL Rules" , the administrator can define and control access to custom certificate profiles. For a description of using profiles and CA ACLs to issue user certificates, see Section 24.6, "Using Certificate Profiles and ACLs to Issue User Certificates with the IdM CAs" . 24.4.1. Creating a Certificate Profile For details about creating a certificate profile, see the following documentation in the Red Hat Certificate System 9 Administration Guide : The Setting up Certificate Profiles section explains how to create new certificate profiles and how they are constructed. The Defaults, Constraints, and Extensions for Certificates and CRLs appendix lists the object identifiers (OID) other fields you can use in a certificate profile. 24.4.2. Certificate Profile Management from the Command Line The certprofile plug-in for management of IdM profiles allows privileged users to import, modify, or remove IdM certificate profiles. To display all commands supported by the plug-in, run the ipa certprofile command: Note that to perform the certprofile operations, you must be operating as a user who has the required permissions. IdM includes the following certificate profile-related permissions by default: System: Read Certificate Profiles Enables users to read all profile attributes. System: Import Certificate Profile Enables users to import a certificate profile into IdM. System: Delete Certificate Profile Enables users to delete an existing certificate profile. System: Modify Certificate Profile Enables users to modify the profile attributes and to disable or enable the profile. All these permissions are included in the default CA Administrator privilege. For more information on IdM role-based access controls and managing permissions, see Section 10.4, "Defining Role-Based Access Controls" . Note When requesting a certificate, the --profile-id option can be added to the ipa cert-request command to specify which profile to use. If no profile ID is specified, the default caIPAserviceCert profile is used for the certificate. This section only describes the most important aspects of using the ipa certprofile commands for profile management. 
For complete information about a command, run it with the --help option added, for example: Importing Certificate Profiles To import a new certificate profile to IdM, use the ipa certprofile-import command. Running the command without any options starts an interactive session in which the certprofile-import script prompts your for the information required to import the certificate. The ipa certprofile-import command accepts several command-line options. Most notably: --file This option passes the file containing the profile configuration directly to ipa certprofile-import . For example: --store This option sets the Store issued certificates attribute. It accepts two values: True , which delivers the issued certificates to the client and stores them in the target IdM principal's userCertificate attribute. False , which delivers the issued certificates to the client, but does not store them in IdM. This option is most commonly-used when issuing multiple short-term certificates is required. Import fails if the profile ID specified with ipa certprofile-import is already in use or if the profile content is incorrect. For example, the import fails if a required attribute is missing or if the profile ID value defined in the supplied file does not match the profile ID specified with ipa certprofile-import . To obtain a template for a new profile, you can run the ipa certprofile-show command with the --out option, which exports a specified existing profile to a file. For example: You can then edit the exported file as required and import it as a new profile. Displaying Certificate Profiles To display all certificate profiles currently stored in IdM, use the ipa certprofile-find command: To display information about a particular profile, use the ipa certprofile-show command: Modifying Certificate Profiles To modify an existing certificate profile, use the ipa certprofile-mod command. Pass the required modifications with the command using the command-line options accepted by ipa certprofile-mod . For example, to modify the description of a profile and change whether IdM stores the issued certificates: To update the certificate profile configuration, import the file containing the updated configuration using the --file option. For example: Deleting Certificate Profiles To remove an existing certificate profile from IdM, use the ipa certprofile-del command: 24.4.3. Certificate Profile Management from the Web UI To manage certificate profiles from the IdM web UI: Open the Authentication tab and the Certificates subtab. Open the Certificate Profiles section. Figure 24.7. Certificate Profile Management in the Web UI In the Certificate Profiles section, you can display information about existing profiles, modify their attributes, or delete selected profiles. For example, to modify an existing certificate profile: Click on the name of the profile to open the profile configuration page. In the profile configuration page, fill in the required information. Click Save to confirm the new configuration. Figure 24.8. Modifying a Certificate Profile in the Web UI If you enable the Store issued certificates option, the issued certificates are delivered to the client as well as stored in the target IdM principal's userCertificate attribute. If the option is disabled, the issued certificates are delivered to the client, but not stored in IdM. Storing certificates is often disabled when issuing multiple short-lived certificates is required. 
Note that some certificate profile management operations are currently unavailable in the web UI: It is not possible to import a certificate profile in the web UI. To import a certificate, use the ipa certprofile-import command. It is not possible to set, add, or delete attribute and value pairs. To modify the attribute and value pairs, use the ipa certprofile-mod command. It is not possible to import updated certificate profile configuration. To import a file containing updated profile configuration, use the ipa certprofile-mod --file= file_name command. For more information about the commands used to manage certificate profiles, see Section 24.4.2, "Certificate Profile Management from the Command Line" . 24.4.4. Upgrading IdM Servers with Certificate Profiles When upgrading an IdM server, the profiles included in the server are all imported and enabled. If you upgrade multiple server replicas, the profiles of the first upgraded replica are imported. On the other replicas, IdM detects the presence of other profiles and does not import them or resolve any conflicts between the two sets of profiles. If you have custom profiles defined on replicas, make sure the profiles on all replicas are consistent before upgrading. | [
"ipa certprofile Manage Certificate Profiles EXAMPLES: Import a profile that will not store issued certificates: ipa certprofile-import ShortLivedUserCert --file UserCert.profile --desc \"User Certificates\" --store=false Delete a certificate profile: ipa certprofile-del ShortLivedUserCert",
"ipa certprofile-mod --help Usage: ipa [global-options] certprofile-mod ID [options] Modify Certificate Profile configuration. Options: -h, --help show this help message and exit --desc=STR Brief description of this profile --store=BOOL Whether to store certs issued using this profile",
"ipa certprofile-import Profile ID: smime Profile description: S/MIME certificates Store issued certificates [True]: TRUE Filename of a raw profile. The XML format is not supported.: smime.cfg ------------------------ Imported profile \"smime\" ------------------------ Profile ID: smime Profile description: S/MIME certificates Store issued certificates: TRUE",
"ipa certprofile-import --file= smime.cfg",
"ipa certprofile-show caIPAserviceCert --out= file_name",
"ipa certprofile-find ------------------ 3 profiles matched ------------------ Profile ID: caIPAserviceCert Profile description: Standard profile for network services Store issued certificates: TRUE Profile ID: IECUserRoles",
"ipa certprofile-show profile_ID Profile ID: profile_ID Profile description: S/MIME certificates Store issued certificates: TRUE",
"ipa certprofile-mod profile_ID --desc=\"New description\" --store=False ------------------------------------ Modified Certificate Profile \"profile_ID\" ------------------------------------ Profile ID: profile_ID Profile description: New description Store issued certificates: FALSE",
"ipa certprofile-mod profile_ID --file= new_configuration.cfg",
"ipa certprofile-del profile_ID ----------------------- Deleted profile \"profile_ID\" -----------------------"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/certificate-profiles |
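Building on the stored command examples above, a certificate can be requested against a specific profile with the --profile-id option mentioned earlier; the CSR file name, principal, and profile ID in this sketch are placeholders.
# ipa cert-request smime-user.csr --principal=user --profile-id=smime
If --profile-id is omitted, the default caIPAserviceCert profile is used, as noted above.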
17.14. Applying Network Filtering | 17.14. Applying Network Filtering This section provides an introduction to libvirt's network filters, their goals, concepts and XML format. 17.14.1. Introduction The goal of the network filtering, is to enable administrators of a virtualized system to configure and enforce network traffic filtering rules on virtual machines and manage the parameters of network traffic that virtual machines are allowed to send or receive. The network traffic filtering rules are applied on the host physical machine when a virtual machine is started. Since the filtering rules cannot be circumvented from within the virtual machine, it makes them mandatory from the point of view of a virtual machine user. From the point of view of the guest virtual machine, the network filtering system allows each virtual machine's network traffic filtering rules to be configured individually on a per interface basis. These rules are applied on the host physical machine when the virtual machine is started and can be modified while the virtual machine is running. The latter can be achieved by modifying the XML description of a network filter. Multiple virtual machines can make use of the same generic network filter. When such a filter is modified, the network traffic filtering rules of all running virtual machines that reference this filter are updated. The machines that are not running will update on start. As previously mentioned, applying network traffic filtering rules can be done on individual network interfaces that are configured for certain types of network configurations. Supported network types include: network ethernet -- must be used in bridging mode bridge Example 17.1. An example of network filtering The interface XML is used to reference a top-level filter. In the following example, the interface description references the filter clean-traffic. Network filters are written in XML and may either contain: references to other filters, rules for traffic filtering, or hold a combination of both. The above referenced filter clean-traffic is a filter that only contains references to other filters and no actual filtering rules. Since references to other filters can be used, a tree of filters can be built. The clean-traffic filter can be viewed using the command: # virsh nwfilter-dumpxml clean-traffic . As previously mentioned, a single network filter can be referenced by multiple virtual machines. Since interfaces will typically have individual parameters associated with their respective traffic filtering rules, the rules described in a filter's XML can be generalized using variables. In this case, the variable name is used in the filter XML and the name and value are provided at the place where the filter is referenced. Example 17.2. Description extended In the following example, the interface description has been extended with the parameter IP and a dotted IP address as a value. In this particular example, the clean-traffic network traffic filter will be represented with the IP address parameter 10.0.0.1 and as per the rule dictates that all traffic from this interface will always be using 10.0.0.1 as the source IP address, which is one of the purpose of this particular filter. 17.14.2. Filtering Chains Filtering rules are organized in filter chains. These chains can be thought of as having a tree structure with packet filtering rules as entries in individual chains (branches). 
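The interface XML referred to in Example 17.1 and Example 17.2 is not reproduced above, so the following is a minimal sketch of what such a definition typically looks like; the bridge name br0 is an illustrative assumption, while the filter name clean-traffic and the address 10.0.0.1 come from the examples. The fragment is placed in the guest definition, for example with virsh edit guest-name:
<interface type='bridge'>
  <source bridge='br0'/>
  <filterref filter='clean-traffic'>
    <parameter name='IP' value='10.0.0.1'/>
  </filterref>
</interface>
Omitting the <parameter> element gives the plain reference of Example 17.1; adding it gives the parameterized reference of Example 17.2.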
Packets start their filter evaluation in the root chain and can then continue their evaluation in other chains, return from those chains back into the root chain or be dropped or accepted by a filtering rule in one of the traversed chains. Libvirt's network filtering system automatically creates individual root chains for every virtual machine's network interface on which the user chooses to activate traffic filtering. The user may write filtering rules that are either directly instantiated in the root chain or may create protocol-specific filtering chains for efficient evaluation of protocol-specific rules. The following chains exist: root mac stp (spanning tree protocol) vlan arp and rarp ipv4 ipv6 Multiple chains evaluating the mac, stp, vlan, arp, rarp, ipv4, or ipv6 protocol can be created using the protocol name only as a prefix in the chain's name. Example 17.3. ARP traffic filtering This example allows chains with names arp-xyz or arp-test to be specified and have their ARP protocol packets evaluated in those chains. The following filter XML shows an example of filtering ARP traffic in the arp chain. The consequence of putting ARP-specific rules in the arp chain, rather than for example in the root chain, is that packets of protocols other than ARP do not need to be evaluated by ARP protocol-specific rules. This improves the efficiency of the traffic filtering. However, one must then pay attention to only putting filtering rules for the given protocol into the chain, since rules for other protocols will not be evaluated there. For example, an IPv4 rule will not be evaluated in the ARP chain since IPv4 protocol packets will not traverse the ARP chain. 17.14.3. Filtering Chain Priorities As previously mentioned, when creating a filtering rule, all chains are connected to the root chain. The order in which those chains are accessed is influenced by the priority of the chain. The following table shows the chains that can be assigned a priority and their default priorities. Table 17.1. Filtering chain default priorities values Chain (prefix) Default priority stp -810 mac -800 vlan -750 ipv4 -700 ipv6 -600 arp -500 rarp -400 Note A chain with a lower priority value is accessed before one with a higher value. The chains listed in Table 17.1, "Filtering chain default priorities values" can also be assigned custom priorities by writing a value in the range [-1000 to 1000] into the priority (XML) attribute in the filter node. The filter shown in Section 17.14.2, "Filtering Chains" uses the default priority of -500 for arp chains, for example. 17.14.4. Usage of Variables in Filters There are two variables that have been reserved for usage by the network traffic filtering subsystem: MAC and IP. MAC is designated for the MAC address of the network interface. A filtering rule that references this variable will automatically be replaced with the MAC address of the interface. This works without the user having to explicitly provide the MAC parameter. Even though it is possible to specify the MAC parameter similarly to the IP parameter above, it is discouraged since libvirt knows what MAC address an interface will be using. The parameter IP represents the IP address that the operating system inside the virtual machine is expected to use on the given interface. The IP parameter is special in so far as the libvirt daemon will try to determine the IP address (and thus the IP parameter's value) that is being used on an interface if the parameter is not explicitly provided but referenced.
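As a minimal illustration of how these variables are used in practice (a sketch that assumes a guest virtual machine named test whose interface already references the clean-traffic filter, as in Example 17.1), you can confirm which filter an interface references and inspect a pre-existing filter that refers to the IP variable with the following commands: # virsh dumpxml test | grep -A 2 filterref # virsh nwfilter-dumpxml no-ip-spoofing The first command shows the <filterref> element of the guest virtual machine's interface definition; the second displays the no-ip-spoofing filter, whose rule references the IP variable that libvirt fills in automatically.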
For current limitations on IP address detection, consult Section 17.14.12, "Limitations" for information on how to use this feature and what to expect when using it. The XML file shown in Section 17.14.2, "Filtering Chains" contains the filter no-arp-spoofing , which is an example of using a network filter XML to reference the MAC and IP variables. Note that referenced variables are always prefixed with the character $ . The format of the value of a variable must be of the type expected by the filter attribute identified in the XML. In the above example, the IP parameter must hold a legal IP address in standard format. Failure to provide the correct structure will result in the filter variable not being replaced with a value and will prevent a virtual machine from starting or will prevent an interface from attaching when hot plugging is being used. Some of the types that are expected for each XML attribute are shown in Example 17.4, "Sample variable types" . Example 17.4. Sample variable types As variables can contain lists of elements (the variable IP can contain multiple IP addresses that are valid on a particular interface, for example), the notation for providing multiple elements for the IP variable is: This XML file creates filters to enable multiple IP addresses per interface. Each of the IP addresses will result in a separate filtering rule. Therefore, using the XML above and the following rule, three individual filtering rules (one for each IP address) will be created: It is also possible to access individual elements of a variable holding a list of elements. A filtering rule like the following accesses the 2nd element of the variable DSTPORTS . Example 17.5. Using a variety of variables It is also possible to create filtering rules that represent all of the permissible combinations of elements from different lists using the notation $VARIABLE[@<iterator id="x">] . The following rule allows a virtual machine to receive traffic on a set of ports, which are specified in DSTPORTS , from the set of source IP addresses specified in SRCIPADDRESSES . The rule generates all combinations of elements of the variable DSTPORTS with those of SRCIPADDRESSES by using two independent iterators to access their elements. Assign concrete values to SRCIPADDRESSES and DSTPORTS as shown: Assigning values to the variables using $SRCIPADDRESSES[@1] and $DSTPORTS[@2] would then result in all variants of addresses and ports being created as shown: 10.0.0.1, 80 10.0.0.1, 8080 11.1.2.3, 80 11.1.2.3, 8080 Accessing the same variables using a single iterator, for example by using the notation $SRCIPADDRESSES[@1] and $DSTPORTS[@1] , would result in parallel access to both lists and result in the following combination: 10.0.0.1, 80 11.1.2.3, 8080 Note $VARIABLE is short-hand for $VARIABLE[@0] . The former notation always assumes the role of iterator with iterator id="0" added as shown in the opening paragraph at the top of this section. 17.14.5. Automatic IP Address Detection and DHCP Snooping This section provides information about automatic IP address detection and DHCP snooping. 17.14.5.1. Introduction The detection of IP addresses used on a virtual machine's interface is automatically activated if the variable IP is referenced but no value has been assigned to it. The variable CTRL_IP_LEARNING can be used to specify the IP address learning method to use. Valid values include: any , dhcp , or none .
The value any instructs libvirt to use any packet to determine the address in use by a virtual machine, which is the default setting if the variable CTRL_IP_LEARNING is not set. This method will only detect a single IP address per interface. Once a guest virtual machine's IP address has been detected, its IP network traffic will be locked to that address, if, for example, IP address spoofing is prevented by one of its filters. In that case, the user of the VM will not be able to change the IP address on the interface inside the guest virtual machine, which would be considered IP address spoofing. When a guest virtual machine is migrated to another host physical machine or resumed after a suspend operation, the first packet sent by the guest virtual machine will again determine the IP address that the guest virtual machine can use on a particular interface. The value of dhcp instructs libvirt to only honor DHCP server-assigned addresses with valid leases. This method supports the detection and usage of multiple IP addresses per interface. When a guest virtual machine resumes after a suspend operation, any valid IP address leases are applied to its filters. Otherwise the guest virtual machine is expected to use DHCP to obtain a new IP address. When a guest virtual machine migrates to another host physical machine, the guest virtual machine is required to re-run the DHCP protocol. If CTRL_IP_LEARNING is set to none , libvirt does not do IP address learning, and referencing IP without assigning it an explicit value is an error. 17.14.5.2. DHCP Snooping CTRL_IP_LEARNING= dhcp (DHCP snooping) provides additional anti-spoofing security, especially when combined with a filter allowing only trusted DHCP servers to assign IP addresses. To enable this, set the variable DHCPSERVER to the IP address of a valid DHCP server and provide filters that use this variable to filter incoming DHCP responses. When DHCP snooping is enabled and the DHCP lease expires, the guest virtual machine will no longer be able to use the IP address until it acquires a new, valid lease from a DHCP server. If the guest virtual machine is migrated, it must get a new valid DHCP lease to use an IP address (for example by bringing the VM interface down and up again). Note Automatic DHCP detection listens to the DHCP traffic the guest virtual machine exchanges with the DHCP server of the infrastructure. To avoid denial-of-service attacks on libvirt, the evaluation of those packets is rate-limited, meaning that a guest virtual machine sending an excessive number of DHCP packets per second on an interface will not have all of those packets evaluated and thus filters may not get adapted. Normal DHCP client behavior is assumed to send a low number of DHCP packets per second. Further, it is important to set up appropriate filters on all guest virtual machines in the infrastructure to avoid them being able to send DHCP packets. Therefore, guest virtual machines must either be prevented from sending UDP and TCP traffic from port 67 to port 68 or the DHCPSERVER variable should be used on all guest virtual machines to restrict DHCP server messages to only be allowed to originate from trusted DHCP servers. At the same time anti-spoofing prevention must be enabled on all guest virtual machines in the subnet. Example 17.6. Activating IPs for DHCP snooping The following XML provides an example for the activation of IP address learning using the DHCP snooping method: 17.14.6.
Reserved Variables Table 17.2, "Reserved variables" shows the variables that are considered reserved and are used by libvirt: Table 17.2. Reserved variables Variable Name Definition MAC The MAC address of the interface IP The list of IP addresses in use by an interface IPV6 Not currently implemented: the list of IPV6 addresses in use by an interface DHCPSERVER The list of IP addresses of trusted DHCP servers DHCPSERVERV6 Not currently implemented: The list of IPv6 addresses of trusted DHCP servers CTRL_IP_LEARNING The choice of the IP address detection mode 17.14.7. Element and Attribute Overview The root element required for all network filters is named <filter> with two possible attributes. The name attribute provides a unique name for the given filter. The chain attribute is optional but allows certain filters to be better organized for more efficient processing by the firewall subsystem of the underlying host physical machine. Currently, the system only supports the following chains: root , ipv4 , ipv6 , arp and rarp . 17.14.8. References to Other Filters Any filter may hold references to other filters. Individual filters may be referenced multiple times in a filter tree but references between filters must not introduce loops. Example 17.7. An Example of a clean traffic filter The following shows the XML of the clean-traffic network filter referencing several other filters. To reference another filter, the XML node <filterref> needs to be provided inside a filter node. This node must have the attribute filter whose value contains the name of the filter to be referenced. New network filters can be defined at any time and may contain references to network filters that are not yet known to libvirt. However, once a virtual machine is started or a network interface referencing a filter is to be hot-plugged, all network filters in the filter tree must be available. Otherwise the virtual machine will not start or the network interface cannot be attached. 17.14.9. Filter Rules The following XML shows a simple example of a network traffic filter implementing a rule to drop traffic if the IP address (provided through the value of the variable IP) in an outgoing IP packet is not the expected one, thus preventing IP address spoofing by the VM. Example 17.8. Example of network traffic filtering The traffic filtering rule starts with the rule node. This node may contain up to three of the following attributes: action is mandatory and can have the following values: drop (matching the rule silently discards the packet with no further analysis) reject (matching the rule generates an ICMP reject message with no further analysis) accept (matching the rule accepts the packet with no further analysis) return (matching the rule passes this filter, but returns control to the calling filter for further analysis) continue (matching the rule goes on to the next rule for further analysis) direction is mandatory and can have the following values: in for incoming traffic out for outgoing traffic inout for incoming and outgoing traffic priority is optional. The priority of the rule controls the order in which the rule will be instantiated relative to other rules. Rules with lower values will be instantiated before rules with higher values. Valid values are in the range of -1000 to 1000. If this attribute is not provided, priority 500 will be assigned by default. Note that filtering rules in the root chain are sorted with filters connected to the root chain following their priorities.
This makes it possible to interleave filtering rules with access to filter chains. See Section 17.14.3, "Filtering Chain Priorities" for more information. statematch is optional. Possible values are '0' or 'false' to turn the underlying connection state matching off. The default setting is 'true' or 1. For more information, see Section 17.14.11, "Advanced Filter Configuration Topics" . The above example, Example 17.8, "Example of network traffic filtering" , indicates that the traffic of type ip will be associated with the chain ipv4 and the rule will have priority= 500 . If for example another filter is referenced whose traffic of type ip is also associated with the chain ipv4 then that filter's rules will be ordered relative to the priority= 500 of the shown rule. A rule may contain a single nested node for filtering a particular type of traffic. The above example shows that traffic of type ip is to be filtered. 17.14.10. Supported Protocols The following sections list and give some details about the protocols that are supported by the network filtering subsystem. This type of traffic rule is provided in the rule node as a nested node. Depending on the traffic type a rule is filtering, the attributes are different. The above example showed the single attribute srcipaddr that is valid inside the ip traffic filtering node. The following sections show what attributes are valid and what type of data they are expecting. The following datatypes are available: UINT8 : 8 bit integer; range 0-255 UINT16: 16 bit integer; range 0-65535 MAC_ADDR: MAC address in dotted decimal format, for example 00:11:22:33:44:55 MAC_MASK: MAC address mask in MAC address format, for instance, FF:FF:FF:FC:00:00 IP_ADDR: IP address in dotted decimal format, for example 10.1.2.3 IP_MASK: IP address mask in either dotted decimal format (255.255.248.0) or CIDR mask (0-32) IPV6_ADDR: IPv6 address in numbers format, for example FFFF::1 IPV6_MASK: IPv6 mask in numbers format (FFFF:FFFF:FC00::) or CIDR mask (0-128) STRING: A string BOOLEAN: 'true', 'yes', '1' or 'false', 'no', '0' IPSETFLAGS: The source and destination flags of the ipset described by up to 6 'src' or 'dst' elements selecting features from either the source or destination part of the packet header; example: src,src,dst. The number of 'selectors' to provide here depends on the type of ipset that is referenced Every attribute except for those of type IP_MASK or IPV6_MASK can be negated using the match attribute with value no . Multiple negated attributes may be grouped together. The following XML fragment shows such an example using abstract attributes. A rule is evaluated logically within the boundaries of the given protocol attributes. Thus, if a single attribute's value does not match the one given in the rule, the whole rule will be skipped during the evaluation process. Therefore, in the above example incoming traffic will only be dropped if: the protocol property attribute1 does not match value1 and the protocol property attribute2 does not match value2 and the protocol property attribute3 matches value3 . 17.14.10.1. MAC (Ethernet) Protocol ID: mac Rules of this type should go into the root chain. Table 17.3. MAC protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination protocolid UINT16 (0x600-0xffff), STRING Layer 3 protocol ID.
Valid strings include [arp, rarp, ipv4, ipv6] comment STRING text string up to 256 characters The filter can be written as such: 17.14.10.2. VLAN (802.1Q) Protocol ID: vlan Rules of this type should go either into the root or vlan chain. Table 17.4. VLAN protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination vlan-id UINT16 (0x0-0xfff, 0 - 4095) VLAN ID encap-protocol UINT16 (0x03c-0xfff), String Encapsulated layer 3 protocol ID, valid strings are arp, ipv4, ipv6 comment STRING text string up to 256 characters 17.14.10.3. STP (Spanning Tree Protocol) Protocol ID: stp Rules of this type should go either into the root or stp chain. Table 17.5. STP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender type UINT8 Bridge Protocol Data Unit (BPDU) type flags UINT8 BPDU flags root-priority UINT16 Root priority range start root-priority-hi UINT16 (0x0-0xfff, 0 - 4095) Root priority range end root-address MAC_ADDRESS root MAC Address root-address-mask MAC_MASK root MAC Address mask root-cost UINT32 Root path cost (range start) root-cost-hi UINT32 Root path cost range end sender-priority-hi UINT16 Sender priority range end sender-address MAC_ADDRESS BPDU sender MAC address sender-address-mask MAC_MASK BPDU sender MAC address mask port UINT16 Port identifier (range start) port_hi UINT16 Port identifier range end msg-age UINT16 Message age timer (range start) msg-age-hi UINT16 Message age timer range end max-age-hi UINT16 Maximum age time range end hello-time UINT16 Hello time timer (range start) hello-time-hi UINT16 Hello time timer range end forward-delay UINT16 Forward delay (range start) forward-delay-hi UINT16 Forward delay range end comment STRING text string up to 256 characters 17.14.10.4. ARP/RARP Protocol ID: arp or rarp Rules of this type should either go into the root or arp/rarp chain. Table 17.6. ARP and RARP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination hwtype UINT16 Hardware type protocoltype UINT16 Protocol type opcode UINT16, STRING Opcode valid strings are: Request, Reply, Request_Reverse, Reply_Reverse, DRARP_Request, DRARP_Reply, DRARP_Error, InARP_Request, ARP_NAK arpsrcmacaddr MAC_ADDR Source MAC address in ARP/RARP packet arpdstmacaddr MAC_ADDR Destination MAC address in ARP/RARP packet arpsrcipaddr IP_ADDR Source IP address in ARP/RARP packet arpdstipaddr IP_ADDR Destination IP address in ARP/RARP packet gratuitous BOOLEAN Boolean indicating whether to check for a gratuitous ARP packet comment STRING text string up to 256 characters 17.14.10.5. IPv4 Protocol ID: ip Rules of this type should either go into the root or ipv4 chain. Table 17.7.
IPv4 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmp, igmp, sctp srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.6. IPv6 Protocol ID: ipv6 Rules of this type should either go into the root or ipv6 chain. Table 17.8. IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to MAC address of sender dstmacaddr MAC_ADDR MAC address of destination dstmacmask MAC_MASK Mask applied to MAC address of destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address protocol UINT8, STRING Layer 4 protocol identifier. Valid strings for protocol are: tcp, udp, udplite, esp, ah, icmpv6, sctp srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters 17.14.10.7. TCP/UDP/SCTP Protocol ID: tcp, udp, sctp The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.9. TCP/UDP/SCTP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR Start of range of source IP address srcipto IP_ADDR End of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports; requires protocol srcportend UINT16 End of range of valid source ports; requires protocol dstportstart UINT16 Start of range of valid destination ports; requires protocol dstportend UINT16 End of range of valid destination ports; requires protocol comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE flags STRING TCP-only: format of mask/flags with mask and flags each being a comma separated list of SYN,ACK,URG,PSH,FIN,RST or NONE or ALL ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.8. ICMP Protocol ID: icmp Note: The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.10.
ICMP protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAC_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UINT16 ICMP type code UINT16 ICMP code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.9. IGMP, ESP, AH, UDPLITE, 'ALL' Protocol ID: igmp, esp, ah, udplite, all The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.11. IGMP, ESP, AH, UDPLITE, 'ALL' Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcmacmask MAC_MASK Mask applied to the MAC address of the sender dstmacaddr MAC_ADDR MAC address of the destination dstmacmask MAC_MASK Mask applied to the MAC address of the destination srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.10. TCP/UDP/SCTP over IPV6 Protocol ID: tcp-ipv6, udp-ipv6, sctp-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.12. TCP, UDP, SCTP over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address srcportstart UINT16 Start of range of valid source ports srcportend UINT16 End of range of valid source ports dstportstart UINT16 Start of range of valid destination ports dstportend UINT16 End of range of valid destination ports comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.11. ICMPv6 Protocol ID: icmpv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.13.
ICMPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address type UINT16 ICMPv6 type code UINT16 ICMPv6 code comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.10.12. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 Protocol ID: igmp-ipv6, esp-ipv6, ah-ipv6, udplite-ipv6, all-ipv6 The chain parameter is ignored for this type of traffic and should either be omitted or set to root. Table 17.14. IGMP, ESP, AH, UDPLITE, 'ALL' over IPv6 protocol types Attribute Name Datatype Definition srcmacaddr MAC_ADDR MAC address of sender srcipaddr IP_ADDR Source IP address srcipmask IP_MASK Mask applied to source IP address dstipaddr IP_ADDR Destination IP address dstipmask IP_MASK Mask applied to destination IP address srcipfrom IP_ADDR start of range of source IP address srcipto IP_ADDR end of range of source IP address dstipfrom IP_ADDR Start of range of destination IP address dstipto IP_ADDR End of range of destination IP address comment STRING text string up to 256 characters state STRING comma separated list of NEW,ESTABLISHED,RELATED,INVALID or NONE ipset STRING The name of an IPSet managed outside of libvirt ipsetflags IPSETFLAGS flags for the IPSet; requires ipset attribute 17.14.11. Advanced Filter Configuration Topics The following sections discuss advanced filter configuration topics. 17.14.11.1. Connection tracking The network filtering subsystem (on Linux) makes use of the connection tracking support of IP tables. This helps in enforcing the direction of the network traffic (state match) as well as counting and limiting the number of simultaneous connections towards a guest virtual machine. As an example, if a guest virtual machine has TCP port 8080 open as a server, clients may connect to the guest virtual machine on port 8080. Connection tracking and enforcement of the direction then prevents the guest virtual machine from initiating a connection from (TCP client) port 8080 back to a remote host physical machine. More importantly, tracking helps to prevent remote attackers from establishing a connection back to a guest virtual machine. For example, if the user inside the guest virtual machine established a connection to port 80 on an attacker site, the attacker will not be able to initiate a connection from TCP port 80 back towards the guest virtual machine. By default the connection state match that enables connection tracking and then enforcement of the direction of traffic is turned on. Example 17.9. XML example for turning off connections to the TCP port The following shows an example XML fragment where this feature has been turned off for incoming connections to TCP port 12345. This now allows incoming traffic to TCP port 12345, but would also enable the initiation from (client) TCP port 12345 within the VM, which may or may not be desirable. 17.14.11.2.
Limiting number of connections To limit the number of connections a guest virtual machine may establish, a rule must be provided that sets a limit of connections for a given type of traffic. This is useful if, for example, a VM is supposed to be allowed to only ping one other IP address at a time and is supposed to have only one active incoming ssh connection at a time. Example 17.10. XML sample file that sets limits to connections The following XML fragment can be used to limit connections Note Limitation rules must be listed in the XML prior to the rules for accepting traffic. According to the XML file in Example 17.10, "XML sample file that sets limits to connections" , an additional rule allowing DNS traffic sent to port 53 to go out of the guest virtual machine has been added to avoid ssh sessions not getting established for reasons related to DNS lookup failures by the ssh daemon. Leaving this rule out may result in the ssh client hanging unexpectedly as it tries to connect. Additional caution should be used in regards to handling timeouts related to tracking of traffic. An ICMP ping that the user may have terminated inside the guest virtual machine may have a long timeout in the host physical machine's connection tracking system and will therefore not allow another ICMP ping to go through. The best solution is to tune the timeout through the host physical machine's sysctl interface with the following command: # echo 3 > /proc/sys/net/netfilter/nf_conntrack_icmp_timeout . This command sets the ICMP connection tracking timeout to 3 seconds. The effect of this is that once one ping is terminated, another one can start after 3 seconds. If for any reason the guest virtual machine has not properly closed its TCP connection, the connection will be held open for a longer period of time, especially if the TCP timeout value was set for a large amount of time on the host physical machine. In addition, any idle connection may result in a timeout in the connection tracking system which can be re-activated once packets are exchanged. However, if the limit is set too low, newly initiated connections may force an idle connection into TCP backoff. Therefore, the limit of connections should be set rather high so that fluctuations in new TCP connections do not cause odd traffic behavior in relation to idle connections. 17.14.11.3. Command-line tools virsh has been extended with life-cycle support for network filters. All commands related to the network filtering subsystem start with the prefix nwfilter . The following commands are available: nwfilter-list : lists UUIDs and names of all network filters nwfilter-define : defines a new network filter or updates an existing one (must supply a name) nwfilter-undefine : deletes a specified network filter (must supply a name). Do not delete a network filter currently in use. nwfilter-dumpxml : displays a specified network filter (must supply a name) nwfilter-edit : edits a specified network filter (must supply a name) 17.14.11.4. Pre-existing network filters The following is a list of example network filters that are automatically installed with libvirt: Table 17.15. Pre-existing network filters Filter Name Description allow-arp Accepts all incoming and outgoing Address Resolution Protocol (ARP) traffic to a guest virtual machine. no-arp-spoofing , no-arp-mac-spoofing , and no-arp-ip-spoofing These filters prevent a guest virtual machine from spoofing ARP traffic.
In addition, they only allow ARP request and reply messages, and enforce that those packets contain: no-arp-spoofing - the MAC and IP addresses of the guest no-arp-mac-spoofing - the MAC address of the guest no-arp-ip-spoofing - the IP address of the guest allow-dhcp Allows a guest virtual machine to request an IP address via DHCP (from any DHCP server). allow-dhcp-server Allows a guest virtual machine to request an IP address from a specified DHCP server. The dotted decimal IP address of the DHCP server must be provided in a reference to this filter. The name of the variable must be DHCPSERVER . allow-ipv4 Accepts all incoming and outgoing IPv4 traffic to a virtual machine. allow-incoming-ipv4 Accepts only incoming IPv4 traffic to a virtual machine. This filter is a part of the clean-traffic filter. no-ip-spoofing Prevents a guest virtual machine from sending IP packets with a source IP address different from the one inside the packet. This filter is a part of the clean-traffic filter. no-ip-multicast Prevents a guest virtual machine from sending IP multicast packets. no-mac-broadcast Prevents outgoing IPv4 traffic to a specified MAC address. This filter is a part of the clean-traffic filter. no-other-l2-traffic Prevents all layer 2 networking traffic except traffic specified by other filters used by the network. This filter is a part of the clean-traffic filter. no-other-rarp-traffic , qemu-announce-self , qemu-announce-self-rarp These filters allow QEMU's self-announce Reverse Address Resolution Protocol (RARP) packets, but prevent all other RARP traffic. All of them are also included in the clean-traffic filter. clean-traffic Prevents MAC, IP and ARP spoofing. This filter references several other filters as building blocks. These filters are only building blocks and require a combination with other filters to provide useful network traffic filtering. The most used one in the above list is the clean-traffic filter. This filter itself can for example be combined with the no-ip-multicast filter to prevent virtual machines from sending IP multicast traffic on top of the prevention of packet spoofing. 17.14.11.5. Writing your own filters Since libvirt only provides a couple of example networking filters, you may consider writing your own. When planning on doing so there are a couple of things you may need to know regarding the network filtering subsystem and how it works internally. You also have to know and understand very well the protocols that you want to filter on, so that no traffic other than what you want can pass and so that the traffic you want to allow does in fact pass. The network filtering subsystem is currently only available on Linux host physical machines and only works for QEMU and KVM types of virtual machines. On Linux, it builds upon the support for ebtables, iptables and ip6tables and makes use of their features. Considering the list found in Section 17.14.10, "Supported Protocols" the following protocols can be implemented using ebtables: mac stp (spanning tree protocol) vlan (802.1Q) arp, rarp ipv4 ipv6 Any protocol that runs over IPv4 is supported using iptables, and those over IPv6 are implemented using ip6tables. Using a Linux host physical machine, all traffic filtering rules created by libvirt's network filtering subsystem first pass through the filtering support implemented by ebtables and only afterwards through iptables or ip6tables filters.
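To see how the rules of a filter tree are actually instantiated on the host physical machine, you can list the low-level firewall rules directly. The following commands are a minimal sketch; the exact chain names that libvirt creates are an assumption here, since they are derived from the name of the tap device (for example vnet0) and can differ between libvirt versions: # ebtables -t nat -L # iptables -L -n # ip6tables -L -n The ebtables output shows the layer 2 rules (mac, stp, vlan, arp, rarp), while the iptables and ip6tables output shows the rules that were generated for the IPv4 and IPv6 protocols.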
If a filter tree has rules with the protocols mac, stp, vlan, arp, rarp, ipv4, or ipv6, the ebtables rules and values listed will automatically be used first. Multiple chains for the same protocol can be created. The name of the chain must have a prefix of one of the previously enumerated protocols. To create an additional chain for handling of ARP traffic, a chain with the name arp-test can, for example, be specified. As an example, it is possible to filter on UDP traffic by source and destination ports using the ip protocol filter and specifying attributes for the protocol, source and destination IP addresses and ports of UDP packets that are to be accepted. This allows early filtering of UDP traffic with ebtables. However, once an IP or IPv6 packet, such as a UDP packet, has passed the ebtables layer and there is at least one rule in a filter tree that instantiates iptables or ip6tables rules, a rule to let the UDP packet pass must also be provided for those filtering layers. This can be achieved with a rule containing an appropriate udp or udp-ipv6 traffic filtering node. Example 17.11. Creating a custom filter Suppose a filter is needed to fulfill the following list of requirements: prevents a VM's interface from MAC, IP and ARP spoofing opens only TCP ports 22 and 80 of a VM's interface allows the VM to send ping traffic from an interface but does not let the VM be pinged on the interface allows the VM to do DNS lookups (UDP towards port 53) The requirement to prevent spoofing is fulfilled by the existing clean-traffic network filter, thus the way to do this is to reference it from a custom filter. To enable traffic for TCP ports 22 and 80, two rules are added to enable this type of traffic. To allow the guest virtual machine to send ping traffic a rule is added for ICMP traffic. For simplicity reasons, general ICMP traffic will be allowed to be initiated from the guest virtual machine, and will not be restricted to ICMP echo request and response messages. All other traffic will be prevented from reaching or being initiated by the guest virtual machine. To do this a rule will be added that drops all other traffic. Assuming the guest virtual machine is called test and the interface to associate our filter with is called eth0 , a filter is created named test-eth0 . The result of these considerations is the following network filter XML: 17.14.11.6. Sample custom filter Although one of the rules in the above XML contains the IP address of the guest virtual machine as either a source or a destination address, the filtering of the traffic works correctly. The reason is that whereas the rule's evaluation occurs internally on a per-interface basis, the rules are additionally evaluated based on which (tap) interface has sent or will receive the packet, rather than what their source or destination IP address may be. Example 17.12. Sample XML for network interface descriptions An XML fragment for a possible network interface description inside the domain XML of the test guest virtual machine could then look like this: To more strictly control the ICMP traffic and enforce that only ICMP echo requests can be sent from the guest virtual machine and only ICMP echo responses be received by the guest virtual machine, the above ICMP rule can be replaced with the following two rules: Example 17.13.
Second example custom filter This example demonstrates how to build a filter similar to the one in the example above, but extends the list of requirements with an ftp server located inside the guest virtual machine. The requirements for this filter are: prevents a guest virtual machine's interface from MAC, IP, and ARP spoofing opens only TCP ports 22 and 80 in a guest virtual machine's interface allows the guest virtual machine to send ping traffic from an interface but does not allow the guest virtual machine to be pinged on the interface allows the guest virtual machine to do DNS lookups (UDP towards port 53) enables the ftp server (in active mode) so it can run inside the guest virtual machine The additional requirement of allowing an FTP server to be run inside the guest virtual machine maps into the requirement of allowing port 21 to be reachable for FTP control traffic as well as enabling the guest virtual machine to establish an outgoing TCP connection originating from the guest virtual machine's TCP port 20 back to the FTP client (FTP active mode). There are several ways in which this filter can be written, and two possible solutions are included in this example. The first solution makes use of the state attribute of the TCP protocol that provides a hook into the connection tracking framework of the Linux host physical machine. For the guest virtual machine-initiated FTP data connection (FTP active mode) the RELATED state is used to enable detection that the guest virtual machine-initiated FTP data connection is a consequence of ( or 'has a relationship with' ) an existing FTP control connection, thereby allowing it to pass packets through the firewall. The RELATED state, however, is only valid for the very first packet of the outgoing TCP connection for the FTP data path. Afterwards, the state is ESTABLISHED, which then applies equally to the incoming and outgoing direction. All this is related to the FTP data traffic originating from TCP port 20 of the guest virtual machine. This then leads to the following solution: Before trying out a filter using the RELATED state, you have to make sure that the appropriate connection tracking module has been loaded into the host physical machine's kernel. Depending on the version of the kernel, you must run either one of the following two commands before the FTP connection with the guest virtual machine is established: modprobe nf_conntrack_ftp - where available OR modprobe ip_conntrack_ftp if above is not available If protocols other than FTP are used in conjunction with the RELATED state, their corresponding module must be loaded. Modules are available for the protocols: ftp, tftp, irc, sip, sctp, and amanda. The second solution makes use of the state flags of connections more than the first solution did. This solution takes advantage of the fact that the NEW state of a connection is valid when the very first packet of a traffic flow is detected. Subsequently, if the very first packet of a flow is accepted, the flow becomes a connection and thus enters into the ESTABLISHED state. Therefore, a general rule can be written for allowing packets of ESTABLISHED connections to reach the guest virtual machine or be sent by the guest virtual machine. This is done by writing specific rules for the very first packets identified by the NEW state, which dictate the ports on which the data is acceptable. All packets meant for ports that are not explicitly accepted are dropped, thus not reaching an ESTABLISHED state. Any subsequent packets sent from that port are dropped as well.
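Before testing the first solution, which relies on the RELATED state, you can load and verify the FTP connection tracking module on the host physical machine with the following minimal sketch; it simply combines the two module names given above, and which of them applies depends on the kernel version: # modprobe nf_conntrack_ftp || modprobe ip_conntrack_ftp # lsmod | grep conntrack_ftp If lsmod lists the module, guest virtual machine-initiated FTP data connections can be matched by the RELATED state as described above.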
17.14.12. Limitations The following is a list of the currently known limitations of the network filtering subsystem. VM migration is only supported if the whole filter tree that is referenced by a guest virtual machine's top level filter is also available on the target host physical machine. The network filter clean-traffic, for example, should be available on all libvirt installations and thus enable migration of guest virtual machines that reference this filter. To ensure that version compatibility is not a problem, make sure you are using the most current version of libvirt by updating the package regularly. Migration must occur between libvirt installations of version 0.8.1 or later in order not to lose the network traffic filters associated with an interface. VLAN (802.1Q) packets, if sent by a guest virtual machine, cannot be filtered with rules for protocol IDs arp, rarp, ipv4 and ipv6. They can only be filtered with the protocol IDs mac and vlan. Therefore, the example filter clean-traffic Example 17.1, "An example of network filtering" will not work as expected. | [
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'/> </interface> </devices>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> </filterref> </interface> </devices>",
"<filter name='no-arp-spoofing' chain='arp' priority='-500'> <uuid>f88f1932-debf-4aa1-9fbe-f10d3aa4bc95</uuid> <rule action='drop' direction='out' priority='300'> <mac match='no' srcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='350'> <arp match='no' arpsrcmacaddr='USDMAC'/> </rule> <rule action='drop' direction='out' priority='400'> <arp match='no' arpsrcipaddr='USDIP'/> </rule> <rule action='drop' direction='in' priority='450'> <arp opcode='Reply'/> <arp match='no' arpdstmacaddr='USDMAC'/> </rule> <rule action='drop' direction='in' priority='500'> <arp match='no' arpdstipaddr='USDIP'/> </rule> <rule action='accept' direction='inout' priority='600'> <arp opcode='Request'/> </rule> <rule action='accept' direction='inout' priority='650'> <arp opcode='Reply'/> </rule> <rule action='drop' direction='inout' priority='1000'/> </filter>",
"<devices> <interface type='bridge'> <mac address='00:16:3e:5d:c7:9e'/> <filterref filter='clean-traffic'> <parameter name='IP' value='10.0.0.1'/> <parameter name='IP' value='10.0.0.2'/> <parameter name='IP' value='10.0.0.3'/> </filterref> </interface> </devices>",
"<rule action='accept' direction='in' priority='500'> <tcp srpipaddr='USDIP'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <udp dstportstart='USDDSTPORTS[1]'/> </rule>",
"<rule action='accept' direction='in' priority='500'> <ip srcipaddr='USDSRCIPADDRESSES[@1]' dstportstart='USDDSTPORTS[@2]'/> </rule>",
"SRCIPADDRESSES = [ 10.0.0.1, 11.1.2.3 ] DSTPORTS = [ 80, 8080 ]",
"<interface type='bridge'> <source bridge='virbr0'/> <filterref filter='clean-traffic'> <parameter name='CTRL_IP_LEARNING' value='dhcp'/> </filterref> </interface>",
"<filter name='clean-traffic'> <uuid>6ef53069-ba34-94a0-d33d-17751b9b8cb1</uuid> <filterref filter='no-mac-spoofing'/> <filterref filter='no-ip-spoofing'/> <filterref filter='allow-incoming-ipv4'/> <filterref filter='no-arp-spoofing'/> <filterref filter='no-other-l2-traffic'/> <filterref filter='qemu-announce-self'/> </filter>",
"<filter name='no-ip-spoofing' chain='ipv4'> <uuid>fce8ae33-e69e-83bf-262e-30786c1f8072</uuid> <rule action='drop' direction='out' priority='500'> <ip match='no' srcipaddr='USDIP'/> </rule> </filter>",
"[...] <rule action='drop' direction='in'> <protocol match='no' attribute1='value1' attribute2='value2'/> <protocol attribute3='value3'/> </rule> [...]",
"[...] <mac match='no' srcmacaddr='USDMAC'/> [...]",
"[...] <rule direction='in' action='accept' statematch='false'> <cp dstportstart='12345'/> </rule> [...]",
"[...] <rule action='drop' direction='in' priority='400'> <tcp connlimit-above='1'/> </rule> <rule action='accept' direction='in' priority='500'> <tcp dstportstart='22'/> </rule> <rule action='drop' direction='out' priority='400'> <icmp connlimit-above='1'/> </rule> <rule action='accept' direction='out' priority='500'> <icmp/> </rule> <rule action='accept' direction='out' priority='500'> <udp dstportstart='53'/> </rule> <rule action='drop' direction='inout' priority='1000'> <all/> </rule> [...]",
"<filter name='test-eth0'> <!- - This rule references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP ports 22 (ssh) and 80 (http) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule>> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"[...] <interface type='bridge'> <source bridge='mybridge'/> <filterref filter='test-eth0'/> </interface> [...]",
"<!- - enable outgoing ICMP echo requests- -> <rule action='accept' direction='out'> <icmp type='8'/> </rule>",
"<!- - enable incoming ICMP echo replies- -> <rule action='accept' direction='in'> <icmp type='0'/> </rule>",
"<filter name='test-eth0'> <!- - This filter (eth0) references the clean traffic filter to prevent MAC, IP, and ARP spoofing. By not providing an IP address parameter, libvirt will detect the IP address the guest virtual machine is using. - -> <filterref filter='clean-traffic'/> <!- - This rule enables TCP port 21 (FTP-control) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='21'/> </rule> <!- - This rule enables TCP port 20 for guest virtual machine-initiated FTP data connection related to an existing FTP control connection - -> <rule action='accept' direction='out'> <tcp srcportstart='20' state='RELATED,ESTABLISHED'/> </rule> <!- - This rule accepts all packets from a client on the FTP data connection - -> <rule action='accept' direction='in'> <tcp dstportstart='20' state='ESTABLISHED'/> </rule> <!- - This rule enables TCP port 22 (SSH) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='22'/> </rule> <!- -This rule enables TCP port 80 (HTTP) to be reachable - -> <rule action='accept' direction='in'> <tcp dstportstart='80'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>",
"<filter name='test-eth0'> <!- - This filter references the clean traffic filter to prevent MAC, IP and ARP spoofing. By not providing and IP address parameter, libvirt will detect the IP address the VM is using. - -> <filterref filter='clean-traffic'/> <!- - This rule allows the packets of all previously accepted connections to reach the guest virtual machine - -> <rule action='accept' direction='in'> <all state='ESTABLISHED'/> </rule> <!- - This rule allows the packets of all previously accepted and related connections be sent from the guest virtual machine - -> <rule action='accept' direction='out'> <all state='ESTABLISHED,RELATED'/> </rule> <!- - This rule enables traffic towards port 21 (FTP) and port 22 (SSH)- -> <rule action='accept' direction='in'> <tcp dstportstart='21' dstportend='22' state='NEW'/> </rule> <!- - This rule enables traffic towards port 80 (HTTP) - -> <rule action='accept' direction='in'> <tcp dstportstart='80' state='NEW'/> </rule> <!- - This rule enables general ICMP traffic to be initiated by the guest virtual machine, including ping traffic - -> <rule action='accept' direction='out'> <icmp state='NEW'/> </rule> <!- - This rule enables outgoing DNS lookups using UDP - -> <rule action='accept' direction='out'> <udp dstportstart='53' state='NEW'/> </rule> <!- - This rule drops all other traffic - -> <rule action='drop' direction='inout'> <all/> </rule> </filter>"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-applying_network_filtering |
Chapter 2. Dynamic plugin installation | Chapter 2. Dynamic plugin installation The dynamic plugin support is based on the backend plugin manager package, which is a service that scans a configured root directory ( dynamicPlugins.rootDirectory in the app config) for dynamic plugin packages and loads them dynamically. You can use the dynamic plugins that come preinstalled with Red Hat Developer Hub or install external dynamic plugins from a public NPM registry. 2.1. Viewing installed plugins Using the Dynamic Plugins Info front-end plugin, you can view plugins that are currently installed in your Red Hat Developer Hub application. This plugin is enabled by default. Procedure Open your Developer Hub application and click Administration . Go to the Plugins tab to view a list of installed plugins and related information. 2.2. Preinstalled dynamic plugins Red Hat Developer Hub is preinstalled with a selection of dynamic plugins. The dynamic plugins that require custom configuration are disabled by default. For a complete list of dynamic plugins that are preinstalled in this release of Developer Hub, see the Dynamic plugins support matrix . Upon application startup, for each plugin that is disabled by default, the log of the install-dynamic-plugins init container within the Developer Hub pod displays a message similar to the following: ======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic To enable this plugin, add a package with the same name to the Helm chart and change the value in the disabled field to 'false'. For example: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic disabled: false Note The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file. However, you can use a pluginConfig entry to override the default configuration. 2.2.1. Preinstalled dynamic plugin descriptions and details Important Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. There are 60 plugins available in Red Hat Developer Hub. See the following table for more information: Table 2.1.
Dynamic plugins support matrix Name Plugin Role Version Support Level Path Required Variables Default 3scale @janus-idp/backstage-plugin-3scale-backend Backend 1.5.15 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-3scale-backend-dynamic THREESCALE_BASE_URL THREESCALE_ACCESS_TOKEN Disabled Ansible Automation Platform (AAP) @janus-idp/backstage-plugin-aap-backend Backend 1.6.15 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic AAP_BASE_URL AAP_AUTH_TOKEN Disabled ACR @janus-idp/backstage-plugin-acr Frontend 1.4.13 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-acr Disabled Analytics Provider Segment @janus-idp/backstage-plugin-analytics-provider-segment Frontend 1.4.9 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment SEGMENT_WRITE_KEY SEGMENT_TEST_MODE Enabled Argo CD @janus-idp/backstage-plugin-argocd Frontend 1.2.3 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-argocd Disabled Argo CD @roadiehq/backstage-plugin-argo-cd Frontend 2.6.5 Production ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd Disabled Argo CD @roadiehq/backstage-plugin-argo-cd-backend Backend 3.0.2 Production ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic ARGOCD_USERNAME ARGOCD_PASSWORD ARGOCD_INSTANCE1_URL ARGOCD_AUTH_TOKEN ARGOCD_INSTANCE2_URL ARGOCD_AUTH_TOKEN2 Disabled Argo CD @roadiehq/scaffolder-backend-argocd Backend 1.1.27 Community Support ./dynamic-plugins/dist/roadiehq-scaffolder-backend-argocd-dynamic ARGOCD_USERNAME ARGOCD_PASSWORD ARGOCD_INSTANCE1_URL ARGOCD_AUTH_TOKEN ARGOCD_INSTANCE2_URL ARGOCD_AUTH_TOKEN2 Disabled Azure @backstage/plugin-scaffolder-backend-module-azure Backend 0.1.9 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-azure-dynamic Enabled Azure Devops @backstage/plugin-azure-devops Frontend 0.4.4 Community Support ./dynamic-plugins/dist/backstage-plugin-azure-devops Disabled Azure Devops @backstage/plugin-azure-devops-backend Backend 0.6.5 Community Support ./dynamic-plugins/dist/backstage-plugin-azure-devops-backend-dynamic AZURE_TOKEN AZURE_ORG Disabled Azure Repositories @parfuemerie-douglas/scaffolder-backend-module-azure-repositories Backend 0.2.7 Community Support ./dynamic-plugins/dist/parfuemerie-douglas-scaffolder-backend-module-azure-repositories Disabled Bitbucket Cloud @backstage/plugin-catalog-backend-module-bitbucket-cloud Backend 0.2.4 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-bitbucket-cloud-dynamic BITBUCKET_WORKSPACE Disabled Bitbucket Cloud @backstage/plugin-scaffolder-backend-module-bitbucket-cloud Backend 0.1.7 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-bitbucket-cloud-dynamic Enabled Bitbucket Server @backstage/plugin-catalog-backend-module-bitbucket-server Backend 0.1.31 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-bitbucket-server-dynamic BITBUCKET_HOST Disabled Bitbucket Server @backstage/plugin-scaffolder-backend-module-bitbucket-server Backend 0.1.7 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-bitbucket-server-dynamic Enabled Datadog @roadiehq/backstage-plugin-datadog Frontend 2.2.8 Community Support ./dynamic-plugins/dist/roadiehq-backstage-plugin-datadog Disabled Dynatrace @backstage/plugin-dynatrace Frontend 10.0.4 Community Support ./dynamic-plugins/dist/backstage-plugin-dynatrace Disabled Gerrit 
@backstage/plugin-scaffolder-backend-module-gerrit Backend 0.1.9 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-gerrit-dynamic Enabled GitHub @backstage/plugin-catalog-backend-module-github Backend 0.6.0 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic GITHUB_ORG Disabled GitHub @backstage/plugin-scaffolder-backend-module-github Backend 0.2.7 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-github-dynamic Enabled GitHub Actions @backstage/plugin-github-actions Frontend 0.6.16 Community Support ./dynamic-plugins/dist/backstage-plugin-github-actions Disabled GitHub Insights @roadiehq/backstage-plugin-github-insights Frontend 2.3.29 Community Support ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-insights Disabled GitHub Issues @backstage/plugin-github-issues Frontend 0.4.2 Community Support ./dynamic-plugins/dist/backstage-plugin-github-issues Disabled GitHub Org @backstage/plugin-catalog-backend-module-github-org Backend 0.1.12 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-org-dynamic GITHUB_URL GITHUB_ORG Disabled GitHub Pull Requests @roadiehq/backstage-plugin-github-pull-requests Frontend 2.5.26 Community Support ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-pull-requests Disabled GitLab @immobiliarelabs/backstage-plugin-gitlab Frontend 6.5.0 Community Support ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab Disabled GitLab @backstage/plugin-catalog-backend-module-gitlab Backend 0.3.15 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-dynamic Disabled GitLab @immobiliarelabs/backstage-plugin-gitlab-backend Backend 6.5.0 Community Support ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab-backend-dynamic GITLAB_HOST GITLAB_TOKEN Disabled GitLab @backstage/plugin-scaffolder-backend-module-gitlab Backend 0.3.3 Community Support ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-gitlab-dynamic Enabled GitLab Org @backstage/plugin-catalog-backend-module-gitlab-org Backend 0.3.10 Community Support ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-org-dynamic Disabled Http Request @roadiehq/scaffolder-backend-module-http-request Backend 4.3.2 Community Support ./dynamic-plugins/dist/roadiehq-scaffolder-backend-module-http-request-dynamic Enabled Jenkins @backstage/plugin-jenkins Frontend 0.9.10 Community Support ./dynamic-plugins/dist/backstage-plugin-jenkins Disabled Jenkins @backstage/plugin-jenkins-backend Backend 0.4.5 Community Support ./dynamic-plugins/dist/backstage-plugin-jenkins-backend-dynamic JENKINS_URL JENKINS_USERNAME JENKINS_TOKEN Disabled JFrog Artifactory @janus-idp/backstage-plugin-jfrog-artifactory Frontend 1.4.11 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-jfrog-artifactory Disabled Jira @roadiehq/backstage-plugin-jira Frontend 2.5.8 Community Support ./dynamic-plugins/dist/roadiehq-backstage-plugin-jira Disabled Keycloak @janus-idp/backstage-plugin-keycloak-backend Backend 1.9.12 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic KEYCLOAK_BASE_URL KEYCLOAK_LOGIN_REALM KEYCLOAK_REALM KEYCLOAK_CLIENT_ID KEYCLOAK_CLIENT_SECRET Disabled Kubernetes @backstage/plugin-kubernetes Frontend 0.11.9 Community Support ./dynamic-plugins/dist/backstage-plugin-kubernetes Disabled Kubernetes @backstage/plugin-kubernetes-backend Backend 0.17.0 Production 
./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic K8S_CLUSTER_NAME K8S_CLUSTER_URL K8S_CLUSTER_TOKEN Disabled Lighthouse @backstage/plugin-lighthouse Frontend 0.4.20 Community Support ./dynamic-plugins/dist/backstage-plugin-lighthouse Disabled Nexus Repository Manager @janus-idp/backstage-plugin-nexus-repository-manager Frontend 1.6.10 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager Disabled OCM @janus-idp/backstage-plugin-ocm Frontend 4.1.8 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-ocm Disabled OCM @janus-idp/backstage-plugin-ocm-backend Backend 4.0.9 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-ocm-backend-dynamic OCM_HUB_NAME OCM_HUB_URL moc_infra_token Disabled PagerDuty @pagerduty/backstage-plugin Frontend 0.12.0 Community Support ./dynamic-plugins/dist/pagerduty-backstage-plugin Disabled Quay @janus-idp/backstage-plugin-quay Frontend 1.7.8 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-quay Disabled Quay @janus-idp/backstage-scaffolder-backend-module-quay Backend 1.4.12 Production ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-quay-dynamic Enabled RBAC @janus-idp/backstage-plugin-rbac Frontend 1.24.1 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-rbac Disabled Regex @janus-idp/backstage-scaffolder-backend-module-regex Backend 1.4.12 Production ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-regex-dynamic Enabled Scaffolder Relation Processor @janus-idp/backstage-plugin-catalog-backend-module-scaffolder-relation-processor Backend 1.0.3 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-plugin-catalog-backend-module-scaffolder-relation-processor-dynamic Enabled Security Insights @roadiehq/backstage-plugin-security-insights Frontend 2.3.17 Community Support ./dynamic-plugins/dist/roadiehq-backstage-plugin-security-insights Disabled ServiceNow @janus-idp/backstage-scaffolder-backend-module-servicenow Backend 1.4.14 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic SERVICENOW_BASE_URL SERVICENOW_USERNAME SERVICENOW_PASSWORD Disabled SonarQube @backstage/plugin-sonarqube Frontend 0.7.17 Community Support ./dynamic-plugins/dist/backstage-plugin-sonarqube Disabled SonarQube @backstage/plugin-sonarqube-backend Backend 0.2.20 Community Support ./dynamic-plugins/dist/backstage-plugin-sonarqube-backend-dynamic SONARQUBE_URL SONARQUBE_TOKEN Disabled SonarQube @janus-idp/backstage-scaffolder-backend-module-sonarqube Backend 1.4.12 Red Hat Tech Preview ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-sonarqube-dynamic Disabled TechDocs @backstage/plugin-techdocs Frontend 1.10.4 Production ./dynamic-plugins/dist/backstage-plugin-techdocs Enabled TechDocs @backstage/plugin-techdocs-backend Backend 1.10.4 Production ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic Enabled Tech Radar @backstage/plugin-tech-radar Frontend 0.7.4 Community Support ./dynamic-plugins/dist/backstage-plugin-tech-radar Disabled Tekton @janus-idp/backstage-plugin-tekton Frontend 3.7.7 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton Disabled Topology @janus-idp/backstage-plugin-topology Frontend 1.21.10 Production ./dynamic-plugins/dist/janus-idp-backstage-plugin-topology Disabled Utils @roadiehq/scaffolder-backend-module-utils Backend 1.15.3 Community Support ./dynamic-plugins/dist/roadiehq-scaffolder-backend-module-utils-dynamic 
Enabled Note To configure Keycloak, see Installation and configuration of Keycloak . To configure Techdocs, see reference documentation . After experimenting with basic setup, use CI/CD to generate docs and an external cloud storage when deploying TechDocs for production use-case. See also this recommended deployment approach . 2.3. Configuring dynamic plugins with the Red Hat Developer Hub Operator You can store the configuration for dynamic plugins in a ConfigMap object that your Backstage custom resource (CR) can reference. Note If the pluginConfig field references environment variables, you must define the variables in your secrets-rhdh secret. Procedure From the OpenShift Container Platform web console, select the ConfigMaps tab. Click Create ConfigMap . From the Create ConfigMap page, select the YAML view option in Configure via and edit the file, if needed. Example ConfigMap object using the GitHub dynamic plugin kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: "USD{GITHUB_ORG}" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 } Click Create . Go to the Topology view. Click on the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance. Add the dynamicPluginsConfigMapName field to your Backstage CR. For example: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: my-rhdh spec: application: # ... dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Click Save . Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start. Click the Open URL icon to start using the Red Hat Developer Hub platform with the new configuration changes. Verification Ensure that the dynamic plugins configuration has been loaded, by appending /api/dynamic-plugins-info/loaded-plugins to your Red Hat Developer Hub root URL and checking the list of plugins: Example list of plugins [ { "name": "backstage-plugin-catalog-backend-module-github-dynamic", "version": "0.5.2", "platform": "node", "role": "backend-plugin-module" }, { "name": "backstage-plugin-techdocs", "version": "1.10.0", "role": "frontend-plugin", "platform": "web" }, { "name": "backstage-plugin-techdocs-backend-dynamic", "version": "1.9.5", "platform": "node", "role": "backend-plugin" }, ] 2.4. Installation of dynamic plugins using the Helm chart You can deploy a Developer Hub instance using a Helm chart, which is a flexible installation method. With the Helm chart, you can sideload dynamic plugins into your Developer Hub instance without having to recompile your code or rebuild the container. To install dynamic plugins in Developer Hub using Helm, add the following global.dynamic parameters in your Helm chart: plugins : the dynamic plugins list intended for installation. By default, the list is empty. You can populate the plugins list with the following fields: package : a package specification for the dynamic plugin package that you want to install. You can use a package for either a local or an external dynamic plugin installation. For a local installation, use a path to the local folder containing the dynamic plugin. For an external installation, use a package specification from a public NPM repository. 
integrity (required for external packages): an integrity checksum in the form of <alg>-<digest> specific to the package. Supported algorithms include sha256 , sha384 and sha512 . pluginConfig : an optional plugin-specific app-config YAML fragment. See plugin configuration for more information. disabled : disables the dynamic plugin if set to true . Default: false . includes : a list of YAML files utilizing the same syntax. Note The plugins list in the includes file is merged with the plugins list in the main Helm values. If a plugin package is mentioned in both plugins lists, the plugins fields in the main Helm values override the plugins fields in the includes file. The default configuration includes the dynamic-plugins.default.yaml file, which contains all of the dynamic plugins preinstalled in Developer Hub, whether enabled or disabled by default. 2.4.1. Obtaining the integrity checksum To obtain the integrity checksum, enter the following command: 2.4.2. Example Helm chart configurations for dynamic plugin installations The following examples demonstrate how to configure the Helm chart for specific types of dynamic plugin installations. Configuring a local plugin and an external plugin when the external plugin requires a specific app-config global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig: ... Disabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true Enabling a plugin from an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false Enabling a plugin that is disabled in an included file global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false 2.4.3. Installing external dynamic plugins using a Helm chart The NPM registry contains external dynamic plugins that you can use for demonstration purposes. 
For example, the following community plugins are available in the janus-idp organization in the NPMJS repository: Notifications (frontend and backend) Kubernetes actions (scaffolder actions) To install the Notifications and Kubernetes actions plugins, include them in the Helm chart values in the global.dynamic.plugins list as shown in the following example: global: dynamic: plugins: - package: '@janus-idp/[email protected]' # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==' - package: '@janus-idp/[email protected] ' # https://registry.npmjs.org/@janus-idp/plugin-notifications integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==' pluginConfig: dynamicPlugins: frontend: janus-idp.backstage-plugin-notifications: appIcons: - name: notificationsIcon module: NotificationsPlugin importName: NotificationsActiveIcon dynamicRoutes: - path: /notifications importName: NotificationsPage module: NotificationsPlugin menuItem: icon: notificationsIcon text: Notifications config: pollingIntervalMs: 5000 - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA==' proxy: endpoints: /explore-backend-completed: target: 'http://localhost:7017' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A==' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA==' 2.5. Using a custom NPM registry for dynamic plugin packages Note You can install external plugins in an air-gapped environment by setting up a custom NPM registry. You can configure the NPM registry URL and authentication information for dynamic plugin packages using a Helm chart. For dynamic plugin packages obtained through npm pack , you can use a .npmrc file. Using the Helm chart, add the .npmrc file to the NPM registry by creating a secret named dynamic-plugins-npmrc with the following content: apiVersion: v1 kind: Secret metadata: name: dynamic-plugins-npmrc type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token> ... 2.6. Basic configuration of dynamic plugins Some dynamic plugins require environment variables to be set. If a mandatory environment variable is not set, and the plugin is enabled, then the application might fail at startup. The mandatory environment variables for each plugin are listed in the Dynamic plugins support matrix . Note Zib-bomb detection When installing some dynamic plugin containing large files, if the installation script considers the package archive to be a Zib-Bomb, the installation fails. To increase the maximum permitted size of a file inside a package archive, you can increase the MAX_ENTRY_SIZE environment value of the deployment install-dynamic-plugins initContainer from the default size of 20000000 bytes. 2.7. 
Installing and using Ansible plug-ins for Red Hat Developer Hub Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources. Important The Ansible plug-ins are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. 2.7.1. For administrators To install and configure the Ansible plugins, see Installing Ansible plug-ins for Red Hat Developer Hub . 2.7.2. For users To use the Ansible plugins, see Using Ansible plug-ins for Red Hat Developer Hub . 2.8. Installation and configuration of Keycloak The Keycloak backend plugin, which integrates Keycloak into Developer Hub, has the following capabilities: Synchronization of Keycloak users in a realm. Synchronization of Keycloak groups and their users in a realm. 2.8.1. For administrators 2.8.1.1. Installation The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic disabled: false 2.8.1.2. Basic configuration To enable the Keycloak plugin, you must set the following environment variables: KEYCLOAK_BASE_URL KEYCLOAK_LOGIN_REALM KEYCLOAK_REALM KEYCLOAK_CLIENT_ID KEYCLOAK_CLIENT_SECRET 2.8.1.3. Advanced configuration Schedule configuration You can configure a schedule in the app-config.yaml file, as follows: catalog: providers: keycloakOrg: default: # ... # highlight-add-start schedule: # optional; same options as in TaskScheduleDefinition # supports cron, ISO duration, "human duration" as used in code frequency: { minutes: 1 } # supports ISO duration, "human duration" as used in code timeout: { minutes: 1 } initialDelay: { seconds: 15 } # highlight-add-end Note If you have made any changes to the schedule in the app-config.yaml file, then restart to apply the changes. Keycloak query parameters You can override the default Keycloak query parameters in the app-config.yaml file, as follows: catalog: providers: keycloakOrg: default: # ... # highlight-add-start userQuerySize: 500 # Optional groupQuerySize: 250 # Optional # highlight-add-end Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. Username and password, or client credentials are supported authentication methods. The following table describes the parameters that you can configure to enable the plugin under catalog.providers.keycloakOrg.<ENVIRONMENT_NAME> object in the app-config.yaml file: Name Description Default Value Required baseUrl Location of the Keycloak server, such as https://localhost:8443/auth . Note that the newer versions of Keycloak omit the /auth context path. 
"" Yes realm Realm to synchronize master No loginRealm Realm used to authenticate master No username Username to authenticate "" Yes if using password based authentication password Password to authenticate "" Yes if using password based authentication clientId Client ID to authenticate "" Yes if using client credentials based authentication clientSecret Client Secret to authenticate "" Yes if using client credentials based authentication userQuerySize Number of users to query at a time 100 No groupQuerySize Number of groups to query at a time 100 No When using client credentials, the access type must be set to confidential and service accounts must be enabled. You must also add the following roles from the realm-management client role: query-groups query-users view-users 2.8.1.4. Limitations If you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub: NODE_TLS_REJECT_UNAUTHORIZED=0 Note The solution of setting the environment variable is not recommended. 2.8.2. For users 2.8.2.1. Import of users and groups in Developer Hub using the Keycloak plugin After configuring the plugin successfully, the plugin imports the users and groups each time when started. Note If you set up a schedule, users and groups will also be imported. After the first import is complete, you can select User to list the users from the catalog page: You can see the list of users on the page: When you select a user, you can see the information imported from Keycloak: You can also select a group, view the list, and select or view the information imported from Keycloak for a group: 2.9. Installation and configuration of Nexus Repository Manager The Nexus Repository Manager plugin displays the information about your build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager. Important The Nexus Repository Manager plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. 2.9.1. For administrators 2.9.1.1. Installing and configuring the Nexus Repository Manager plugin Installation The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. 
To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager disabled: false Configuration Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows: proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to "false" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true Optional: Change the base URL of Nexus Repository Manager proxy as follows: nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path Optional: Enable the following experimental annotations: nexusRepositoryManager: experimentalAnnotations: true Annotate your entity using the following annotations: metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`, 2.9.2. For users 2.9.2.1. Using the Nexus Repository Manager plugin in Developer Hub The Nexus Repository Manager is a front-end plugin that enables you to view the information about build artifacts. Prerequisites Your Developer Hub application is installed and running. You have installed the Nexus Repository Manager plugin. For the installation process, see Section 2.9.1.1, "Installing and configuring the Nexus Repository Manager plugin" . Procedure Open your Developer Hub application and select a component from the Catalog page. Go to the BUILD ARTIFACTS tab. The BUILD ARTIFACTS tab contains a list of build artifacts and related information, such as VERSION , REPOSITORY , REPOSITORY TYPE , MANIFEST , MODIFIED , and SIZE . 2.10. Installation and configuration of Tekton You can use the Tekton plugin to visualize the results of CI/CD pipeline runs on your Kubernetes or OpenShift clusters. The plugin allows users to visually see high level status of all associated tasks in the pipeline for their applications. 2.10.1. For administrators 2.10.1.1. Installation Prerequsites You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins. You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount . The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster. Note If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted. To view the pod logs, you have granted permissions for pods/log . You can use the following code to grant the ClusterRole for custom resources and pod logs: kubernetes: ... customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' ... apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - "" resources: - pods/log verbs: - get - list - watch ... - apiGroups: - tekton.dev resources: - pipelineruns - taskruns verbs: - get - list You can use the prepared manifest for a read-only ClusterRole , which provides access for both Kubernetes plugin and Tekton plugin. 
Add the following annotation to the entity's catalog-info.yaml file to identify whether an entity contains the Kubernetes resources: annotations: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace. annotations: ... backstage.io/kubernetes-namespace: <RESOURCE_NS> Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton related features in RHDH. The value of the annotation identifies the name of the RHDH entity: annotations: ... janus-idp.io/tekton : <BACKSTAGE_ENTITY_NAME> Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations. annotations: ... backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end' Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity: labels: ... backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME> Note When you use the label selector, the mentioned labels must be present on the resource. Procedure The Tekton plugin is pre-loaded in RHDH with basic configuration properties. To enable it, set the disabled property to false as follows: global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton disabled: false 2.10.2. For users 2.10.2.1. Using the Tekton plugin in RHDH You can use the Tekton front-end plugin to view PipelineRun resources. Prerequisites You have installed the Red Hat Developer Hub (RHDH). You have installed the Tekton plugin. For the installation process, see Installing and configuring the Tekton plugin . Procedure Open your RHDH application and select a component from the Catalog page. Go to the CI tab. The CI tab displays the list of PipelineRun resources associated with a Kubernetes cluster. The list contains pipeline run details, such as NAME , VULNERABILITIES , STATUS , TASK STATUS , STARTED , and DURATION . Click the expand row button besides PipelineRun name in the list to view the PipelineRun visualization. The pipeline run resource includes tasks to complete. When you hover the mouse pointer on a task card, you can view the steps to complete that particular task. 2.11. Enabling and configuring Argo CD plugin You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps. This plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, container image promoted to environment and deployment history. 2.11.1. For administrators 2.11.1.1. Enabling Argo CD plugin Prerequisites Add Argo CD instance information to your app-config.yaml configmap as shown in the following example: argocd: appLocatorMethods: - type: 'config' instances: - name: argoInstance1 url: https://argoInstance1.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD} - name: argoInstance2 url: https://argoInstance2.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD} Add the following annotation to the entity's catalog-info.yaml file to identify the Argo CD applications. annotations: ... # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app. 
argocd/app-selector: 'USD{ARGOCD_LABEL_SELECTOR}' (Optional) Add the following annotation to the entity's catalog-info.yaml file to switch between Argo CD instances as shown in the following example: annotations: ... # The Argo CD instance name used in `app-config.yaml`. argocd/instance-name: 'USD{ARGOCD_INSTANCE}' Note If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in app-config.yaml . Procedure Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin. global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-argocd disabled: false 2.11.2. For users Prerequisites You have enabled the Argo CD plugin in Red Hat Developer Hub (RHDH). Procedure Select the Catalog tab and choose the component that you want to use. Select the CD tab to view insights into deployments managed by Argo CD. Select an appropriate card to view the deployment details (for example, commit message, author name, and deployment history). Click the link icon to open the deployment details in Argo CD. Select the Overview tab and navigate to the Deployment summary section to review the summary of your application's deployment across namespaces. Additionally, select an appropriate Argo CD app to open the deployment details in Argo CD, or select a commit ID from the Revision column to review the changes in GitLab or GitHub. Additional resources For more information on dynamic plugins, see Dynamic plugin installation . | [
"======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic disabled: false",
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"USD{GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]",
"npm view <package name>@<version> dist.integrity",
"global: dynamic: plugins: - package: <alocal package-spec used by npm pack> - package: <external package-spec used by npm pack> integrity: sha512-<some hash> pluginConfig:",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.default.yaml> disabled: true",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: <some imported plugins listed in dynamic-plugins.custom.yaml> disabled: false",
"global: dynamic: plugins: - package: '@janus-idp/[email protected]' # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ==' - package: '@janus-idp/[email protected] ' # https://registry.npmjs.org/@janus-idp/plugin-notifications integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ==' pluginConfig: dynamicPlugins: frontend: janus-idp.backstage-plugin-notifications: appIcons: - name: notificationsIcon module: NotificationsPlugin importName: NotificationsActiveIcon dynamicRoutes: - path: /notifications importName: NotificationsPage module: NotificationsPlugin menuItem: icon: notificationsIcon text: Notifications config: pollingIntervalMs: 5000 - package: '@janus-idp/[email protected]' # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA==' proxy: endpoints: /explore-backend-completed: target: 'http://localhost:7017' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A==' - package: '@dfatwork-pkgs/[email protected]' # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA=='",
"apiVersion: v1 kind: Secret metadata: name: dynamic-plugins-npmrc type: Opaque stringData: .npmrc: | registry=<registry-url> //<registry-url>:_authToken=<auth-token>",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic disabled: false",
"catalog: providers: keycloakOrg: default: # # highlight-add-start schedule: # optional; same options as in TaskScheduleDefinition # supports cron, ISO duration, \"human duration\" as used in code frequency: { minutes: 1 } # supports ISO duration, \"human duration\" as used in code timeout: { minutes: 1 } initialDelay: { seconds: 15 } # highlight-add-end",
"catalog: providers: keycloakOrg: default: # # highlight-add-start userQuerySize: 500 # Optional groupQuerySize: 250 # Optional # highlight-add-end",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager disabled: false",
"proxy: '/nexus-repository-manager': target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>' headers: X-Requested-With: 'XMLHttpRequest' # Uncomment the following line to access a private Nexus Repository Manager using a token # Authorization: 'Bearer <YOUR TOKEN>' changeOrigin: true # Change to \"false\" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate secure: true",
"nexusRepositoryManager: # default path is `/nexus-repository-manager` proxyPath: /custom-path",
"nexusRepositoryManager: experimentalAnnotations: true",
"metadata: annotations: # insert the chosen annotations here # example nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`,",
"kubernetes: customResources: - group: 'tekton.dev' apiVersion: 'v1' plural: 'pipelineruns' - group: 'tekton.dev' apiVersion: 'v1' apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: backstage-read-only rules: - apiGroups: - \"\" resources: - pods/log verbs: - get - list - watch - apiGroups: - tekton.dev resources: - pipelineruns - taskruns verbs: - get - list",
"annotations: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-namespace: <RESOURCE_NS>",
"annotations: janus-idp.io/tekton : <BACKSTAGE_ENTITY_NAME>",
"annotations: backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'",
"labels: backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton disabled: false",
"argocd: appLocatorMethods: - type: 'config' instances: - name: argoInstance1 url: https://argoInstance1.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD} - name: argoInstance2 url: https://argoInstance2.com username: USD{ARGOCD_USERNAME} password: USD{ARGOCD_PASSWORD}",
"annotations: # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app. argocd/app-selector: 'USD{ARGOCD_LABEL_SELECTOR}'",
"annotations: # The Argo CD instance name used in `app-config.yaml`. argocd/instance-name: 'USD{ARGOCD_INSTANCE}'",
"global: dynamic: includes: - dynamic-plugins.default.yaml plugins: - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic disabled: false - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-argocd disabled: false"
] | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/configuring_plugins_in_red_hat_developer_hub/rhdh-installing-dynamic-plugins |
25.3.5. Log Rotation | 25.3.5. Log Rotation The following is a sample /etc/logrotate.conf configuration file: All of the lines in the sample configuration file define global options that apply to every log file. In our example, log files are rotated weekly, rotated log files are kept for four weeks, and all rotated log files are compressed by gzip into the .gz format. Any lines that begin with a hash sign (#) are comments and are not processed. You may define configuration options for a specific log file and place it under the global options. However, it is advisable to create a separate configuration file for any specific log file in the /etc/logrotate.d/ directory and define any configuration options there. The following is an example of a configuration file placed in the /etc/logrotate.d/ directory: The configuration options in this file are specific for the /var/log/messages log file only. The settings specified here override the global settings where possible. Thus the rotated /var/log/messages log file will be kept for five weeks instead of four weeks as was defined in the global options. The following is a list of some of the directives you can specify in your logrotate configuration file: weekly - Specifies the rotation of log files to be done weekly. Similar directives include: daily monthly yearly compress - Enables compression of rotated log files. Similar directives include: nocompress compresscmd - Specifies the command to be used for compressing. uncompresscmd compressext - Specifies what extension is to be used for compressing. compressoptions - Specifies any options to be passed to the compression program used. delaycompress - Postpones the compression of log files to the rotation of log files. rotate INTEGER - Specifies the number of rotations a log file undergoes before it is removed or mailed to a specific address. If the value 0 is specified, old log files are removed instead of rotated. mail ADDRESS - This option enables mailing of log files that have been rotated as many times as is defined by the rotate directive to the specified address. Similar directives include: nomail mailfirst - Specifies that the just-rotated log files are to be mailed, instead of the about-to-expire log files. maillast - Specifies that the about-to-expire log files are to be mailed, instead of the just-rotated log files. This is the default option when mail is enabled. For the full list of directives and various configuration options, see the logrotate(5) manual page. | [
"rotate log files weekly weekly keep 4 weeks worth of backlogs rotate 4 uncomment this if you want your log files compressed compress",
"/var/log/messages { rotate 5 weekly postrotate /usr/bin/killall -HUP syslogd endscript }"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-log_rotation |
function::backtrace | function::backtrace Name function::backtrace - Hex backtrace of current kernel stack Synopsis Arguments None Description This function returns a string of hex addresses that are a backtrace of the kernel stack. Output may be truncated as per maximum string length (MAXSTRINGLEN). See ubacktrace for user-space backtrace. | [
"backtrace:string()"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-backtrace |
Managing IdM users, groups, hosts, and access control rules | Managing IdM users, groups, hosts, and access control rules Red Hat Enterprise Linux 8 Configuring users and hosts, managing them in groups, and controlling access with host-based and role-based access control rules Red Hat Customer Content Services | [
"ipa help [TOPIC | COMMAND | topics | commands]",
"ipa help topics",
"ipa help user",
"ipa help user | less",
"ipa help commands",
"ipa help user-add",
"ipa user-add",
"---------------------- Added user \"euser\" ---------------------- User login: euser First name: Example Last name: User Full name: Example User Display name: Example User Initials: EU Home directory: /home/euser GECOS: Example User Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-add --first=Example --last=User --password",
"ipa user-mod euser --password",
"---------------------- Modified user \"euser\" ---------------------- User login: euser First name: Example Last name: User Home directory: /home/euser Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 427200006 GID: 427200006 Password: True Member of groups: ipausers Kerberos keys available: True",
"ipa permission-add --right=read --permissions=write --permissions=delete",
"ipa permission-add --right={read,write,delete}",
"ipa permission-mod --right=read --right=write --right=delete",
"ipa permission-mod --right={read,write,delete}",
"ipa permission-mod --right=read --right=write",
"ipa permission-mod --right={read,write}",
"ipa certprofile-show certificate_profile --out= exported\\*profile.cfg",
"ipa user-add user_login --first=first_name --last=last_name --email=email_address",
"[a-zA-Z0-9_.][a-zA-Z0-9_.-]{0,252}[a-zA-Z0-9_.USD-]?",
"ipa config-mod --maxusername=64 Maximum username length: 64",
"ipa help user-add",
"ipa user-find",
"ipa stageuser-activate user_login ------------------------- Stage user user_login activated -------------------------",
"ipa user-find",
"ipa user-del --preserve user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-del --continue user1 user2 user3",
"ipa user-del user_login -------------------- Deleted user \"user_login\" --------------------",
"ipa user-undel user_login ------------------------------ Undeleted user account \"user_login\" ------------------------------",
"ipa user-stage user_login ------------------------------ Staged user account \"user_login\" ------------------------------",
"ipa user-find",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_user ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user first: Alice last: Acme uid: 1000111 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" update_password: on_create",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-IdM-user.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa user-show idm_user User login: idm_user First name: Alice Last name: Acme .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create user idm_users ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 first: Alice last: Acme uid: 10001 gid: 10011 phone: \"+555123457\" email: [email protected] passwordexpiration: \"2023-01-19 23:59:59\" password: \"Password123\" - name: idm_user_2 first: Bob last: Acme uid: 100011 gid: 10011 - name: idm_user_3 first: Eve last: Acme uid: 1000111 gid: 10011",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure users' presence hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Include users_present.json include_vars: file: users_present.json - name: Users present ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: \"{{ users }}\"",
"{ \"users\": [ { \"name\": \"idm_user_1\", \"first\": \"First 1\", \"last\": \"Last 1\", \"password\": \"Password123\" }, { \"name\": \"idm_user_2\", \"first\": \"First 2\", \"last\": \"Last 2\" }, { \"name\": \"idm_user_3\", \"first\": \"First 3\", \"last\": \"Last 3\" } ] }",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /ensure-users-present-jsonfile.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 User login: idm_user_1 First name: Alice Last name: Acme Password: True .",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle users hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Delete users idm_user_1, idm_user_2, idm_user_3 ipauser: ipaadmin_password: \"{{ ipaadmin_password }}\" users: - name: idm_user_1 - name: idm_user_2 - name: idm_user_3 state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory /inventory.file path_to_playbooks_directory /delete-users.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa user-show idm_user_1 ipa: ERROR: idm_user_1: user not found",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --userobjectclasses={person,organizationalperson,inetorgperson,inetuser,posixaccount,krbprincipalaux,krbticketpolicyaux,ipaobject,ipasshuser,mepOriginEntry,top,mailRecipient}",
"set -o braceexpand",
"[bjensen@server ~]USD ipa config-mod --groupobjectclasses={top,groupofnames,nestedgroup,ipausergroup,ipaobject,ipasshuser,employeegroup}",
"[bjensen@server ~]USD ipa config-show --all dn: cn=ipaConfig,cn=etc,dc=example,dc=com Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers Default e-mail domain: example.com Search time limit: 2 Search size limit: 100 User search fields: uid,givenname,sn,telephonenumber,ou,title Group search fields: cn,description Enable migration mode: FALSE Certificate Subject base: O=EXAMPLE.COM Default group objectclasses: top, groupofnames, nestedgroup, ipausergroup, ipaobject Default user objectclasses: top, person, organizationalperson, inetorgperson, inetuser, posixaccount, krbprincipalaux, krbticketpolicyaux, ipaobject, ipasshuser Password Expiration Notification (days): 4 Password plugin features: AllowNThash SELinux user map order: guest_u:s0USDxguest_u:s0USDuser_u:s0USDstaff_u:s0-s0:c0.c1023USDunconfined_u:s0-s0:c0.c1023 Default SELinux user: unconfined_u:s0-s0:c0.c1023 Default PAC types: MS-PAC, nfs:NONE cn: ipaConfig objectclass: nsContainer, top, ipaGuiConfig, ipaConfigObject",
"[bjensen@server ~]USD ipa config-mod --defaultshell \"/bin/bash\"",
"pwdhash -D /etc/dirsrv/slapd-IDM-EXAMPLE-COM password {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl stop",
"nsslapd-rootpw: {PBKDF2_SHA256}AAAgABU0bKhyjY53NcxY33ueoPjOUWtl4iyYN5uW",
"ipactl start",
"ipa user-mod idm_user --password Password: Enter Password again to verify: -------------------- Modified user \"idm_user\" --------------------",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password:",
"dn: cn=ipa_pwd_extop,cn=plugins,cn=config",
"changetype: modify",
"add: passSyncManagersDNs",
"passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ldapmodify -x -D \"cn=Directory Manager\" -W -h server.idm.example.com -p 389 Enter LDAP Password: dn: cn=ipa_pwd_extop,cn=plugins,cn=config changetype: modify add: passSyncManagersDNs passSyncManagersDNs: uid=admin,cn=users,cn=accounts,dc=example,dc=com",
"ipa user-status example_user ----------------------- Account disabled: False ----------------------- Server: idm.example.com Failed logins: 8 Last successful authentication: N/A Last failed authentication: 20220229080317Z Time now: 2022-02-29T08:04:46Z ---------------------------- Number of entries returned 1 ----------------------------",
"ipa user-unlock idm_user ----------------------- Unlocked account \"idm_user\" -----------------------",
"ipa config-show | grep \"Password plugin features\" Password plugin features: AllowNThash , KDC:Disable Last Success",
"ipa config-mod --ipaconfigstring='AllowNThash'",
"ipactl restart",
"[ipaserver] server.idm.example.com",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of pwpolicy for group ops ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops minlife: 7 maxlife: 49 history: 5 priority: 1 lockouttime: 300 minlength: 8 minclasses: 4 maxfail: 3 failinterval: 5",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/new_pwpolicy_present.yml",
"ipa pwpolicy-add Group: group_name Priority: priority_level",
"ipa pwpolicy-find",
"ipa pwpolicy-mod --usercheck=True managers",
"ipa pwpolicy-mod --maxrepeat=2 managers",
"ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------",
"kinit test_user",
"Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure presence of usercheck and maxrepeat pwpolicy for group managers ipapwpolicy: ipaadmin_password: \"{{ ipaadmin_password }}\" name: managers usercheck: True maxrepeat: 2 maxsequence: 3",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory_/manager_pwpolicy_present.yml",
"ipa user-add test_user First name: test Last name: user ---------------------------- Added user \"test_user\" ----------------------------",
"kinit test_user",
"Password expired. You must change it now. Enter new password: Enter it again: Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again.",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"Password change rejected: Password not changed. Unspecified password quality failure while trying to change password. Please try again. Enter new password: Enter it again:",
"klist Ticket cache: KCM:0:33945 Default principal: [email protected] Valid starting Expires Service principal 07/07/2021 12:44:44 07/08/2021 12:44:44 [email protected]@IDM.EXAMPLE.COM",
"yum install ipa-client-epn",
"vi /etc/ipa/epn.conf",
"notify_ttls = 28, 14, 7, 3, 1",
"smtp_server = localhost smtp_port = 25",
"mail_from = [email protected]",
"smtp_client_cert = /etc/pki/tls/certs/client.pem",
"smtp_client_key = /etc/pki/tls/certs/client.key",
"smtp_client_key_pass = Secret123!",
"ipa-epn --dry-run [ { \"uid\": \"user5\", \"cn\": \"user 5\", \"krbpasswordexpiration\": \"2020-04-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] [ { \"uid\": \"user6\", \"cn\": \"user 6\", \"krbpasswordexpiration\": \"2020-12-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] The IPA-EPN command was successful",
"ipa-epn [ { \"uid\": \"user5\", \"cn\": \"user 5\", \"krbpasswordexpiration\": \"2020-10-01 15:51:53\", \"mail\": \"['[email protected]']\" } ] [ { \"uid\": \"user6\", \"cn\": \"user 6\", \"krbpasswordexpiration\": \"2020-12-17 15:51:53\", \"mail\": \"['[email protected]']\" } ] The IPA-EPN command was successful",
"ipa-epn --from-nbdays 8 --to-nbdays 12",
"systemctl start ipa-epn.timer",
"vi /etc/ipa/epn/expire_msg.template",
"Hi {{ fullname }}, Your password will expire on {{ expiration }}. Please change it as soon as possible.",
"kinit admin",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add idm_user_reboot --------------------------------- Added Sudo Rule \"idm_user_reboot\" --------------------------------- Rule name: idm_user_reboot Enabled: TRUE",
"ipa sudorule-add-allow-command idm_user_reboot --sudocmds '/usr/sbin/reboot' Rule name: idm_user_reboot Enabled: TRUE Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host idm_user_reboot --hosts idmclient.idm.example.com Rule name: idm_user_reboot Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user idm_user_reboot --users idm_user Rule name: idm_user_reboot Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-mod idm_user_reboot --setattr sudonotbefore=20251231123400Z",
"ipa sudorule-mod idm_user_reboot --setattr sudonotafter=20261231123400Z",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idm_user on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for idm_user:",
"ipa group-add --desc='AD users external map' ad_users_external --external ------------------------------- Added group \"ad_users_external\" ------------------------------- Group name: ad_users_external Description: AD users external map",
"ipa group-add --desc='AD users' ad_users ---------------------- Added group \"ad_users\" ---------------------- Group name: ad_users Description: AD users GID: 129600004",
"ipa group-add-member ad_users_external --external \"[email protected]\" [member user]: [member group]: Group name: ad_users_external Description: AD users external map External member: S-1-5-21-3655990580-1375374850-1633065477-513 ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member ad_users --groups ad_users_external Group name: ad_users Description: AD users GID: 129600004 Member groups: ad_users_external ------------------------- Number of members added 1 -------------------------",
"ipa sudocmd-add /usr/sbin/reboot ------------------------------------- Added Sudo Command \"/usr/sbin/reboot\" ------------------------------------- Sudo Command: /usr/sbin/reboot",
"ipa sudorule-add ad_users_reboot --------------------------------- Added Sudo Rule \"ad_users_reboot\" --------------------------------- Rule name: ad_users_reboot Enabled: True",
"ipa sudorule-add-allow-command ad_users_reboot --sudocmds '/usr/sbin/reboot' Rule name: ad_users_reboot Enabled: True Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host ad_users_reboot --hosts idmclient.idm.example.com Rule name: ad_users_reboot Enabled: True Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user ad_users_reboot --groups ad_users Rule name: ad_users_reboot Enabled: TRUE User Groups: ad_users Hosts: idmclient.idm.example.com Sudo Allow Commands: /usr/sbin/reboot ------------------------- Number of members added 1 -------------------------",
"ssh [email protected]@ipaclient Password:",
"[[email protected]@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[[email protected]@idmclient ~]USD sudo /usr/sbin/reboot [sudo] password for [email protected]:",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"kinit admin",
"ipa sudocmd-add /opt/third-party-app/bin/report ---------------------------------------------------- Added Sudo Command \"/opt/third-party-app/bin/report\" ---------------------------------------------------- Sudo Command: /opt/third-party-app/bin/report",
"ipa sudorule-add run_third-party-app_report -------------------------------------------- Added Sudo Rule \"run_third-party-app_report\" -------------------------------------------- Rule name: run_third-party-app_report Enabled: TRUE",
"ipa sudorule-add-runasuser run_third-party-app_report --users= thirdpartyapp Rule name: run_third-party-app_report Enabled: TRUE RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-allow-command run_third-party-app_report --sudocmds '/opt/third-party-app/bin/report' Rule name: run_third-party-app_report Enabled: TRUE Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-host run_third-party-app_report --hosts idmclient.idm.example.com Rule name: run_third-party-app_report Enabled: TRUE Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1 -------------------------",
"ipa sudorule-add-user run_third-party-app_report --users idm_user Rule name: run_third-party-app_report Enabled: TRUE Users: idm_user Hosts: idmclient.idm.example.com Sudo Allow Commands: /opt/third-party-app/bin/report RunAs External User: thirdpartyapp ------------------------- Number of members added 1",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for [email protected] on idmclient: !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User [email protected] may run the following commands on idmclient: (thirdpartyapp) /opt/third-party-app/bin/report",
"[idm_user@idmclient ~]USD sudo -u thirdpartyapp /opt/third-party-app/bin/report [sudo] password for [email protected]: Executing report Report successful.",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect enable-feature with-gssapi",
"authselect select sssd with-gssapi",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"ssh -l [email protected] localhost [email protected]'s password:",
"[idmuser@idmclient ~]USD klist Ticket cache: KCM:1366201107 Default principal: [email protected] Valid starting Expires Service principal 01/08/2021 09:11:48 01/08/2021 19:11:48 krbtgt/[email protected] renew until 01/15/2021 09:11:44",
"[idm_user@idmclient ~]USD kdestroy -A [idm_user@idmclient ~]USD kinit [email protected] Password for [email protected] :",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[domain/ <domain_name> ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit, sudo-i:pkinit",
"systemctl restart sssd",
"authselect current Profile ID: sssd",
"authselect select sssd",
"authselect enable-feature with-gssapi",
"authselect with-smartcard-required",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include system-auth account include system-auth password include system-auth session include system-auth",
"#%PAM-1.0 auth sufficient pam_sss_gss.so auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"ssh -l [email protected] localhost PIN for smart_card",
"[idm_user@idmclient ~]USD klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 02/15/2021 16:29:48 02/16/2021 02:29:48 krbtgt/[email protected] renew until 02/22/2021 16:29:44",
"[idm_user@idmclient ~]USD sudo -l Matching Defaults entries for idmuser on idmclient : !visiblepw, always_set_home, match_group_by_gid, always_query_group_plugin, env_reset, env_keep=\"COLORS DISPLAY HOSTNAME HISTSIZE KDEDIR LS_COLORS\", env_keep+=\"MAIL PS1 PS2 QTDIR USERNAME LANG LC_ADDRESS LC_CTYPE\", env_keep+=\"LC_COLLATE LC_IDENTIFICATION LC_MEASUREMENT LC_MESSAGES\", env_keep+=\"LC_MONETARY LC_NAME LC_NUMERIC LC_PAPER LC_TELEPHONE\", env_keep+=\"LC_TIME LC_ALL LANGUAGE LINGUAS _XKB_CHARSET XAUTHORITY KRB5CCNAME\", secure_path=/sbin\\:/bin\\:/usr/sbin\\:/usr/bin User idm_user may run the following commands on idmclient : (root) /usr/sbin/reboot",
"[idm_user@idmclient ~]USD sudo /usr/sbin/reboot",
"[pam] pam_gssapi_services = sudo , sudo-i pam_gssapi_indicators_map = sudo:otp pam_gssapi_check_upn = true",
"[domain/ idm.example.com ] pam_gssapi_services = sudo, sudo-i pam_gssapi_indicators_map = sudo:pkinit , sudo-i:otp pam_gssapi_check_upn = true [domain/ ad.example.com ] pam_gssapi_services = sudo pam_gssapi_check_upn = false",
"Server not found in Kerberos database",
"[idm-user@idm-client ~]USD cat /etc/krb5.conf [domain_realm] .example.com = EXAMPLE.COM example.com = EXAMPLE.COM server.example.com = EXAMPLE.COM",
"No Kerberos credentials available",
"[idm-user@idm-client ~]USD kinit [email protected] Password for [email protected] :",
"User with UPN [ <UPN> ] was not found. UPN [ <UPN> ] does not match target user [ <username> ].",
"[idm-user@idm-client ~]USD cat /etc/sssd/sssd.conf pam_gssapi_check_upn = false",
"cat /etc/pam.d/sudo #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include system-auth account include system-auth password include system-auth session include system-auth",
"cat /etc/pam.d/sudo-i #%PAM-1.0 auth sufficient pam_sss_gss.so debug auth include sudo account include sudo password include sudo session optional pam_keyinit.so force revoke session include sudo",
"[idm-user@idm-client ~]USD sudo ls -l /etc/sssd/sssd.conf pam_sss_gss: Initializing GSSAPI authentication with SSSD pam_sss_gss: Switching euid from 0 to 1366201107 pam_sss_gss: Trying to establish security context pam_sss_gss: SSSD User name: [email protected] pam_sss_gss: User domain: idm.example.com pam_sss_gss: User principal: pam_sss_gss: Target name: [email protected] pam_sss_gss: Using ccache: KCM: pam_sss_gss: Acquiring credentials, principal name will be derived pam_sss_gss: Unable to read credentials from [KCM:] [maj:0xd0000, min:0x96c73ac3] pam_sss_gss: GSSAPI: Unspecified GSS failure. Minor code may provide more information pam_sss_gss: GSSAPI: No credentials cache found pam_sss_gss: Switching euid from 1366200907 to 0 pam_sss_gss: System error [5]: Input/output error",
"[ipaservers] server.idm.example.com",
"--- - name: Playbook to manage sudo command hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure sudo command is present - ipasudocmd: ipaadmin_password: \"{{ ipaadmin_password }}\" name: /usr/sbin/reboot state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-reboot-sudocmd-is-present.yml",
"--- - name: Tests hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure a sudorule is present granting idm_user the permission to run /usr/sbin/reboot on idmclient - ipasudorule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idm_user_reboot description: A test sudo rule. allow_sudocmd: /usr/sbin/reboot host: idmclient.idm.example.com user: idm_user state: present",
"ansible-playbook -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory /ensure-sudorule-for-idmuser-on-idmclient-is-present.yml",
"sudo /usr/sbin/reboot [sudo] password for idm_user:",
"dn: uid=user_login ,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: user_login sn: surname givenName: first_name cn: full_name",
"dn: uid=user_login,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: person objectClass: inetorgperson objectClass: organizationalperson objectClass: posixaccount uid: user_login uidNumber: UID_number gidNumber: GID_number sn: surname givenName: first_name cn: full_name homeDirectory: /home/user_login",
"dn: distinguished_name changetype: modify replace: attribute_to_modify attribute_to_modify: new_value",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: TRUE",
"dn: distinguished_name changetype: modify replace: nsAccountLock nsAccountLock: FALSE",
"dn: distinguished_name changetype: modrdn newrdn: uid=user_login deleteoldrdn: 0 newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"ldapsearch -LLL -x -D \"uid= user_allowed_to_modify_user_entries ,cn=users,cn=accounts,dc=idm,dc=example,dc=com\" -w \"Secret123\" -H ldap://r8server.idm.example.com -b \"cn=users,cn=accounts,dc=idm,dc=example,dc=com\" uid=test_user dn: uid=test_user,cn=users,cn=accounts,dc=idm,dc=example,dc=com memberOf: cn=ipausers,cn=groups,cn=accounts,dc=idm,dc=example,dc=com",
"dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: ipaobject objectClass: ipausergroup objectClass: groupofnames objectClass: nestedgroup objectClass: posixgroup uid: group_name cn: group_name gidNumber: GID_number",
"dn: group_distinguished_name changetype: delete",
"dn: group_distinguished_name changetype: modify add: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"dn: distinguished_name changetype: modify delete: member member: uid=user_login,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"ldapsearch -YGSSAPI -H ldap://server.idm.example.com -b \"cn=groups,cn=accounts,dc=idm,dc=example,dc=com\" \"cn=group_name\" dn: cn=group_name,cn=groups,cn=accounts,dc=idm,dc=example,dc=com ipaNTSecurityIdentifier: S-1-5-21-1650388524-2605035987-2578146103-11017 cn: testgroup objectClass: top objectClass: groupofnames objectClass: nestedgroup objectClass: ipausergroup objectClass: ipaobject objectClass: posixgroup objectClass: ipantgroupattrs ipaUniqueID: 569bf864-9d45-11ea-bea3-525400f6f085 gidNumber: 1997010017",
"ldapmodify -Y GSSAPI -H ldap://server.example.com dn: uid=testuser,cn=users,cn=accounts,dc=example,dc=com changetype: modify replace: telephoneNumber telephonenumber: 88888888",
"ldapmodify -Y GSSAPI -H ldap://server.example.com -f ~/example.ldif",
"kinit admin",
"ldapmodify -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed.",
"dn: uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com",
"changetype: modrdn",
"newrdn: uid=user1",
"deleteoldrdn: 0",
"newsuperior: cn=deleted users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"[Enter] modifying rdn of entry \"uid=user1,cn=users,cn=accounts,dc=idm,dc=example,dc=com\"",
"ipa user-find --preserved=true -------------- 1 user matched -------------- User login: user1 First name: First 1 Last name: Last 1 Home directory: /home/user1 Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1997010003 GID: 1997010003 Account disabled: True Preserved user: True ---------------------------- Number of entries returned 1 ----------------------------",
"ldapsearch [-x | -Y mechanism] [options] [search_filter] [list_of_attributes]",
"ldapsearch -x -H ldap://ldap.example.com -s sub \"(uid=user01)\"",
"\"(cn=example)\"",
"kinit admin",
"ipa user-add provisionator --first=provisioning --last=account --password",
"ipa role-add --desc \"Responsible for provisioning stage users\" \"System Provisioning\"",
"ipa role-add-privilege \"System Provisioning\" --privileges=\"Stage User Provisioning\"",
"ipa role-add-member --users=provisionator \"System Provisioning\"",
"ipa user-find provisionator --all --raw -------------- 1 user matched -------------- dn: uid=provisionator,cn=users,cn=accounts,dc=idm,dc=example,dc=com uid: provisionator [...]",
"ipa user-add activator --first=activation --last=account --password",
"ipa role-add-member --users=activator \"User Administrator\"",
"ipa group-add application-accounts",
"ipa pwpolicy-add application-accounts --maxlife=10000 --minlife=0 --history=0 --minclasses=4 --minlength=8 --priority=1 --maxfail=0 --failinterval=1 --lockouttime=0",
"ipa pwpolicy-show application-accounts Group: application-accounts Max lifetime (days): 10000 Min lifetime (hours): 0 History size: 0 [...]",
"ipa group-add-member application-accounts --users={provisionator,activator}",
"kpasswd provisionator kpasswd activator",
"ipa-getkeytab -s server.idm.example.com -p \"activator\" -k /etc/krb5.ipa-activation.keytab",
"#!/bin/bash kinit -k -i activator ipa stageuser-find --all --raw | grep \" uid:\" | cut -d \":\" -f 2 | while read uid; do ipa stageuser-activate USD{uid}; done",
"chmod 755 /usr/local/sbin/ipa-activate-all chown root:root /usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Service] Environment=KRB5_CLIENT_KTNAME=/etc/krb5.ipa-activation.keytab Environment=KRB5CCNAME=FILE:/tmp/krb5cc_ipa-activate-all ExecStart=/usr/local/sbin/ipa-activate-all",
"[Unit] Description=Scan IdM every minute for any stage users that must be activated [Timer] OnBootSec=15min OnUnitActiveSec=1min [Install] WantedBy=multi-user.target",
"systemctl daemon-reload",
"systemctl enable ipa-activate-all.timer",
"systemctl start ipa-activate-all.timer",
"systemctl status ipa-activate-all.timer ● ipa-activate-all.timer - Scan IdM every minute for any stage users that must be activated Loaded: loaded (/etc/systemd/system/ipa-activate-all.timer; enabled; vendor preset: disabled) Active: active (waiting) since Wed 2020-06-10 16:34:55 CEST; 15s ago Trigger: Wed 2020-06-10 16:35:55 CEST; 44s left Jun 10 16:34:55 server.idm.example.com systemd[1]: Started Scan IdM every minute for any stage users that must be activated.",
"dn: uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com changetype: add objectClass: top objectClass: inetorgperson uid: stageidmuser sn: surname givenName: first_name cn: full_name",
"scp add-stageidmuser.ldif [email protected]:/provisionator/ Password: add-stageidmuser.ldif 100% 364 217.6KB/s 00:00",
"ssh [email protected] Password: [provisionator@server ~]USD",
"[provisionator@server ~]USD kinit provisionator",
"~]USD ldapadd -h server.idm.example.com -p 389 -f add-stageidmuser.ldif SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 256 SASL data security layer installed. adding the entry \"uid=stageidmuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ssh [email protected] Password: [provisionator@server ~]USD",
"kinit provisionator",
"ldapmodify -h server.idm.example.com -p 389 -Y GSSAPI SASL/GSSAPI authentication started SASL username: [email protected] SASL SSF: 56 SASL data security layer installed.",
"dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com",
"changetype: add",
"objectClass: top objectClass: inetorgperson",
"uid: stageuser",
"cn: Babs Jensen",
"sn: Jensen",
"[Enter] adding new entry \"uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com\"",
"ipa stageuser-show stageuser --all --raw dn: uid=stageuser,cn=staged users,cn=accounts,cn=provisioning,dc=idm,dc=example,dc=com uid: stageuser sn: Jensen cn: Babs Jensen has_password: FALSE has_keytab: FALSE nsaccountlock: TRUE objectClass: top objectClass: inetorgperson objectClass: organizationalPerson objectClass: person",
"ipa user-add-principal <user> <useralias> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], [email protected]",
"kinit -C <useralias> Password for <user>@IDM.EXAMPLE.COM:",
"ipa user-remove-principal <user> <useralias> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"ipa user-show <user> User login: user Principal name: [email protected] ipa user-remove-principal user user ipa: ERROR: invalid 'krbprincipalname': at least one value equal to the canonical principal name must be present",
"ipa: ERROR: The realm for the principal does not match the realm for this IPA server",
"ipa user-add-principal <user> <user\\\\@example.com> -------------------------------- Added new aliases to user \"user\" -------------------------------- User login: user Principal alias: [email protected], user\\@[email protected]",
"kinit -E <[email protected]> Password for user\\@[email protected]:",
"ipa: ERROR: The realm for the principal does not match the realm for this IPA server",
"ipa user-remove-principal <user> <user\\\\@example.com> -------------------------------- Removed aliases from user \"user\" -------------------------------- User login: user Principal alias: [email protected]",
"ipa config-mod --enable-sid --add-sids",
"ipa user-show admin --all | grep ipantsecurityidentifier ipantsecurityidentifier: S-1-5-21-2633809701-976279387-419745629-500",
"ipa service-add testservice/client.example.com ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa-getkeytab -k /etc/testservice.keytab -p testservice/client.example.com Keytab successfully retrieved and stored in: /etc/testservice.keytab",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Keytab: True Managed by: client.example.com",
"klist -ekt /etc/testservice.keytab Keytab name: FILE:/etc/testservice.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 04/01/2020 17:52:55 testservice/[email protected] (aes256-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (aes128-cts-hmac-sha1-96) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia128-cts-cmac) 2 04/01/2020 17:52:55 testservice/[email protected] (camellia256-cts-cmac)",
"host /[email protected] HTTP /[email protected] ldap /[email protected] DNS /[email protected] cifs /[email protected]",
"ipa service-mod testservice/[email protected] --auth-ind otp --auth-ind pkinit ------------------------------------------------------------- Modified service \"testservice/[email protected]\" ------------------------------------------------------------- Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Managed by: client.example.com",
"ipa service-mod testservice/[email protected] --auth-ind '' ------------------------------------------------------ Modified service \"testservice/[email protected]\" ------------------------------------------------------ Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Managed by: client.example.com",
"ipa service-show testservice/client.example.com Principal name: testservice/[email protected] Principal alias: testservice/[email protected] Authentication Indicators: otp, pkinit Keytab: True Managed by: client.example.com",
"kvno -S testservice client.example.com testservice/[email protected]: kvno = 1",
"kdestroy",
"klist_ Ticket cache: KCM:1000 Default principal: [email protected] Valid starting Expires Service principal 04/01/2020 12:52:42 04/02/2020 12:52:39 krbtgt/[email protected] 04/01/2020 12:54:07 04/02/2020 12:52:39 testservice/[email protected]",
"ipa krbtpolicy-mod --maxlife= USD((8*60*60)) --maxrenew= USD((24*60*60)) Max life: 28800 Max renew: 86400",
"ipa krbtpolicy-reset Max life: 86400 Max renew: 604800",
"ipa krbtpolicy-show Max life: 28800 Max renew: 86640",
"ipa krbtpolicy-mod --otp-maxlife= 604800 --otp-maxrenew= 604800 --pkinit-maxlife= 172800 --pkinit-maxrenew= 172800",
"ipa krbtpolicy-show Max life: 86400 OTP max life: 604800 PKINIT max life: 172800 Max renew: 604800 OTP max renew: 604800 PKINIT max renew: 172800",
"ipa krbtpolicy-mod admin --maxlife= 172800 --maxrenew= 1209600 Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-reset admin",
"ipa krbtpolicy-show admin Max life: 172800 Max renew: 1209600",
"ipa krbtpolicy-mod admin --otp-maxrenew=USD((2*24*60*60)) OTP max renew: 172800",
"ipa krbtpolicy-reset username",
"ipa krbtpolicy-show admin Max life: 28800 Max renew: 86640",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa config-show Maximum username length: 32 Home directory base: /home Default shell: /bin/sh Default users group: ipausers [...output truncated...] IPA masters capable of PKINIT: server1.example.com [...output truncated...]",
"kinit admin Password for [email protected]: ipa pkinit-status --server=server.idm.example.com 1 server matched ---------------- Server name: server.idm.example.com PKINIT status:enabled ---------------------------- Number of entries returned 1 ----------------------------",
"ipa pkinit-status --server server.idm.example.com ----------------- 0 servers matched ----------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa-cacert-manage install -t CT,C,C ca.pem",
"ipa-certupdate",
"ipa-cacert-manage list CN=CA,O=Example Organization The ipa-cacert-manage command was successful",
"ipa-server-certinstall --kdc kdc.pem kdc.key systemctl restart krb5kdc.service",
"ipa pkinit-status Server name: server1.example.com PKINIT status: enabled [...output truncated...] Server name: server2.example.com PKINIT status: disabled [...output truncated...]",
"ipa-pkinit-manage enable Configuring Kerberos KDC (krb5kdc) [1/1]: installing X509 Certificate for PKINIT Done configuring Kerberos KDC (krb5kdc). The ipa-pkinit-manage command was successful",
"klist -ekt /etc/krb5.keytab Keytab name: FILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 02/24/2022 20:28:09 host/[email protected] (aes256-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (aes128-cts-hmac-sha1-96) 2 02/24/2022 20:28:09 host/[email protected] (camellia128-cts-cmac) 2 02/24/2022 20:28:09 host/[email protected] (camellia256-cts-cmac)",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 2 11/26/2021 13:51:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia128-cts-cmac) 2 11/26/2021 13:51:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 3",
"kinit admin Password for [email protected]:",
"ipa-getkeytab -s server1.idm.example.com -p DNS/server1.idm.example.com -k /etc/named.keytab",
"klist -ekt /etc/named.keytab Keytab name: FILE:/etc/named.keytab KVNO Timestamp Principal ---- ------------------- ------------------------------------------------------ 4 08/17/2022 14:42:11 DNS/[email protected] (aes256-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (aes128-cts-hmac-sha1-96) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia128-cts-cmac) 4 08/17/2022 14:42:11 DNS/[email protected] (camellia256-cts-cmac)",
"kvno DNS/[email protected] DNS/[email protected]: kvno = 4",
"kadmin.local getprinc K/M | grep -E '^Key:' Key: vno 1, aes256-cts-hmac-sha1-96",
"[realms] EXAMPLE.COM = { kdc = https://kdc.example.com/KdcProxy admin_server = https://kdc.example.com/KdcProxy kpasswd_server = https://kdc.example.com/KdcProxy default_domain = example.com }",
"~]# systemctl restart sssd",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-disable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf ls: cannot access '/etc/httpd/conf.d/ipa-kdc-proxy.conf': No such file or directory",
"ipa-ldap-updater /usr/share/ipa/kdcproxy-enable.uldif Update complete The ipa-ldap-updater command was successful",
"systemctl restart httpd.service",
"ls -l /etc/httpd/conf.d/ipa-kdc-proxy.conf lrwxrwxrwx. 1 root root 36 Jun 21 2020 /etc/httpd/conf.d/ipa-kdc-proxy.conf -> /etc/ipa/kdcproxy/ipa-kdc-proxy.conf",
"[global] use_dns = false",
"[AD. EXAMPLE.COM ] kerberos = kerberos+tcp://1.2.3.4:88 kerberos+tcp://5.6.7.8:88 kpasswd = kpasswd+tcp://1.2.3.4:464 kpasswd+tcp://5.6.7.8:464",
"ipactl restart",
"[global] configs = mit use_dns = true",
"[realms] AD. EXAMPLE.COM = { kdc = ad-server.ad.example.com kpasswd_server = ad-server.ad.example.com }",
"ipactl restart",
"ipa selfservice-add \"Users can manage their own name details\" --permissions=write --attrs=givenname --attrs=displayname --attrs=title --attrs=initials ----------------------------------------------------------- Added selfservice \"Users can manage their own name details\" ----------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-mod \"Users can manage their own name details\" --attrs=givenname --attrs=displayname --attrs=title --attrs=initials --attrs=surname -------------------------------------------------------------- Modified selfservice \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-show \"Users can manage their own name details\" -------------------------------------------------------------- Self-service name: Users can manage their own name details Permissions: write Attributes: givenname, displayname, title, initials",
"ipa selfservice-del \"Users can manage their own name details\" ----------------------------------------------------------- Deleted selfservice \"Users can manage their own name details\" -----------------------------------------------------------",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-present.yml selfservice-present-copy.yml",
"--- - name: Self-service present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" permission: read, write attribute: - givenname - displayname - title - initials",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-absent.yml selfservice-absent-copy.yml",
"--- - name: Self-service absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure self-service rule \"Users can manage their own name details\" is absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-present.yml selfservice-member-present-copy.yml",
"--- - name: Self-service member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attribute surname is present ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - surname action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/selfservice/selfservice-member-absent.yml selfservice-member-absent-copy.yml",
"--- - name: Self-service member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure selfservice \"Users can manage their own name details\" member attributes givenname and surname are absent ipaselfservice: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"Users can manage their own name details\" attribute: - givenname - surname action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory selfservice-member-absent-copy.yml",
"ipa group-add group_a --------------------- Added group \"group_a\" --------------------- Group name: group_a GID: 1133400009",
"ipa group-del group_a -------------------------- Deleted group \"group_a\" --------------------------",
"ipa group-add-member group_a --groups=group_b Group name: group_a GID: 1133400009 Member users: user_a Member groups: group_b Indirect Member users: user_b ------------------------- Number of members added 1 -------------------------",
"ipa user-add jsmith --first=John --last=Smith --noprivate --gid 10000",
"kinit admin",
"ipa-managed-entries --list",
"ipa-managed-entries -e \"UPG Definition\" disable Disabling Plugin",
"sudo systemctl restart dirsrv.target",
"ipa-managed-entries -e \"UPG Definition\" disable Plugin already disabled",
"ipa group-add-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-add-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test ------------------------- Number of members added 1 -------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test",
"ipa group-show group_a Member users: user_a Member groups: group_b Indirect Member users: user_b",
"ipa group-remove-member group_name --users= user1 --users= user2 --groups= group1",
"ipa group-remove-member-manager group_a --users=test Group name: group_a GID: 1133400009 Membership managed by groups: group_admins --------------------------- Number of members removed 1 ---------------------------",
"ipa group-remove-member-manager group_a --groups=group_admins Group name: group_a GID: 1133400009 --------------------------- Number of members removed 1 ---------------------------",
"ipa group-show group_a Group name: group_a GID: 1133400009",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"ipa user-add idmuser First name: idm Last name: user --------------------- Added user \"idmuser\" --------------------- User login: idmuser First name: idm Last name: user Full name: idm user Display name: idm user Initials: tu Home directory: /home/idmuser GECOS: idm user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 19000024 GID: 19000024 Password: False Member of groups: ipausers Kerberos keys available: False",
"getent group audio --------------------- audio:x:63",
"ipa group-add audio --gid 63 ------------------- Added group \"audio\" ------------------- Group name: audio GID: 63",
"ipa group-add-member audio --users= idmuser Group name: audio GID: 63 Member users: idmuser ------------------------- Number of members added 1 -------------------------",
"id idmuser uid=1867800003(idmuser) gid=1867800003(idmuser) groups=1867800003(idmuser),63(audio),10(wheel)",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml",
"ssh [email protected]@client.idm.example.com",
"id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Create group ops with gid 1234 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops gidnumber: 1234 - name: Create group sysops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: sysops user: - idm_user - name: Create group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: appops - name: Add group members sysops and appops to group ops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: ops group: - sysops - appops",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-group-members.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show ops Group name: ops GID: 1234 Member groups: sysops, appops Indirect Member users: idm_user",
"--- - name: Playbook to add nonposix and external groups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Add nonposix group sysops and external group appops ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" groups: - name: sysops nonposix: true - name: appops external: true",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/hosts <path_to_playbooks_directory>/add-nonposix-and-external-groups.yml",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure user test is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test - name: Ensure group_admins is present for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_group: group_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-user-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009 Membership managed by groups: group_admins Membership managed by users: test",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user and group members are absent for group_a ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_a membermanager_user: test membermanager_group: group_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa group-show group_a Group name: group_a GID: 1133400009",
"ipa automember-add Automember Rule: user_group Grouping Type: group -------------------------------- Added automember rule \"user_group\" -------------------------------- Automember Rule: user_group",
"ipa automember-add-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ---------------------------------- Added condition(s) to \"user_group\" ---------------------------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-add-condition Automember Rule: ad_users Attribute Key: objectclass Grouping Type: group [Inclusive Regex]: ntUser [Exclusive Regex]: ------------------------------------- Added condition(s) to \"ad_users\" ------------------------------------- Automember Rule: ad_users Inclusive Regex: objectclass=ntUser ---------------------------- Number of conditions added 1 ----------------------------",
"ipa automember-find Grouping Type: group --------------- 1 rules matched --------------- Automember Rule: user_group Inclusive Regex: uid=.* ---------------------------- Number of entries returned 1 ----------------------------",
"ipa automember-remove-condition Automember Rule: user_group Attribute Key: uid Grouping Type: group [Inclusive Regex]: .* [Exclusive Regex]: ----------------------------------- Removed condition(s) from \"user_group\" ----------------------------------- Automember Rule: user_group ------------------------------ Number of conditions removed 1 ------------------------------",
"ipa automember-rebuild --type=group -------------------------------------------------------- Automember rebuild task finished. Processed (9) entries. --------------------------------------------------------",
"ipa automember-rebuild --users=target_user1 --users=target_user2 -------------------------------------------------------- Automember rebuild task finished. Processed (2) entries. --------------------------------------------------------",
"ipa automember-default-group-set Default (fallback) Group: default_user_group Grouping Type: group --------------------------------------------------- Set default (fallback) group for automember \"default_user_group\" --------------------------------------------------- Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"ipa automember-default-group-show Grouping Type: group Default (fallback) Group: cn=default_user_group,cn=groups,cn=accounts,dc=example,dc=com",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[ipaserver] server.idm.example.com [ipareplicas] replica1.idm.example.com replica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password=SomeADMINpassword [ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-present.yml automember-group-present-copy.yml",
"--- - name: Automember group present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-usergroup-rule-present.yml",
"--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: present action: member inclusive: - key: UID expression: . *",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-present.yml",
"kinit admin",
"ipa user-add user101 --first user --last 101 ----------------------- Added user \"user101\" ----------------------- User login: user101 First name: user Last name: 101 Member of groups: ipausers, testing_group",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-absent.yml automember-usergroup-rule-absent.yml",
"--- - name: Automember user group rule member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent action: member inclusive: - key: initials expression: dp",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-usergroup-rule-absent.yml",
"kinit admin",
"ipa automember-show --type=group testing_group Automember Rule: testing_group",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-group-absent.yml automember-group-absent-copy.yml",
"--- - name: Automember group absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure group automember rule admins is absent ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: testing_group automember_type: group state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-group-absent.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/automember/automember-hostgroup-rule-present.yml automember-hostgroup-rule-present-copy.yml",
"--- - name: Automember user group rule member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure an automember condition for a user group is present ipaautomember: ipaadmin_password: \"{{ ipaadmin_password }}\" name: primary_dns_domain_hosts automember_type: hostgroup state: present action: member inclusive: - key: fqdn expression: .*.idm.example.com exclusive: - key: fqdn expression: .*.example.org",
"ansible-playbook --vault-password-file=password_file -v -i inventory automember-hostgroup-rule-present-copy.yml",
"ipa delegation-add \"basic manager attributes\" --permissions=read --permissions=write --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --group=managers --membergroup=employees ------------------------------------------- Added delegation \"basic manager attributes\" ------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber Member user group: employees User group: managers",
"ipa delegation-find -------------------- 1 delegation matched -------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeenumber, employeetype Member user group: employees User group: managers ---------------------------- Number of entries returned 1 ----------------------------",
"ipa delegation-mod \"basic manager attributes\" --attrs=businesscategory --attrs=departmentnumber --attrs=employeetype --attrs=employeenumber --attrs=displayname ---------------------------------------------- Modified delegation \"basic manager attributes\" ---------------------------------------------- Delegation name: basic manager attributes Permissions: read, write Attributes: businesscategory, departmentnumber, employeetype, employeenumber, displayname Member user group: employees User group: managers",
"ipa delegation-del Delegation name: basic manager attributes --------------------------------------------- Deleted delegation \"basic manager attributes\" ---------------------------------------------",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ <username> /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-present-copy.yml",
"--- - name: Playbook to manage a delegation rule hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" permission: read, write attribute: - businesscategory - departmentnumber - employeenumber - employeetype group: managers membergroup: employees",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-present-copy.yml",
"cd ~/ MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-present.yml delegation-absent-copy.yml",
"--- - name: Delegation absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" is absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-present.yml delegation-member-present-copy.yml",
"--- - name: Delegation member present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attribute departmentnumber is present ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - departmentnumber action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/delegation/delegation-member-absent.yml delegation-member-absent-copy.yml",
"--- - name: Delegation member absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure delegation \"basic manager attributes\" member attributes employeenumber and employeetype are absent ipadelegation: ipaadmin_password: \"{{ ipaadmin_password }}\" name: \"basic manager attributes\" attribute: - employeenumber - employeetype action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ MyPlaybooks /inventory delegation-member-absent-copy.yml",
"ipa permission-add \"dns admin\"",
"ipa permission-add \"dns admin\" --bindtype=all",
"ipa permission-add \"dns admin\" --right=read --right=write",
"ipa permission-add \"dns admin\" --right={read,write}",
"ipa permission-add \"dns admin\" --attrs=description --attrs=automountKey",
"ipa permission-add \"dns admin\" --attrs={description,automountKey}",
"ipa permission-add \"manage service\" --right=all --type=service --attrs=krbprincipalkey --attrs=krbprincipalname --attrs=managedby",
"ipa permission-add \"manage automount locations\" --subtree=\"ldap://ldap.example.com:389/cn=automount,dc=example,dc=com\" --right=write --attrs=automountmapname --attrs=automountkey --attrs=automountInformation",
"ipa permission-add \"manage Windows groups\" --filter=\"(!(objectclass=posixgroup))\" --right=write --attrs=description",
"ipa permission-add ManageShell --right=\"write\" --type=user --attr=loginshell --memberof=engineers",
"ipa permission-add ManageMembers --right=\"write\" --subtree=cn=groups,cn=accounts,dc=example,dc=test --attr=member --targetgroup=engineers",
"ipa permission-show <permission> --raw",
"ipa privilege-add \"managing filesystems\" --desc=\"for filesystems\"",
"ipa privilege-add-permission \"managing filesystems\" --permissions=\"managing automount\" --permissions=\"managing ftp services\"",
"ipa role-add --desc=\"User Administrator\" useradmin ------------------------ Added role \"useradmin\" ------------------------ Role name: useradmin Description: User Administrator",
"ipa role-add-privilege --privileges=\"user administrators\" useradmin Role name: useradmin Description: User Administrator Privileges: user administrators ---------------------------- Number of privileges added 1 ----------------------------",
"ipa role-add-member --groups=useradmins useradmin Role name: useradmin Description: User Administrator Member groups: useradmins Privileges: user administrators ------------------------- Number of members added 1 -------------------------",
"mkdir ~/MyPlaybooks/",
"cd ~/MyPlaybooks",
"[defaults] inventory = /home/ your_username /MyPlaybooks/inventory [privilege_escalation] become=True",
"[eu] server.idm.example.com [us] replica.idm.example.com [ipaserver:children] eu us",
"ssh-keygen",
"ssh-copy-id [email protected] ssh-copy-id [email protected]",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-user-present.yml role-member-user-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: user_and_host_administrator user: idm_user01 group: idm_group01 privilege: - Group Administrators - User Administrators - Stage User Administrators - Group Administrators",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-user-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-is-absent.yml role-is-absent-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: user_and_host_administrator state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-is-absent-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-group-present.yml role-member-group-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: helpdesk group: junior_sysadmins action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-group-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-user-absent.yml role-member-user-absent-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: helpdesk user - user_01 - user_02 action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-user-absent-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-service-present-absent.yml role-member-service-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator service: - HTTP/client01.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-service-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-host-present.yml role-member-host-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator host: - client01.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-host-present-copy.yml",
"cd ~/ <MyPlaybooks> /",
"cp /usr/share/doc/ansible-freeipa/playbooks/role/role-member-hostgroup-present.yml role-member-hostgroup-present-copy.yml",
"--- - name: Playbook to manage IPA role with members. hosts: ipaserver become: true gather_facts: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - iparole: ipaadmin_password: \"{{ ipaadmin_password }}\" name: web_administrator hostgroup: - web_servers action: member",
"ansible-playbook --vault-password-file=password_file -v -i ~/ <MyPlaybooks> /inventory role-member-hostgroup-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-present.yml privilege-present-copy.yml",
"--- - name: Privilege present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure privilege full_host_administration is present ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration description: This privilege combines all IdM permissions related to host administration",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-member-present.yml privilege-member-present-copy.yml",
"--- - name: Privilege member present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that permissions are present for the \"full_host_administration\" privilege ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration permission: - \"System: Add krbPrincipalName to a Host\" - \"System: Enroll a Host\" - \"System: Manage Host Certificates\" - \"System: Manage Host Enrollment Password\" - \"System: Manage Host Keytab\" - \"System: Manage Host Principals\" - \"Retrieve Certificates from the CA\" - \"Revoke Certificate\" - \"System: Add Hosts\" - \"System: Add krbPrincipalName to a Host\" - \"System: Enroll a Host\" - \"System: Manage Host Certificates\" - \"System: Manage Host Enrollment Password\" - \"System: Manage Host Keytab\" - \"System: Manage Host Keytab Permissions\" - \"System: Manage Host Principals\" - \"System: Manage Host SSH Public Keys\" - \"System: Manage Service Keytab\" - \"System: Manage Service Keytab Permissions\" - \"System: Modify Hosts\" - \"System: Remove Hosts\" - \"System: Add Hostgroups\" - \"System: Modify Hostgroup Membership\" - \"System: Modify Hostgroups\" - \"System: Remove Hostgroups\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-member-absent.yml privilege-member-absent-copy.yml",
"--- - name: Privilege absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"Request Certificate ignoring CA ACLs\" permission is absent from the \"Certificate Administrators\" privilege ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: Certificate Administrators permission: - \"Request Certificate ignoring CA ACLs\" action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-member-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-present.yml rename-privilege.yml",
"--- - name: Rename a privilege hosts: ipaserver",
"[...] tasks: - name: Ensure the full_host_administration privilege is renamed to limited_host_administration ipaprivilege: [...]",
"--- - name: Rename a privilege hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure the full_host_administration privilege is renamed to limited_host_administration ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: full_host_administration rename: limited_host_administration state: renamed",
"ansible-playbook --vault-password-file=password_file -v -i inventory rename-privilege.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/privilege/privilege-absent.yml privilege-absent-copy.yml",
"[...] tasks: - name: Ensure privilege \"CA administrator\" is absent ipaprivilege: [...]",
"--- - name: Privilege absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure privilege \"CA administrator\" is absent ipaprivilege: ipaadmin_password: \"{{ ipaadmin_password }}\" name: CA administrator state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory privilege-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-copy.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-present.yml permission-present-with-attribute.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is present with an attribute ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission object_type: host right: all attrs: description",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-present-with-attribute.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-absent.yml permission-absent-copy.yml",
"--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"MyPermission\" permission is absent ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-present.yml permission-member-present-copy.yml",
"--- - name: Permission member present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that the \"gecos\" and \"description\" attributes are present in \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: - description - gecos action: member",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-present-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-member-absent.yml permission-member-absent-copy.yml",
"--- - name: Permission absent example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure that an attribute is not a member of \"MyPermission\" ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission attrs: description action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-member-absent-copy.yml",
"cd ~/ MyPlaybooks /",
"cp /usr/share/doc/ansible-freeipa/playbooks/permission/permission-renamed.yml permission-renamed-copy.yml",
"--- - name: Permission present example hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Rename the \"MyPermission\" permission ipapermission: ipaadmin_password: \"{{ ipaadmin_password }}\" name: MyPermission rename: MyNewPermission state: renamed",
"ansible-playbook --vault-password-file=password_file -v -i inventory permission-renamed-copy.yml",
"ipa help idviews ID Views Manage ID Views IPA allows to override certain properties of users and groups[...] [...] Topic commands: idoverridegroup-add Add a new Group ID override idoverridegroup-del Delete a Group ID override [...]",
"ipa idview-add --help Usage: ipa [global-options] idview-add NAME [options] Add a new ID View. Options: -h, --help show this help message and exit --desc=STR Description [...]",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 idm_user --login=user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234",
"ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/idm_user/",
"id idm_user uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user) user_1234 uid=779800003(user_1234) gid=779800003(idm_user) groups=779800003(idm_user)",
"mkdir /home/user_1234/",
"chown idm_user:idm_user /home/user_1234/",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 User object override: idm_user Hosts the view applies to: host1.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idoverrideuser-mod example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Modified a User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user User login: user_1234 Home directory: /home/user_1234/",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/user_1234/",
"mkdir /home/user_1234/",
"chown idm_user:idm_user /home/user_1234/",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 idm_user --homedir=/home/user_1234 ----------------------------- Added User ID override \"idm_user\" ----------------------------- Anchor to override: idm_user Home directory: /home/user_1234/",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [idm_user@host1 /]USD",
"[idm_user@host1 /]USD pwd /home/user_1234/",
"ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts",
"ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------",
"ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure idview_for_host1 is present idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 - name: Ensure idview_for_host1 is applied to host1.idm.example.com idview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member - name: Ensure idm_user is present in idview_for_host1 with homedir /home/user_1234 and name user_1234 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user name: user_1234 homedir: /home/user_1234",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/add-idoverrideuser-with-name-and-homedir.yml",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh [email protected] Password: Last login: Sun Jun 21 22:34:25 2020 from 192.168.122.229 [user_1234@host1 ~]USD",
"[user_1234@host1 ~]USD pwd /home/user_1234/",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false gather_facts: false vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure test user idm_user is present in idview idview_for_host1 with sshpubkey ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host1 anchor: idm_user sshpubkey: - ssh-rsa AAAAB3NzaC1yc2EAAADAQABAAABgQCqmVDpEX5gnSjKuv97Ay - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host1 host: host1.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i <path_to_inventory_directory>/inventory <path_to_playbooks_directory>/ensure-idoverrideuser-can-login-with-sshkey.yml",
"ssh root@host1 Password:",
"root@host1 ~]# sss_cache -E",
"root@host1 ~]# systemctl restart sssd",
"ssh -i ~/.ssh/id_rsa.pub [email protected] Last login: Sun Jun 21 22:34:25 2023 from 192.168.122.229 [idm_user@host1 ~]USD",
"Allow initgroups to default to the setting for group. initgroups: sss [SUCCESS=merge] files",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to manage idoverrideuser hosts: ipaserver become: false tasks: - name: Add [email protected] user to the Default Trust View ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the audio group with the aduser member and GID of 63 ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio idoverrideuser: - [email protected] gidnumber: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-aduser-to-audio-group.yml",
"ssh [email protected]@client.idm.example.com",
"id [email protected] uid=702801456([email protected]) gid=63(audio) groups=63(audio)",
"--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host1.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host1.idm.example.com - name: Ensure idmuser is present in idview_for_host01 with the UID of 20001 ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user UID: 20001",
"ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-and-local-user-have-access-to-same-files.yml",
"--- - name: Ensure both local user and IdM user have access to same files hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idview_for_host1 is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Ensure an IdM user is present in ID view with two certificates ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: idm_user certificate: - \"{{ lookup('file', 'cert1.b64', rstrip=False) }}\" - \"{{ lookup('file', 'cert2.b64', rstrip=False) }}\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory ensure-idmuser-present-in-idview-with-certificates.yml",
"getent group audio --------------------- audio:x:63",
"--- - name: Playbook to give IdM group access to sound card on IdM client hosts: ipaserver become: false tasks: - name: Ensure the audio group exists in IdM ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: audio - name: Ensure idview_for_host01 exists and is applied to host01.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: idview_for_host01 host: host01.idm.example.com - name: Add an override for the IdM audio group with GID 63 to idview_for_host01 ipaidoverridegroup: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: idview_for_host01 anchor: audio GID: 63",
"ansible-playbook --vault-password-file=password_file -v -i inventory give-idm-group-access-to-sound-card-on-idm-client.yml",
"kinit admin Password:",
"ipa user-add testuser --first test --last user --password User login [tuser]: Password: Enter Password again to verify: ------------------ Added user \"tuser\" ------------------",
"ipa group-add-member --tuser audio",
"ssh [email protected]",
"id tuser uid=702801456(tuser) gid=63(audio) groups=63(audio)",
"ipa idview-show example-view ID View Name: example-view User object overrides: example-user1 Group object overrides: example-group",
"ipa idoverrideuser-add 'Default Trust View' [email protected] --gidnumber=732000006",
"sssctl cache-expire -u [email protected]",
"id [email protected] uid=702801456([email protected]) gid=732000006(ad_admins) groups=732000006(ad_admins),702800513(domain [email protected])",
"ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1",
"ipa idoverrideuser-add example_for_host1 [email protected] --gidnumber=732001337 ----------------------------- Added User ID override \"[email protected]\" ----------------------------- Anchor to override: [email protected] GID: 732001337",
"ipa idview-apply example_for_host1 --hosts=host1.idm.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"sssctl cache-expire -u [email protected]",
"ssh [email protected]@host1.idm.example.com",
"[[email protected]@host1 ~]USD id [email protected] uid=702801456([email protected]) gid=732001337(admins2) groups=732001337(admins2),702800513(domain [email protected])",
"ipa hostgroup-add --desc=\"Baltimore hosts\" baltimore --------------------------- Added hostgroup \"baltimore\" --------------------------- Host-group: baltimore Description: Baltimore hosts",
"ipa hostgroup-add-member --hosts={host102,host103} baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com ------------------------- Number of members added 2 -------------------------",
"ipa idview-apply --hostgroups=baltimore ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: host102.idm.example.com, host103.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 2 ---------------------------------------------",
"ipa hostgroup-add-member --hosts=somehost.idm.example.com baltimore Host-group: baltimore Description: Baltimore hosts Member hosts: host102.idm.example.com, host103.idm.example.com,somehost.idm.example.com ------------------------- Number of members added 1 -------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idview-apply --host=somehost.idm.example.com ID View Name: example_for_host1 ----------------------------------------- Applied ID View \"example_for_host1\" ----------------------------------------- hosts: somehost.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"ipa idview-show example_for_host1 --all dn: cn=example_for_host1,cn=views,cn=accounts,dc=idm,dc=example,dc=com ID View Name: example_for_host1 [...] Hosts the view applies to: host102.idm.example.com, host103.idm.example.com, somehost.idm.example.com objectclass: ipaIDView, top, nsContainer",
"ipa idrange-find --------------- 1 range matched --------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 1 ----------------------------",
"ipa idrange-add IDM.EXAMPLE.COM_new_range --base-id 5000 --range-size 1000 --rid-base 300000 --secondary-rid-base 1300000 --type ipa-local ipa: WARNING: Service [email protected] requires restart on IPA server <all IPA servers> to apply configuration changes. ------------------------------------------ Added ID range \"IDM.EXAMPLE.COM_new_range\" ------------------------------------------ Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range",
"systemctl restart [email protected]",
"sss_cache -E",
"systemctl restart sssd",
"ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_range First Posix ID of the range: 5000 Number of IDs in the range: 1000 First RID of the corresponding RID range: 300000 First RID of the secondary RID range: 1300000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------",
"ipa idrange-show IDM.EXAMPLE.COM_id_range Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 196600000 Number of IDs in the range: 200000 First RID of the corresponding RID range: 1000 First RID of the secondary RID range: 1000000 Range type: local domain range",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to manage idrange hosts: ipaserver become: no vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure local idrange is present ipaidrange: ipaadmin_password: \"{{ ipaadmin_password }}\" name: new_id_range base_id: 12000000 range_size: 200000 rid_base: 1000000 secondary_rid_base: 200000000",
"ansible-playbook --vault-password-file=password_file -v -i inventory idrange-present.yml",
"systemctl restart [email protected]",
"sss_cache -E",
"systemctl restart sssd",
"ipa idrange-find ---------------- 2 ranges matched ---------------- Range name: IDM.EXAMPLE.COM_id_range First Posix ID of the range: 882200000 Number of IDs in the range: 200000 Range type: local domain range Range name: IDM.EXAMPLE.COM_new_id_range First Posix ID of the range: 12000000 Number of IDs in the range: 200000 Range type: local domain range ---------------------------- Number of entries returned 2 ----------------------------",
"ipa idrange-find",
"ipa idrange-del AD.EXAMPLE.COM_id_range",
"systemctl restart sssd",
"ipa-replica-manage dnarange-show serverA.example.com: 1001-1500 serverB.example.com: 1501-2000 serverC.example.com: No range set ipa-replica-manage dnarange-show serverA.example.com serverA.example.com: 1001-1500",
"ipa-replica-manage dnanextrange-show serverA.example.com: 2001-2500 serverB.example.com: No on-deck range set serverC.example.com: No on-deck range set ipa-replica-manage dnanextrange-show serverA.example.com serverA.example.com: 2001-2500",
"ipa-replica-manage dnarange-set serverA.example.com 1250-1499",
"ipa-replica-manage dnanextrange-set serverB.example.com 1500-5000",
"ipa subid-find",
"ipa subid-generate --owner=idmuser Added subordinate id \"359dfcef-6b76-4911-bd37-bb5b66b8c418\" Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Description: auto-assigned subid Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"/usr/libexec/ipa/ipa-subids --all-users Found 2 user(s) without subordinate ids Processing user 'user4' (1/2) Processing user 'user5' (2/2) Updated 2 user(s) The ipa-subids command was successful",
"ipa config-mod --user-default-subid=True",
"ipa subid-find --owner=idmuser 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"ipa subid-show 359dfcef-6b76-4911-bd37-bb5b66b8c418 Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536",
"ipa subid-match --subuid=2147483670 1 subordinate id matched Unique ID: 359dfcef-6b76-4911-bd37-bb5b66b8c418 Owner: uid=idmuser SubUID range start: 2147483648 SubUID range size: 65536 SubGID range start: 2147483648 SubGID range size: 65536 Number of entries returned 1",
"[...] subid: sss",
"getsubids idmuser 0: idmuser 2147483648 65536",
"ipa host-add client1.example.com",
"ipa host-add --ip-address=192.168.166.31 client1.example.com",
"ipa host-add --force client1.example.com",
"ipa host-del --updatedns client1.example.com",
"ipa-client-install --force-join",
"User authorized to enroll computers: hostadmin Password for hostadmin @ EXAMPLE.COM :",
"ipa-client-install --keytab /tmp/krb5.keytab",
"[user@client1 ~]USD id admin uid=1254400000(admin) gid=1254400000(admins) groups=1254400000(admins)",
"[user@client1 ~]USD su - idm_user Last login: Thu Oct 18 18:39:11 CEST 2018 from 192.168.122.1 on pts/0 [idm_user@client1 ~]USD",
"ipa service-find old-client-name.example.com",
"find / -name \"*.keytab\"",
"ipa hostgroup-find old-client-name.example.com",
"ipa-client-install --uninstall",
"ipa dnsrecord-del Record name: old-client-client Zone name: idm.example.com No option to delete specific record provided. Delete all? Yes/No (default No): true ------------------------ Deleted record \"old-client-name\"",
"ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM",
"ipa host-del client.example.com",
"hostnamectl set-hostname new-client-name.example.com",
"ipa service-add service_name/new-client-name",
"kinit admin ipa host-disable client.example.com",
"ipa-getkeytab -s server.example.com -p host/client.example.com -k /etc/krb5.keytab -D \"cn=directory manager\" -w password",
"ipa service-add-host principal --hosts=<hostname>",
"ipa service-add HTTP/web.example.com ipa service-add-host HTTP/web.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com ipa-getkeytab -s server.example.com -k /tmp/test.keytab -p HTTP/web.example.com Keytab successfully retrieved and stored in: /tmp/test.keytab",
"kinit -kt /etc/krb5.keytab host/client1.example.com openssl req -newkey rsa:2048 -subj '/CN=web.example.com/O=EXAMPLE.COM' -keyout /etc/pki/tls/web.key -out /tmp/web.csr -nodes Generating a 2048 bit RSA private key .............................................................+++ ............................................................................................+++ Writing new private key to '/etc/pki/tls/private/web.key'",
"ipa cert-request --principal=HTTP/web.example.com web.csr Certificate: MIICETCCAXqgA...[snip] Subject: CN=web.example.com,O=EXAMPLE.COM Issuer: CN=EXAMPLE.COM Certificate Authority Not Before: Tue Feb 08 18:51:51 2011 UTC Not After: Mon Feb 08 18:51:51 2016 UTC Serial number: 1005",
"kinit admin",
"ipa host-add-managedby client2.example.com --hosts=client1.example.com",
"kinit -kt /etc/krb5.keytab host/client1.example.com",
"ipa-getkeytab -s server.example.com -k /tmp/client2.keytab -p host/client2.example.com Keytab successfully retrieved and stored in: /tmp/client2.keytab",
"kinit -kt /etc/krb5.keytab host/[email protected]",
"kinit -kt /etc/httpd/conf/krb5.keytab HTTP/[email protected]",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found",
"ipa hostgroup-find ------------------- 1 hostgroup matched ------------------- Host-group: ipaservers Description: IPA server hosts ---------------------------- Number of entries returned 1 ----------------------------",
"ipa hostgroup-find --all ------------------- 1 hostgroup matched ------------------- dn: cn=ipaservers,cn=hostgroups,cn=accounts,dc=idm,dc=local Host-group: ipaservers Description: IPA server hosts Member hosts: xxx.xxx.xxx.xxx ipauniqueid: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx objectclass: top, groupOfNames, nestedGroup, ipaobject, ipahostgroup ---------------------------- Number of entries returned 1 ----------------------------",
"ipa hostgroup-add --desc ' My new host group ' group_name --------------------- Added hostgroup \"group_name\" --------------------- Host-group: group_name Description: My new host group ---------------------",
"ipa hostgroup-del group_name -------------------------- Deleted hostgroup \"group_name\" --------------------------",
"ipa hostgroup-add-member group_name --hosts example_member Host-group: group_name Description: My host group Member hosts: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member group_name --hostgroups nested_group Host-group: group_name Description: My host group Member host-groups: nested_group ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }",
"ipa hostgroup-remove-member group_name --hosts example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------",
"ipa hostgroup-remove-member group_name --hostgroups example_member Host-group: group_name Description: My host group ------------------------- Number of members removed 1 -------------------------",
"ipa hostgroup- remove -member group_name --hosts={ host1,host2 } --hostgroups={ group1,group2 }",
"ipa hostgroup-add-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-add-member-manager group_name --groups admin_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: admin_group Membership managed by users: example_member ------------------------- Number of members added 1 -------------------------",
"ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Membership managed by groups: admin_group Membership managed by users: example_member",
"ipa hostgroup-remove-member-manager group_name --user example_member Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name Membership managed by groups: nested_group --------------------------- Number of members removed 1 ---------------------------",
"ipa hostgroup-remove-member-manager group_name --groups nested_group Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins Member of netgroups: group_name --------------------------- Number of members removed 1 ---------------------------",
"ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: project_admins",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-present.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is present - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are present in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-present-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member hosts: db.idm.example.com Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager user example_member is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member - name: Ensure member manager group project_admins is present for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_group: project_admins",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/add-member-managers-host-groups.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2 Membership managed by groups: project_admins Membership managed by users: example_member",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure host-group databases is absent - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases host: - db.idm.example.com action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases Member host-groups: mysql-server, oracle-server",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure hosts and hostgroups are absent in existing databases hostgroup - ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases hostgroup: - mysql-server - oracle-server action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-or-hostgroups-are-absent-in-hostgroup.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases Host-group: databases",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hostgroups hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - Ensure host-group databases is absent ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: databases state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hostgroup-is-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"kinit admin Password for [email protected]:",
"ipa hostgroup-show databases ipa: ERROR: databases: host group not found",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle host group membership management hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure member manager host and host group members are absent for group_name ipahostgroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: group_name membermanager_user: example_member membermanager_group: project_admins action: member state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-member-managers-host-groups-are-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipaserver]USD ipa hostgroup-show group_name Host-group: group_name Member hosts: server.idm.example.com Member host-groups: testhostgroup2",
"ipa hbacrule-add Rule name: rule_name --------------------------- Added HBAC rule \" rule_name \" --------------------------- Rule name: rule_name Enabled: TRUE",
"ipa hbacrule-add-user --users= sysadmin Rule name: rule_name Rule name: rule_name Enabled: True Users: sysadmin ------------------------- Number of members added 1 -------------------------",
"ipa hbacrule-mod rule_name --hostcat=all ------------------------------ Modified HBAC rule \" rule_name \" ------------------------------ Rule name: rule_name Host category: all Enabled: TRUE Users: sysadmin",
"ipa hbacrule-mod rule_name --servicecat=all ------------------------------ Modified HBAC rule \" rule_name \" ------------------------------ Rule name: rule_name Host category: all Service category: all Enabled: True Users: sysadmin",
"ipa hbactest --user= sysadmin --host=server.idm.example.com --service=sudo --rules= rule_name --------------------- Access granted: True --------------------- Matched rules: rule_name",
"ipa hbacrule-add --hostcat=all rule2_name ipa hbacrule-add-user --users sysadmin rule2_name ipa hbacrule-add-service --hbacsvcs=sshd rule2_name Rule name: rule2_name Host category: all Enabled: True Users: admin HBAC Services: sshd ------------------------- Number of members added 1 -------------------------",
"ipa hbactest --user= sysadmin --host=server.idm.example.com --service=sudo --rules= rule_name --rules= rule2_name -------------------- Access granted: True -------------------- Matched rules: rule_name Not matched rules: rule2_name",
"ipa hbacrule-disable allow_all ------------------------------ Disabled HBAC rule \"allow_all\" ------------------------------",
"ipa hbacsvc-add tftp ------------------------- Added HBAC service \"tftp\" ------------------------- Service name: tftp",
"ipa hbacsvcgroup-add Service group name: login -------------------------------- Added HBAC service group \" login \" -------------------------------- Service group name: login",
"ipa hbacsvcgroup-add-member Service group name: login [member HBAC service]: sshd Service group name: login Member HBAC service: sshd ------------------------- Number of members added 1 -------------------------",
"[ipaserver] server.idm.example.com",
"--- - name: Playbook to handle hbacrules hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: # Ensure idm_user can access client.idm.example.com via the sshd service - ipahbacrule: ipaadmin_password: \"{{ ipaadmin_password }}\" name: login user: idm_user host: client.idm.example.com hbacsvc: - sshd state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-new-hbacrule-present.yml",
"host.example.com,1.2.3.4 ssh-rsa AAA...ZZZ==",
"\"ssh-rsa ABCD1234...== ipaclient.example.com\"",
"ssh-rsa AAA...ZZZ== host.example.com,1.2.3.4",
"ssh-keygen -t rsa -C [email protected] Generating public/private rsa key pair.",
"Enter file in which to save the key (/home/user/.ssh/id_rsa):",
"Enter passphrase (empty for no passphrase): Enter same passphrase again: Your identification has been saved in /home/user/.ssh/id_rsa. Your public key has been saved in /home/user/.ssh/id_rsa.pub. The key fingerprint is: SHA256:ONxjcMX7hJ5zly8F8ID9fpbqcuxQK+ylVLKDMsJPxGA [email protected] The key's randomart image is: +---[RSA 3072]----+ | ..o | | .o + | | E. . o = | | ..o= o . + | | +oS. = + o.| | . .o .* B =.+| | o + . X.+.= | | + o o.*+. .| | . o=o . | +----[SHA256]-----+",
"server.example.com,1.2.3.4 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEApvjBvSFSkTU0WQW4eOweeo0DZZ08F9Ud21xlLy6FOhzwpXFGIyxvXZ52+siHBHbbqGL5+14N7UvElruyslIHx9LYUR/pPKSMXCGyboLy5aTNl5OQ5EHwrhVnFDIKXkvp45945R7SKYCUtRumm0Iw6wq0XD4o+ILeVbV3wmcB1bXs36ZvC/M6riefn9PcJmh6vNCvIsbMY6S+FhkWUTTiOXJjUDYRLlwM273FfWhzHK+SSQXeBp/zIn1gFvJhSZMRi9HZpDoqxLbBB9QIdIw6U4MIjNmKsSI/ASpkFm2GuQ7ZK9KuMItY2AoCuIRmRAdF8iYNHBTXNfFurGogXwRDjQ==",
"cat /home/user/.ssh/host_keys.pub ssh-rsa AAAAB3NzaC1yc2E...tJG1PK2Mq++wQ== server.example.com,1.2.3.4",
"ipa host-mod --sshpubkey=\"ssh-rsa RjlzYQo==\" --updatedns host1.example.com",
"--sshpubkey=\"RjlzYQo==\" --sshpubkey=\"ZEt0TAo==\"",
"ipa host-show client.ipa.test SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa)",
"kinit admin ipa host-mod --sshpubkey= --updatedns host1.example.com",
"ipa host-show client.ipa.test Host name: client.ipa.test Platform: x86_64 Operating system: 4.18.0-240.el8.x86_64 Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Member of host-groups: ipaservers Roles: helpdesk Member of netgroups: test Member of Sudo rule: test2 Member of HBAC rule: test Keytab: True Managed by: client.ipa.test, server.ipa.test Users allowed to retrieve keytab: user1, user2, user3",
"ipa user-mod user --sshpubkey=\"ssh-rsa AAAAB3Nza...SNc5dv== client.example.com\"",
"--sshpubkey=\"AAAAB3Nza...SNc5dv==\" --sshpubkey=\"RjlzYQo...ZEt0TAo=\"",
"ipa user-mod user --sshpubkey=\"USD(cat ~/.ssh/id_rsa.pub)\" --sshpubkey=\"USD(cat ~/.ssh/id_rsa2.pub)\"",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 SSH public key fingerprint: SHA256:qGaqTZM60YPFTngFX0PtNPCKbIuudwf1D2LqmDeOcuA [email protected] (ssh-rsa) Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False",
"ipa user-mod user --sshpubkey=",
"ipa user-show user User login: user First name: user Last name: user Home directory: /home/user Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 1118800019 GID: 1118800019 Account disabled: False Password: False Member of groups: ipausers Subordinate ids: 3167b7cc-8497-4ff2-ab4b-6fcb3cb1b047 Kerberos keys available: False",
"[user@server ~]USD ipa config-mod --domain-resolution-order='ad.example.com:subdomain1.ad.example.com:idm.example.com' Maximum username length: 32 Home directory base: /home Domain Resolution Order: ad.example.com:subdomain1.ad.example.com:idm.example.com",
"id <ad_username> uid=1916901102(ad_username) gid=1916900513(domain users) groups=1916900513(domain users)",
"[user@server ~]USD ipa idview-add ADsubdomain1_first --desc \"ID view for resolving AD subdomain1 first on client1.idm.example.com\" --domain-resolution-order subdomain1.ad.example.com:ad.example.com:idm.example.com --------------------------------- Added ID View \"ADsubdomain1_first\" --------------------------------- ID View Name: ADsubdomain1_first Description: ID view for resolving AD subdomain1 first on client1.idm.example.com Domain Resolution Order: subdomain1.ad.example.com:ad.example.com:idm.example.com",
"[user@server ~]USD ipa idview-apply ADsubdomain1_first --hosts client1.idm.example.com ----------------------------------- Applied ID View \"ADsubdomain1_first\" ----------------------------------- hosts: client1.idm.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------",
"[user@server ~]USD ipa idview-show ADsubdomain1_first --show-hosts ID View Name: ADsubdomain1_first Description: ID view for resolving AD subdomain1 first on client1.idm.example.com Hosts the view applies to: client1.idm.example.com Domain resolution order: subdomain1.ad.example.com:ad.example.com:idm.example.com",
"id <user_from_subdomain1> uid=1916901106(user_from_subdomain1) gid=1916900513(domain users) groups=1916900513(domain users)",
"--- - name: Playbook to add idview and apply it to an IdM client hosts: ipaserver vars_files: - /home/<user_name>/MyPlaybooks/secret.yml become: false gather_facts: false tasks: - name: Add idview and apply it to testhost.idm.example.com ipaidview: ipaadmin_password: \"{{ ipaadmin_password }}\" name: test_idview host: testhost.idm.example.com domain_resolution_order: \"ad.example.com:ipa.example.com\"",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-id-view-with-domain-resolution-order.yml",
"id aduser05 uid=1916901102(aduser05) gid=1916900513(domain users) groups=1916900513(domain users)",
"domain_resolution_order = subdomain1.ad.example.com, ad.example.com, idm.example.com",
"systemctl restart sssd",
"id <user_from_subdomain1> uid=1916901106(user_from_subdomain1) gid=1916900513(domain users) groups=1916900513(domain users)",
"ipa trust-fetch-domains Realm-Name: ad.example.com ------------------------------- No new trust domains were found ------------------------------- ---------------------------- Number of entries returned 0 ----------------------------",
"ipa trust-show Realm-Name: ad.example.com Realm-Name: ad.example.com Domain NetBIOS name: AD Domain Security Identifier: S-1-5-21-796215754-1239681026-23416912 Trust direction: One-way trust Trust type: Active Directory domain UPN suffixes: example.com",
"[global] log level = 10",
"[global] debug = True",
"systemctl restart httpd",
"ipa trust-fetch-domains <ad.example.com>",
"yum module enable idm:DL1 yum distro-sync",
"yum module install idm:DL1/adtrust",
"kinit admin ipa idoverrideuser-add 'default trust view' [email protected]",
"ipa group-add-member admins [email protected]",
"ipa role-add-member 'User Administrator' [email protected]",
"cd ~/ MyPlaybooks /",
"--- - name: Playbook to ensure presence of users in a group hosts: ipaserver - name: Ensure the [email protected] user ID override is a member of the admins group: ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory add-useridoverride-to-group.yml",
"kdestroy -A",
"kinit [email protected] Password for [email protected]:",
"ipa group-add some-new-group ---------------------------- Added group \"some-new-group\" ---------------------------- Group name: some-new-group GID: 1997000011",
"--- - name: Enable AD administrator to act as a FreeIPA admin hosts: ipaserver become: false gather_facts: false tasks: - name: Ensure idoverride for [email protected] in 'default trust view' ipaidoverrideuser: ipaadmin_password: \"{{ ipaadmin_password }}\" idview: \"Default Trust View\" anchor: [email protected]",
"- name: Add the AD administrator as a member of admins ipagroup: ipaadmin_password: \"{{ ipaadmin_password }}\" name: admins idoverrideuser: - [email protected]",
"ansible-playbook --vault-password-file=password_file -v -i inventory enable-ad-admin-to-administer-idm.yml",
"ssh [email protected]@client.idm.example.com",
"klist Ticket cache: KCM:325600500:99540 Default principal: [email protected] Valid starting Expires Service principal 02/04/2024 11:54:16 02/04/2024 21:54:16 krbtgt/[email protected] renew until 02/05/2024 11:54:16",
"ipa user-add testuser --first=test --last=user ------------------------ Added user \"tuser\" ------------------------ User login: tuser First name: test Last name: user Full name: test user [...]",
"kinit admin",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id id13778 ------------------------------------------------ Added Identity Provider reference \"my-keycloak-idp\" ------------------------------------------------ Identity Provider reference name: my-keycloak-idp Authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth Device authorization URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/auth/device Token URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/token User info URI: https://keycloak.idm.example.com:8443/auth/realms/main/protocol/openid-connect/userinfo Client identifier: ipa_oidc_client Scope: openid email External IdP user identifier attribute: email",
"ipa idp-show my-keycloak-idp",
"ipa idp-add my-azure-idp --provider microsoft --organization main --client-id <azure_client_id>",
"ipa idp-add my-google-idp --provider google --client-id <google_client_id>",
"ipa idp-add my-github-idp --provider github --client-id <github_client_id>",
"ipa idp-add my-keycloak-idp --provider keycloak --organization main --base-url keycloak.idm.example.com:8443/auth --client-id <keycloak_client_id>",
"ipa idp-add my-okta-idp --provider okta --base-url dev-12345.okta.com --client-id <okta_client_id>",
"kinit admin",
"ipa idp-find keycloak",
"ipa idp-show my-keycloak-idp",
"ipa idp-mod my-keycloak-idp --secret",
"ipa idp-del my-keycloak-idp",
"ipa user-mod idm-user-with-external-idp --idp my-keycloak-idp --idp-user-id [email protected] --user-auth-type=idp --------------------------------- Modified user \"idm-user-with-external-idp\" --------------------------------- User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] UID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"ipa user-show idm-user-with-external-idp User login: idm-user-with-external-idp First name: Test Last name: User1 Home directory: /home/idm-user-with-external-idp Login shell: /bin/sh Principal name: [email protected] Principal alias: [email protected] Email address: [email protected] ID: 35000003 GID: 35000003 User authentication types: idp External IdP configuration: keycloak External IdP user identifier: [email protected] Account disabled: False Password: False Member of groups: ipausers Kerberos keys available: False",
"kinit -n -c ./fast.ccache",
"klist -c fast.ccache Ticket cache: FILE:fast.ccache Default principal: WELLKNOWN/ANONYMOUS@WELLKNOWN:ANONYMOUS Valid starting Expires Service principal 03/03/2024 13:36:37 03/04/2024 13:14:28 krbtgt/[email protected]",
"kinit -T ./fast.ccache idm-user-with-external-idp Authenticate at https://oauth2.idp.com:8443/auth/realms/master/device?user_code=YHMQ-XKTL and press ENTER.:",
"klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"[user@client ~]USD ssh [email protected] ([email protected]) Authenticate at https://oauth2.idp.com:8443/auth/realms/main/device?user_code=XYFL-ROYR and press ENTER.",
"[idm-user-with-external-idp@client ~]USD klist -C Ticket cache: KCM:0:58420 Default principal: [email protected] Valid starting Expires Service principal 05/09/22 07:48:23 05/10/22 07:03:07 krbtgt/[email protected] config: fast_avail(krbtgt/[email protected]) = yes 08/17/2022 20:22:45 08/18/2022 20:22:43 krbtgt/[email protected] config: pa_type(krbtgt/[email protected]) = 152",
"ipa idp-add MySSO --provider keycloak --org main --base-url keycloak.domain.com:8443/auth --client-id <your-client-id>",
"ipa idp-add MyOkta --provider okta --base-url dev-12345.okta.com --client-id <your-client-id>"
] | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html-single/managing_idm_users_groups_hosts_and_access_control_rules/index |
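Note: the two playbook task listings above for enabling the AD administrator (the ipaidoverrideuser task and the ipagroup task) belong to one playbook file, enable-ad-admin-to-administer-idm.yml, which the ansible-playbook command shown above runs. The following sketch only recombines those listed fragments with conventional YAML indentation; the user principal is kept exactly as it appears in the listings, and the ipaadmin_password variable is assumed to be supplied from the referenced Ansible vault:

    ---
    - name: Enable AD administrator to act as a FreeIPA admin
      hosts: ipaserver
      become: false
      gather_facts: false
      tasks:
        # Create the ID override for the AD administrator in the Default Trust View
        - name: Ensure idoverride for [email protected] in 'default trust view'
          ipaidoverrideuser:
            ipaadmin_password: "{{ ipaadmin_password }}"
            idview: "Default Trust View"
            anchor: [email protected]

        # Make the overridden AD user a member of the IdM admins group
        - name: Add the AD administrator as a member of admins
          ipagroup:
            ipaadmin_password: "{{ ipaadmin_password }}"
            name: admins
            idoverrideuser:
              - [email protected]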
Chapter 3. Enabling user-managed encryption for Azure | Chapter 3. Enabling user-managed encryption for Azure In OpenShift Container Platform version 4.15, you can install a cluster with a user-managed encryption key in Azure. To enable this feature, you can prepare an Azure DiskEncryptionSet before installation, modify the install-config.yaml file, and then complete the installation. 3.1. Preparing an Azure Disk Encryption Set The OpenShift Container Platform installer can use an existing Disk Encryption Set with a user-managed key. To enable this feature, you can create a Disk Encryption Set in Azure and provide the key to the installer. Procedure Set the following environment variables for the Azure resource group by running the following command: USD export RESOURCEGROUP="<resource_group>" \ 1 LOCATION="<location>" 2 1 Specifies the name of the Azure resource group where you will create the Disk Encryption Set and encryption key. To avoid losing access to your keys after destroying the cluster, you should create the Disk Encryption Set in a different resource group than the resource group where you install the cluster. 2 Specifies the Azure location where you will create the resource group. Set the following environment variables for the Azure Key Vault and Disk Encryption Set by running the following command: USD export KEYVAULT_NAME="<keyvault_name>" \ 1 KEYVAULT_KEY_NAME="<keyvault_key_name>" \ 2 DISK_ENCRYPTION_SET_NAME="<disk_encryption_set_name>" 3 1 Specifies the name of the Azure Key Vault you will create. 2 Specifies the name of the encryption key you will create. 3 Specifies the name of the disk encryption set you will create. Set the environment variable for the ID of your Azure Service Principal by running the following command: USD export CLUSTER_SP_ID="<service_principal_id>" 1 1 Specifies the ID of the service principal you will use for this installation. 
Enable host-level encryption in Azure by running the following commands: USD az feature register --namespace "Microsoft.Compute" --name "EncryptionAtHost" USD az feature show --namespace Microsoft.Compute --name EncryptionAtHost USD az provider register -n Microsoft.Compute Create an Azure Resource Group to hold the disk encryption set and associated resources by running the following command: USD az group create --name USDRESOURCEGROUP --location USDLOCATION Create an Azure key vault by running the following command: USD az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION \ --enable-purge-protection true Create an encryption key in the key vault by running the following command: USD az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME \ --protection software Capture the ID of the key vault by running the following command: USD KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query "[id]" -o tsv) Capture the key URL in the key vault by running the following command: USD KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name \ USDKEYVAULT_KEY_NAME --query "[key.kid]" -o tsv) Create a disk encryption set by running the following command: USD az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g \ USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL Grant the DiskEncryptionSet resource access to the key vault by running the following commands: USD DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[identity.principalId]" -o tsv) USD az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id \ USDDES_IDENTITY --key-permissions wrapkey unwrapkey get Grant the Azure Service Principal permission to read the DiskEncryptionSet by running the following commands: USD DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g \ USDRESOURCEGROUP --query "[id]" -o tsv) USD az role assignment create --assignee USDCLUSTER_SP_ID --role "<reader_role>" \ 1 --scope USDDES_RESOURCE_ID -o jsonc 1 Specifies an Azure role with read permissions to the disk encryption set. You can use the Owner role or a custom role with the necessary permissions. 3.2. Next steps Install an OpenShift Container Platform cluster: Install a cluster with customizations on installer-provisioned infrastructure Install a cluster with network customizations on installer-provisioned infrastructure Install a cluster into an existing VNet on installer-provisioned infrastructure Install a private cluster on installer-provisioned infrastructure Install a cluster into a government region on installer-provisioned infrastructure | [
"export RESOURCEGROUP=\"<resource_group>\" \\ 1 LOCATION=\"<location>\" 2",
"export KEYVAULT_NAME=\"<keyvault_name>\" \\ 1 KEYVAULT_KEY_NAME=\"<keyvault_key_name>\" \\ 2 DISK_ENCRYPTION_SET_NAME=\"<disk_encryption_set_name>\" 3",
"export CLUSTER_SP_ID=\"<service_principal_id>\" 1",
"az feature register --namespace \"Microsoft.Compute\" --name \"EncryptionAtHost\"",
"az feature show --namespace Microsoft.Compute --name EncryptionAtHost",
"az provider register -n Microsoft.Compute",
"az group create --name USDRESOURCEGROUP --location USDLOCATION",
"az keyvault create -n USDKEYVAULT_NAME -g USDRESOURCEGROUP -l USDLOCATION --enable-purge-protection true",
"az keyvault key create --vault-name USDKEYVAULT_NAME -n USDKEYVAULT_KEY_NAME --protection software",
"KEYVAULT_ID=USD(az keyvault show --name USDKEYVAULT_NAME --query \"[id]\" -o tsv)",
"KEYVAULT_KEY_URL=USD(az keyvault key show --vault-name USDKEYVAULT_NAME --name USDKEYVAULT_KEY_NAME --query \"[key.kid]\" -o tsv)",
"az disk-encryption-set create -n USDDISK_ENCRYPTION_SET_NAME -l USDLOCATION -g USDRESOURCEGROUP --source-vault USDKEYVAULT_ID --key-url USDKEYVAULT_KEY_URL",
"DES_IDENTITY=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[identity.principalId]\" -o tsv)",
"az keyvault set-policy -n USDKEYVAULT_NAME -g USDRESOURCEGROUP --object-id USDDES_IDENTITY --key-permissions wrapkey unwrapkey get",
"DES_RESOURCE_ID=USD(az disk-encryption-set show -n USDDISK_ENCRYPTION_SET_NAME -g USDRESOURCEGROUP --query \"[id]\" -o tsv)",
"az role assignment create --assignee USDCLUSTER_SP_ID --role \"<reader_role>\" \\ 1 --scope USDDES_RESOURCE_ID -o jsonc"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/installing_on_azure/enabling-user-managed-encryption-azure |
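The chapter above prepares the disk encryption set and key but defers the install-config.yaml changes to the linked installation procedures. As a hedged sketch only (the field names follow the general OpenShift Azure machine pool configuration and are not shown in this chapter), the disk encryption set created above would typically be referenced from the machine pools like this, with the placeholders matching the names exported earlier:

    controlPlane:
      name: master
      platform:
        azure:
          osDisk:
            diskEncryptionSet:
              resourceGroup: <resource_group>
              name: <disk_encryption_set_name>
              subscriptionId: <subscription_id>
    compute:
    - name: worker
      platform:
        azure:
          osDisk:
            diskEncryptionSet:
              resourceGroup: <resource_group>
              name: <disk_encryption_set_name>
              subscriptionId: <subscription_id>

After editing install-config.yaml, continue with the installation as described in the linked procedures.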
2.5. Tuned | 2.5. Tuned Tuned is a profile-based system tuning tool that uses the udev device manager to monitor connected devices, and enables both static and dynamic tuning of system settings. Dynamic tuning is an experimental feature and is turned off by default in Red Hat Enterprise Linux 7. Tuned offers predefined profiles to handle common use cases, such as high throughput, low latency, or power saving. You can modify the Tuned rules for each profile and customize how to tune a particular device. For instructions on how to create custom Tuned profiles, from PowerTOP suggestions, see the section called "Using powertop2tuned" . A profile is automatically set as default based on the product in use. You can use the tuned-adm recommend command to determine which profile Red Hat recommends as the most suitable for a particular product. If no recommendation is available, the balanced profile is set. The balanced profile, which is suitable for most workloads, balances energy consumption, performance, and latency. With the balanced profile, finishing a task quickly with the maximum available computing power usually requires less energy than performing the same task over a longer period of time with less computing power. Using the powersave profile can prolong battery life if a laptop is in an idle state, or performing only computationally undemanding operations. For such operations, higher latency in return for lower energy consumption is generally acceptable, or the operations do not need to be finished quickly, for example using IRC, viewing simple web pages, or playing audio and video files. For detailed information on Tuned and power-saving profiles provided with tuned-adm , see the Tuned chapter in the Red Hat Enterprise Linux 7 Performance Tuning Guide . Using powertop2tuned The powertop2tuned utility allows you to create custom Tuned profiles from PowerTOP suggestions. For information on PowerTOP , see Section 2.2, "PowerTOP" . To install the powertop2tuned utility, use: To create a custom profile, use: By default, powertop2tuned creates profiles in the /etc/tuned/ directory, and bases the custom profile on the currently selected Tuned profile. For safety reasons, all PowerTOP tunings are initially disabled in the new profile. To enable tunings, uncomment them in the /etc/tuned/ profile_name /tuned.conf file. You can use the --enable or -e option to generate a new profile that enables most of the tunings suggested by PowerTOP . Certain potentially problematic tunings, such as the USB autosuspend, are disabled by default and need to be uncommented manually. By default, the new profile is not activated. To activate it, use: For a complete list of options that powertop2tuned supports, use: | [
"~]# yum install tuned-utils",
"~]# powertop2tuned new_profile_name",
"~]# tuned-adm profile new_profile_name",
"~]USD powertop2tuned --help"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/Tuned |
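To make the "uncomment the tunings" step above concrete: a profile generated by powertop2tuned is a directory under /etc/tuned/ containing a tuned.conf file. The sketch below is illustrative only; the profile name my_powertop_profile is a placeholder, and the exact sections written by powertop2tuned depend on the hardware and on the PowerTOP suggestions at generation time:

    # /etc/tuned/my_powertop_profile/tuned.conf (illustrative)
    [main]
    # The generated profile inherits from the profile that was active when it was created
    include=balanced

    # Tunings suggested by PowerTOP are written commented out for safety.
    # Remove the leading '#' to enable one, then re-activate the profile with:
    #   tuned-adm profile my_powertop_profile
    #[usb]
    #autosuspend=1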
Chapter 13. Configuring seccomp profiles | Chapter 13. Configuring seccomp profiles An OpenShift Container Platform container or a pod runs a single application that performs one or more well-defined tasks. The application usually requires only a small subset of the underlying operating system kernel APIs. Secure computing mode, seccomp, is a Linux kernel feature that can be used to limit the process running in a container to only using a subset of the available system calls. The restricted-v2 SCC applies to all newly created pods in 4.18. The default seccomp profile runtime/default is applied to these pods. Seccomp profiles are stored as JSON files on the disk. Important Seccomp profiles cannot be applied to privileged containers. 13.1. Verifying the default seccomp profile applied to a pod OpenShift Container Platform ships with a default seccomp profile that is referenced as runtime/default . In 4.18, newly created pods have the Security Context Constraint (SCC) set to restricted-v2 and the default seccomp profile applies to the pod. Procedure You can verify the Security Context Constraint (SCC) and the default seccomp profile set on a pod by running the following commands: Verify what pods are running in the namespace: USD oc get pods -n <namespace> For example, to verify what pods are running in the workshop namespace run the following: USD oc get pods -n workshop Example output NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s Inspect the pods: USD oc get pod parksmap-1-4xkwf -n workshop -o yaml Example output apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.18" ], "default": true, "dns": {} }] openshift.io/deployment-config.latest-version: "1" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2 1 The restricted-v2 SCC is added by default if your workload does not have access to a different SCC. 2 Newly created pods in 4.18 will have the seccomp profile configured to runtime/default as mandated by the SCC. 13.1.1. Upgraded cluster In clusters upgraded to 4.18 all authenticated users have access to the restricted and restricted-v2 SCC. A workload admitted by the SCC restricted for example, on a OpenShift Container Platform v4.10 cluster when upgraded may get admitted by restricted-v2 . This is because restricted-v2 is the more restrictive SCC between restricted and restricted-v2 . Note The workload must be able to run with retricted-v2 . Conversely with a workload that requires privilegeEscalation: true this workload will continue to have the restricted SCC available for any authenticated user. This is because restricted-v2 does not allow privilegeEscalation . 13.1.2. Newly installed cluster For newly installed OpenShift Container Platform 4.11 or later clusters, the restricted-v2 replaces the restricted SCC as an SCC that is available to be used by any authenticated user. A workload with privilegeEscalation: true , is not admitted into the cluster since restricted-v2 is the only SCC available for authenticated users by default. 
The feature privilegeEscalation is allowed by restricted but not by restricted-v2. More features are denied by restricted-v2 than were allowed by the restricted SCC. A workload with privilegeEscalation: true can still be admitted into a newly installed OpenShift Container Platform 4.11 or later cluster, but only if its ServiceAccount is explicitly granted an SCC that allows privilege escalation. To give the ServiceAccount running the workload access to the restricted SCC (or to any other SCC that can admit this workload) using a RoleBinding, run the following command: USD oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name> In OpenShift Container Platform 4.18, the ability to add the pod annotations seccomp.security.alpha.kubernetes.io/pod: runtime/default and container.seccomp.security.alpha.kubernetes.io/<container_name>: runtime/default is deprecated. 13.2. Configuring a custom seccomp profile You can configure a custom seccomp profile, which allows you to update the filters based on the application requirements. This allows cluster administrators to have greater control over the security of workloads running in OpenShift Container Platform. Seccomp security profiles list the system calls (syscalls) a process can make. Permissions are broader than SELinux, which restricts operations, such as write, system-wide. 13.2.1. Creating seccomp profiles You can use the MachineConfig object to create profiles. Seccomp can restrict system calls (syscalls) within a container, limiting the access of your application. Prerequisites You have cluster admin permissions. You have created a custom security context constraint (SCC). For more information, see Additional resources. Procedure Create the MachineConfig object: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json 13.2.2. Setting up the custom seccomp profile Prerequisites You have cluster administrator permissions. You have created a custom security context constraint (SCC). For more information, see "Additional resources". You have created a custom seccomp profile. Procedure Upload your custom seccomp profile to /var/lib/kubelet/seccomp/<custom-name>.json by using the Machine Config. See "Additional resources" for detailed steps. Update the custom SCC by providing a reference to the created custom seccomp profile: seccompProfiles: - localhost/<custom-name>.json 1 1 Provide the name of your custom seccomp profile.
If the SCC is allowed for the pod, the kubelet runs the pod with the specified seccomp profile. Important Ensure that the seccomp profile is deployed to all worker nodes. Note The custom SCC must have the appropriate priority to be automatically assigned to the pod or meet other conditions required by the pod, such as allowing CAP_NET_ADMIN. 13.3. Additional resources Managing security context constraints Machine Config Overview | [
"oc get pods -n <namespace>",
"oc get pods -n workshop",
"NAME READY STATUS RESTARTS AGE parksmap-1-4xkwf 1/1 Running 0 2m17s parksmap-1-deploy 0/1 Completed 0 2m22s",
"oc get pod parksmap-1-4xkwf -n workshop -o yaml",
"apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] k8s.v1.cni.cncf.io/network-status: |- [{ \"name\": \"ovn-kubernetes\", \"interface\": \"eth0\", \"ips\": [ \"10.131.0.18\" ], \"default\": true, \"dns\": {} }] openshift.io/deployment-config.latest-version: \"1\" openshift.io/deployment-config.name: parksmap openshift.io/deployment.name: parksmap-1 openshift.io/generated-by: OpenShiftWebConsole openshift.io/scc: restricted-v2 1 seccomp.security.alpha.kubernetes.io/pod: runtime/default 2",
"oc -n <workload-namespace> adm policy add-scc-to-user <scc-name> -z <serviceaccount_name>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: custom-seccomp spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,<hash> filesystem: root mode: 0644 path: /var/lib/kubelet/seccomp/seccomp-nostat.json",
"seccompProfiles: - localhost/<custom-name>.json 1",
"spec: securityContext: seccompProfile: type: Localhost localhostProfile: <custom-name>.json 1"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/security_and_compliance/seccomp-profiles |
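The MachineConfig listing above ships the profile as base64-encoded data written to /var/lib/kubelet/seccomp/seccomp-nostat.json, but the chapter never shows the JSON itself. The profile below is a hedged sketch, not the actual seccomp-nostat.json: it uses the standard seccomp profile JSON format accepted by CRI-O and, matching the file name, returns an error for the stat family of system calls while allowing everything else:

    {
      "defaultAction": "SCMP_ACT_ALLOW",
      "architectures": ["SCMP_ARCH_X86_64"],
      "syscalls": [
        {
          "names": ["stat", "fstat", "lstat", "newfstatat", "statx"],
          "action": "SCMP_ACT_ERRNO"
        }
      ]
    }

A base64 string suitable for the source: data:text/plain;charset=utf-8;base64,<hash> field of the MachineConfig can be produced with, for example, base64 -w0 seccomp-nostat.json.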
4.9. Configuring Headless Virtual Machines | 4.9. Configuring Headless Virtual Machines You can configure a headless virtual machine when it is not necessary to access the machine via a graphical console. This headless machine will run without graphical and video devices. This can be useful in situations where the host has limited resources, or to comply with virtual machine usage requirements such as real-time virtual machines. Headless virtual machines can be administered via a Serial Console, SSH, or any other service for command line access. Headless mode is applied via the Console tab when creating or editing virtual machines and machine pools, and when editing templates. It is also available when creating or editing instance types. If you are creating a new headless virtual machine, you can use the Run Once window to access the virtual machine via a graphical console for the first run only. See Virtual Machine Run Once settings explained for more details. Prerequisites If you are editing an existing virtual machine, and the Red Hat Virtualization guest agent has not been installed, note the machine's IP prior to selecting Headless Mode. Before running a virtual machine in headless mode, the GRUB configuration for this machine must be set to console mode, otherwise the guest operating system's boot process will hang. To set console mode, comment out the splashimage flag in the GRUB menu configuration file: #splashimage=(hd0,0)/grub/splash.xpm.gz serial --unit=0 --speed=9600 --parity=no --stop=1 terminal --timeout=2 serial Note Restart the virtual machine if it is running when selecting the Headless Mode option. Configuring a Headless Virtual Machine Click Compute → Virtual Machines and select a virtual machine. Click Edit. Click the Console tab. Select Headless Mode. All other fields in the Graphical Console section are disabled. Optionally, select Enable VirtIO serial console to enable communicating with the virtual machine via serial console. This is highly recommended. Reboot the virtual machine if it is running. See Rebooting a Virtual Machine. | [
"#splashimage=(hd0,0)/grub/splash.xpm.gz serial --unit=0 --speed=9600 --parity=no --stop=1 terminal --timeout=2 serial"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Configuring_Headless_Machines |
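The GRUB directives shown above redirect the GRUB menu itself to the serial line; in many guests the kernel command line also needs console= arguments before boot messages and a login prompt appear on the serial console. The stanza below is a general GRUB legacy sketch and an assumption, not part of the Red Hat Virtualization procedure above; <version> and <root_device> are placeholders:

    # /boot/grub/grub.conf (GRUB legacy) -- illustrative
    #splashimage=(hd0,0)/grub/splash.xpm.gz
    serial --unit=0 --speed=9600 --parity=no --stop=1
    terminal --timeout=2 serial
    title Red Hat Enterprise Linux
            root (hd0,0)
            kernel /vmlinuz-<version> ro root=<root_device> console=tty0 console=ttyS0,9600
            initrd /initrd-<version>.img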
Chapter 3. Configuring Dev Spaces | Chapter 3. Configuring Dev Spaces This section describes configuration methods and options for Red Hat OpenShift Dev Spaces. 3.1. Understanding the CheCluster Custom Resource A default deployment of OpenShift Dev Spaces consists of a CheCluster Custom Resource parameterized by the Red Hat OpenShift Dev Spaces Operator. The CheCluster Custom Resource is a Kubernetes object. You can configure it by editing the CheCluster Custom Resource YAML file. This file contains sections to configure each component: devWorkspace , cheServer , pluginRegistry , devfileRegistry , dashboard and imagePuller . The Red Hat OpenShift Dev Spaces Operator translates the CheCluster Custom Resource into a config map usable by each component of the OpenShift Dev Spaces installation. The OpenShift platform applies the configuration to each component, and creates the necessary Pods. When OpenShift detects changes in the configuration of a component, it restarts the Pods accordingly. Example 3.1. Configuring the main properties of the OpenShift Dev Spaces server component Apply the CheCluster Custom Resource YAML file with suitable modifications in the cheServer component section. The Operator generates the che ConfigMap . OpenShift detects changes in the ConfigMap and triggers a restart of the OpenShift Dev Spaces Pod. Additional resources Understanding Operators " Understanding Custom Resources " 3.1.1. Using dsc to configure the CheCluster Custom Resource during installation To deploy OpenShift Dev Spaces with a suitable configuration, edit the CheCluster Custom Resource YAML file during the installation of OpenShift Dev Spaces. Otherwise, the OpenShift Dev Spaces deployment uses the default configuration parameterized by the Operator. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Create a che-operator-cr-patch.yaml YAML file that contains the subset of the CheCluster Custom Resource to configure: spec: <component> : <property_to_configure> : <value> Deploy OpenShift Dev Spaces and apply the changes described in che-operator-cr-patch.yaml file: Verification Verify the value of the configured property: Additional resources Section 3.1.3, " CheCluster Custom Resource fields reference" . Section 3.3.2, "Advanced configuration options for Dev Spaces server" . 3.1.2. Using the CLI to configure the CheCluster Custom Resource To configure a running instance of OpenShift Dev Spaces, edit the CheCluster Custom Resource YAML file. Prerequisites An instance of OpenShift Dev Spaces on OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Edit the CheCluster Custom Resource on the cluster: Save and close the file to apply the changes. Verification Verify the value of the configured property: Additional resources Section 3.1.3, " CheCluster Custom Resource fields reference" . Section 3.3.2, "Advanced configuration options for Dev Spaces server" . 3.1.3. CheCluster Custom Resource fields reference This section describes all fields available to customize the CheCluster Custom Resource. Example 3.2, "A minimal CheCluster Custom Resource example." Table 3.1, "Development environment configuration options." Table 3.2, " defaultNamespace options." Table 3.3, " defaultPlugins options." Table 3.4, " gatewayContainer options." Table 3.5, " storage options." 
Table 3.6, " per-user PVC strategy options." Table 3.7, " per-workspace PVC strategy options." Table 3.8, " trustedCerts options." Table 3.9, " containerBuildConfiguration options." Table 3.10, "OpenShift Dev Spaces components configuration." Table 3.11, "General configuration settings related to the OpenShift Dev Spaces server component." Table 3.12, " proxy options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.13, "Configuration settings related to the Plug-in registry component used by the OpenShift Dev Spaces installation." Table 3.14, " externalPluginRegistries options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.15, "Configuration settings related to the Devfile registry component used by the OpenShift Dev Spaces installation." Table 3.16, " externalDevfileRegistries options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.17, "Configuration settings related to the Dashboard component used by the OpenShift Dev Spaces installation." Table 3.18, " headerMessage options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.19, "Kubernetes Image Puller component configuration." Table 3.20, "OpenShift Dev Spaces server metrics component configuration." Table 3.21, "Configuration settings that allows users to work with remote Git repositories." Table 3.22, " github options." Table 3.23, " gitlab options." Table 3.24, " bitbucket options." Table 3.25, " azure options." Table 3.26, "Networking, OpenShift Dev Spaces authentication and TLS configuration." Table 3.27, " auth options." Table 3.28, " gateway options." Table 3.30, " deployment options." Table 3.35, " securityContext options." Table 3.31, " containers options." Table 3.32, " containers options." Table 3.33, " request options." Table 3.34, " limits options." Table 3.29, "Configuration of an alternative registry that stores OpenShift Dev Spaces images." Table 3.36, " CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation" Example 3.2. A minimal CheCluster Custom Resource example. apiVersion: org.eclipse.che/v2 kind: CheCluster metadata: name: devspaces namespace: openshift-devspaces spec: components: {} devEnvironments: {} networking: {} Table 3.1. Development environment configuration options. Property Description Default containerBuildConfiguration Container build configuration. defaultComponents Default components applied to DevWorkspaces. These default components are meant to be used when a Devfile, that does not contain any components. defaultEditor The default editor to workspace create with. It could be a plugin ID or a URI. The plugin ID must have publisher/name/version format. The URI must start from http:// or https:// . defaultNamespace User's default namespace. { "autoProvision": true, "template": "<username>-che"} defaultPlugins Default plug-ins applied to DevWorkspaces. 
deploymentStrategy DeploymentStrategy defines the deployment strategy to use to replace existing workspace pods with new ones. The available deployment stragies are Recreate and RollingUpdate . With the Recreate deployment strategy, the existing workspace pod is killed before the new one is created. With the RollingUpdate deployment strategy, a new workspace pod is created and the existing workspace pod is deleted only when the new workspace pod is in a ready state. If not specified, the default Recreate deployment strategy is used. disableContainerBuildCapabilities Disables the container build capabilities. When set to false (the default value), the devEnvironments.security.containerSecurityContext field is ignored, and the following container SecurityContext is applied: containerSecurityContext: allowPrivilegeEscalation: true capabilities: add: - SETGID - SETUID gatewayContainer GatewayContainer configuration. imagePullPolicy ImagePullPolicy defines the imagePullPolicy used for containers in a DevWorkspace. maxNumberOfRunningWorkspacesPerUser The maximum number of running workspaces per user. The value, -1, allows users to run an unlimited number of workspaces. maxNumberOfWorkspacesPerUser Total number of workspaces, both stopped and running, that a user can keep. The value, -1, allows users to keep an unlimited number of workspaces. -1 nodeSelector The node selector limits the nodes that can run the workspace pods. persistUserHome PersistUserHome defines configuration options for persisting the user home directory in workspaces. podSchedulerName Pod scheduler for the workspace pods. If not specified, the pod scheduler is set to the default scheduler on the cluster. projectCloneContainer Project clone container configuration. secondsOfInactivityBeforeIdling Idle timeout for workspaces in seconds. This timeout is the duration after which a workspace will be idled if there is no activity. To disable workspace idling due to inactivity, set this value to -1. 1800 secondsOfRunBeforeIdling Run timeout for workspaces in seconds. This timeout is the maximum duration a workspace runs. To disable workspace run timeout, set this value to -1. -1 security Workspace security configuration. serviceAccount ServiceAccount to use by the DevWorkspace operator when starting the workspaces. serviceAccountTokens List of ServiceAccount tokens that will be mounted into workspace pods as projected volumes. startTimeoutSeconds StartTimeoutSeconds determines the maximum duration (in seconds) that a workspace can take to start before it is automatically failed. If not specified, the default value of 300 seconds (5 minutes) is used. 300 storage Workspaces persistent storage. { "pvcStrategy": "per-user"} tolerations The pod tolerations of the workspace pods limit where the workspace pods can run. trustedCerts Trusted certificate settings. user User configuration. workspacesPodAnnotations WorkspacesPodAnnotations defines additional annotations for workspace pods. Table 3.2. defaultNamespace options. Property Description Default autoProvision Indicates if is allowed to automatically create a user namespace. If it set to false, then user namespace must be pre-created by a cluster administrator. true template If you don't create the user namespaces in advance, this field defines the Kubernetes namespace created when you start your first workspace. You can use <username> and <userid> placeholders, such as che-workspace-<username>. "<username>-che" Table 3.3. defaultPlugins options. 
Property Description Default editor The editor ID to specify default plug-ins for. The plugin ID must have publisher/name/version format. plugins Default plug-in URIs for the specified editor. Table 3.4. gatewayContainer options. Property Description Default env List of environment variables to set in the container. image Container image. Omit it or leave it empty to use the default container image provided by the Operator. imagePullPolicy Image pull policy. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. name Container name. resources Compute resources required by this container. Table 3.5. storage options. Property Description Default perUserStrategyPvcConfig PVC settings when using the per-user PVC strategy. perWorkspaceStrategyPvcConfig PVC settings when using the per-workspace PVC strategy. pvcStrategy Persistent volume claim strategy for the OpenShift Dev Spaces server. The supported strategies are: per-user (all workspaces PVCs in one volume), per-workspace (each workspace is given its own individual PVC) and ephemeral (non-persistent storage where local changes will be lost when the workspace is stopped.) "per-user" Table 3.6. per-user PVC strategy options. Property Description Default claimSize Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. storageClass Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. Table 3.7. per-workspace PVC strategy options. Property Description Default claimSize Persistent Volume Claim size. To update the claim size, the storage class that provisions it must support resizing. storageClass Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. Table 3.8. trustedCerts options. Property Description Default gitTrustedCertsConfigMapName The ConfigMap contains certificates to propagate to the OpenShift Dev Spaces components and to provide a particular configuration for Git. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/deploying-che-with-support-for-git-repositories-with-self-signed-certificates/ The ConfigMap must have a app.kubernetes.io/part-of=che.eclipse.org label. Table 3.9. containerBuildConfiguration options. Property Description Default openShiftSecurityContextConstraint OpenShift security context constraint to build containers. "container-build" Table 3.10. OpenShift Dev Spaces components configuration. Property Description Default cheServer General configuration settings related to the OpenShift Dev Spaces server. { "debug": false, "logLevel": "INFO"} dashboard Configuration settings related to the dashboard used by the OpenShift Dev Spaces installation. devWorkspace DevWorkspace Operator configuration. devfileRegistry Configuration settings related to the devfile registry used by the OpenShift Dev Spaces installation. imagePuller Kubernetes Image Puller configuration. metrics OpenShift Dev Spaces server metrics configuration. { "enable": true} pluginRegistry Configuration settings related to the plug-in registry used by the OpenShift Dev Spaces installation. Table 3.11. General configuration settings related to the OpenShift Dev Spaces server component. Property Description Default clusterRoles Additional ClusterRoles assigned to OpenShift Dev Spaces ServiceAccount. Each role must have a app.kubernetes.io/part-of=che.eclipse.org label. 
The defaults roles are: - <devspaces-namespace>-cheworkspaces-clusterrole - <devspaces-namespace>-cheworkspaces-namespaces-clusterrole - <devspaces-namespace>-cheworkspaces-devworkspace-clusterrole where the <devspaces-namespace> is the namespace where the CheCluster CR is created. The OpenShift Dev Spaces Operator must already have all permissions in these ClusterRoles to grant them. debug Enables the debug mode for OpenShift Dev Spaces server. false deployment Deployment override options. extraProperties A map of additional environment variables applied in the generated che ConfigMap to be used by the OpenShift Dev Spaces server in addition to the values already generated from other fields of the CheCluster custom resource (CR). If the extraProperties field contains a property normally generated in che ConfigMap from other CR fields, the value defined in the extraProperties is used instead. logLevel The log level for the OpenShift Dev Spaces server: INFO or DEBUG . "INFO" proxy Proxy server settings for Kubernetes cluster. No additional configuration is required for OpenShift cluster. By specifying these settings for the OpenShift cluster, you override the OpenShift proxy configuration. Table 3.12. proxy options. Property Description Default credentialsSecretName The secret name that contains user and password for a proxy server. The secret must have a app.kubernetes.io/part-of=che.eclipse.org label. nonProxyHosts A list of hosts that can be reached directly, bypassing the proxy. Specify wild card domain use the following form .<DOMAIN> , for example: - localhost - my.host.com - 123.42.12.32 Use only when a proxy configuration is required. The Operator respects OpenShift cluster-wide proxy configuration, defining nonProxyHosts in a custom resource leads to merging non-proxy hosts lists from the cluster proxy configuration, and the ones defined in the custom resources. See the following page: https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html . port Proxy server port. url URL (protocol+hostname) of the proxy server. Use only when a proxy configuration is required. The Operator respects OpenShift cluster-wide proxy configuration, defining url in a custom resource leads to overriding the cluster proxy configuration. See the following page: https://docs.openshift.com/container-platform/latest/networking/enable-cluster-wide-proxy.html . Table 3.13. Configuration settings related to the Plug-in registry component used by the OpenShift Dev Spaces installation. Property Description Default deployment Deployment override options. disableInternalRegistry Disables internal plug-in registry. externalPluginRegistries External plugin registries. openVSXURL Open VSX registry URL. If omitted an embedded instance will be used. Table 3.14. externalPluginRegistries options. Property Description Default url Public URL of the plug-in registry. Table 3.15. Configuration settings related to the Devfile registry component used by the OpenShift Dev Spaces installation. Property Description Default deployment Deployment override options. disableInternalRegistry Disables internal devfile registry. externalDevfileRegistries External devfile registries serving sample ready-to-use devfiles. Table 3.16. externalDevfileRegistries options. Property Description Default url The public UR of the devfile registry that serves sample ready-to-use devfiles. Table 3.17. Configuration settings related to the Dashboard component used by the OpenShift Dev Spaces installation. 
Property Description Default branding Dashboard branding resources. deployment Deployment override options. headerMessage Dashboard header message. logLevel The log level for the Dashboard. "ERROR" Table 3.18. headerMessage options. Property Description Default show Instructs dashboard to show the message. text Warning message displayed on the user dashboard. Table 3.19. Kubernetes Image Puller component configuration. Property Description Default enable Install and configure the community supported Kubernetes Image Puller Operator. When you set the value to true without providing any specs, it creates a default Kubernetes Image Puller object managed by the Operator. When you set the value to false , the Kubernetes Image Puller object is deleted, and the Operator uninstalled, regardless of whether a spec is provided. If you leave the spec.images field empty, a set of recommended workspace-related images is automatically detected and pre-pulled after installation. Note that while this Operator and its behavior is community-supported, its payload may be commercially-supported for pulling commercially-supported images. spec A Kubernetes Image Puller spec to configure the image puller in the CheCluster. Table 3.20. OpenShift Dev Spaces server metrics component configuration. Property Description Default enable Enables metrics for the OpenShift Dev Spaces server endpoint. true Table 3.21. Configuration settings that allows users to work with remote Git repositories. Property Description Default azure Enables users to work with repositories hosted on Azure DevOps Service (dev.azure.com). bitbucket Enables users to work with repositories hosted on Bitbucket (bitbucket.org or self-hosted). github Enables users to work with repositories hosted on GitHub (github.com or GitHub Enterprise). gitlab Enables users to work with repositories hosted on GitLab (gitlab.com or self-hosted). Table 3.22. github options. Property Description Default disableSubdomainIsolation Disables subdomain isolation. Deprecated in favor of che.eclipse.org/scm-github-disable-subdomain-isolation annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . endpoint GitHub server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . secretName Kubernetes secret, that contains Base64-encoded GitHub OAuth Client id and GitHub OAuth Client secret. See the following page for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-github/ . Table 3.23. gitlab options. Property Description Default endpoint GitLab server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/ . secretName Kubernetes secret, that contains Base64-encoded GitHub Application id and GitLab Application Client secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-gitlab/ . Table 3.24. bitbucket options. Property Description Default endpoint Bitbucket server endpoint URL. Deprecated in favor of che.eclipse.org/scm-server-endpoint annotation. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ . 
secretName Kubernetes secret, that contains Base64-encoded Bitbucket OAuth 1.0 or OAuth 2.0 data. See the following pages for details: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-1-for-a-bitbucket-server/ and https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-the-bitbucket-cloud/ . Table 3.25. azure options. Property Description Default secretName Kubernetes secret, that contains Base64-encoded Azure DevOps Service Application ID and Client Secret. See the following page: https://www.eclipse.org/che/docs/stable/administration-guide/configuring-oauth-2-for-microsoft-azure-devops-services Table 3.26. Networking, OpenShift Dev Spaces authentication and TLS configuration. Property Description Default annotations Defines annotations which will be set for an Ingress (a route for OpenShift platform). The defaults for kubernetes platforms are: kubernetes.io/ingress.class: "nginx" nginx.ingress.kubernetes.io/proxy-read-timeout: "3600", nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600", nginx.ingress.kubernetes.io/ssl-redirect: "true" auth Authentication settings. { "gateway": { "configLabels": { "app": "che", "component": "che-gateway-config" } }} domain For an OpenShift cluster, the Operator uses the domain to generate a hostname for the route. The generated hostname follows this pattern: che-<devspaces-namespace>.<domain>. The <devspaces-namespace> is the namespace where the CheCluster CRD is created. In conjunction with labels, it creates a route served by a non-default Ingress controller. For a Kubernetes cluster, it contains a global ingress domain. There are no default values: you must specify them. hostname The public hostname of the installed OpenShift Dev Spaces server. ingressClassName IngressClassName is the name of an IngressClass cluster resource. If a class name is defined in both the IngressClassName field and the kubernetes.io/ingress.class annotation, IngressClassName field takes precedence. labels Defines labels which will be set for an Ingress (a route for OpenShift platform). tlsSecretName The name of the secret used to set up Ingress TLS termination. If the field is an empty string, the default cluster certificate is used. The secret must have a app.kubernetes.io/part-of=che.eclipse.org label. Table 3.27. auth options. Property Description Default advancedAuthorization Advance authorization settings. Determines which users and groups are allowed to access Che. User is allowed to access OpenShift Dev Spaces if he/she is either in the allowUsers list or is member of group from allowGroups list and not in neither the denyUsers list nor is member of group from denyGroups list. If allowUsers and allowGroups are empty, then all users are allowed to access Che. if denyUsers and denyGroups are empty, then no users are denied to access Che. gateway Gateway settings. { "configLabels": { "app": "che", "component": "che-gateway-config" }} identityProviderURL Public URL of the Identity Provider server. identityToken Identity token to be passed to upstream. There are two types of tokens supported: id_token and access_token . Default value is id_token . This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. oAuthAccessTokenInactivityTimeoutSeconds Inactivity timeout for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means tokens for this client never time out. 
oAuthAccessTokenMaxAgeSeconds Access token max age for tokens to set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. 0 means no expiration. oAuthClientName Name of the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. oAuthScope Access Token Scope. This field is specific to OpenShift Dev Spaces installations made for Kubernetes only and ignored for OpenShift. oAuthSecret Name of the secret set in the OpenShift OAuthClient resource used to set up identity federation on the OpenShift side. For Kubernetes, this can either be the plain text oAuthSecret value, or the name of a kubernetes secret which contains a key oAuthSecret and the value is the secret. NOTE: this secret must exist in the same namespace as the CheCluster resource and contain the label app.kubernetes.io/part-of=che.eclipse.org . Table 3.28. gateway options. Property Description Default configLabels Gateway configuration labels. { "app": "che", "component": "che-gateway-config"} deployment Deployment override options. Since gateway deployment consists of several containers, they must be distinguished in the configuration by their names: - gateway - configbump - oauth-proxy - kube-rbac-proxy kubeRbacProxy Configuration for kube-rbac-proxy within the OpenShift Dev Spaces gateway pod. oAuthProxy Configuration for oauth-proxy within the OpenShift Dev Spaces gateway pod. traefik Configuration for Traefik within the OpenShift Dev Spaces gateway pod. Table 3.29. Configuration of an alternative registry that stores OpenShift Dev Spaces images. Property Description Default hostname An optional hostname or URL of an alternative container registry to pull images from. This value overrides the container registry hostname defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. organization An optional repository name of an alternative registry to pull images from. This value overrides the container registry organization defined in all the default container images involved in a OpenShift Dev Spaces deployment. This is particularly useful for installing OpenShift Dev Spaces in a restricted environment. Table 3.30. deployment options. Property Description Default containers List of containers belonging to the pod. securityContext Security options the pod should run with. Table 3.31. containers options. Property Description Default env List of environment variables to set in the container. image Container image. Omit it or leave it empty to use the default container image provided by the Operator. imagePullPolicy Image pull policy. Default value is Always for nightly , or latest images, and IfNotPresent in other cases. name Container name. resources Compute resources required by this container. Table 3.32. containers options. Property Description Default limits Describes the maximum amount of compute resources allowed. request Describes the minimum amount of compute resources required. Table 3.33. request options. Property Description Default cpu CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. memory Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. 
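The deployment, containers, resources, request, and limits fields documented in the surrounding tables combine as in the following sketch, which overrides the compute resources of the OpenShift Dev Spaces server container in the CheCluster Custom Resource. This is a hedged illustration: the container name che and the concrete values are assumptions, not taken from this reference:

    spec:
      components:
        cheServer:
          deployment:
            containers:
              - name: che
                resources:
                  request:
                    cpu: 500m
                    memory: 1Gi
                  limits:
                    cpu: "1"
                    memory: 2Gi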
Table 3.34. limits options. Property Description Default cpu CPU, in cores. (500m = .5 cores) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. memory Memory, in bytes. (500Gi = 500GiB = 500 * 1024 * 1024 * 1024) If the value is not specified, then the default value is set depending on the component. If value is 0 , then no value is set for the component. Table 3.35. securityContext options. Property Description Default fsGroup A special supplemental group that applies to all containers in a pod. The default value is 1724 . runAsUser The UID to run the entrypoint of the container process. The default value is 1724 . Table 3.36. CheCluster Custom Resource status defines the observed state of OpenShift Dev Spaces installation Property Description Default chePhase Specifies the current phase of the OpenShift Dev Spaces deployment. cheURL Public URL of the OpenShift Dev Spaces server. cheVersion Currently installed OpenShift Dev Spaces version. devfileRegistryURL The public URL of the internal devfile registry. gatewayPhase Specifies the current phase of the gateway deployment. message A human readable message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. pluginRegistryURL The public URL of the internal plug-in registry. reason A brief CamelCase message indicating details about why the OpenShift Dev Spaces deployment is in the current phase. workspaceBaseDomain The resolved workspace base domain. This is either the copy of the explicitly defined property of the same name in the spec or, if it is undefined in the spec and we're running on OpenShift, the automatically resolved basedomain for routes. 3.2. Configuring projects For each user, OpenShift Dev Spaces isolates workspaces in a project. OpenShift Dev Spaces identifies the user project by the presence of labels and annotations. When starting a workspace, if the required project doesn't exist, OpenShift Dev Spaces creates the project using a template name. You can modify OpenShift Dev Spaces behavior by: Section 3.2.1, "Configuring project name" Section 3.2.2, "Provisioning projects in advance" 3.2.1. Configuring project name You can configure the project name template that OpenShift Dev Spaces uses to create the required project when starting a workspace. A valid project name template follows these conventions: The <username> or <userid> placeholder is mandatory. Usernames and IDs cannot contain invalid characters. If the formatting of a username or ID is incompatible with the naming conventions for OpenShift objects, OpenShift Dev Spaces changes the username or ID to a valid name by replacing incompatible characters with the - symbol. OpenShift Dev Spaces evaluates the <userid> placeholder into a 14 character long string, and adds a random six character long suffix to prevent IDs from colliding. The result is stored in the user preferences for reuse. Kubernetes limits the length of a project name to 63 characters. OpenShift limits the length further to 49 characters. Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: devEnvironments: defaultNamespace: template: <workspace_namespace_template_> Example 3.3. 
User workspaces project name template examples User workspaces project name template Resulting project example <username>-devspaces (default) user1-devspaces <userid>-namespace cge1egvsb2nhba-namespace-ul1411 <userid>-aka-<username>-namespace cgezegvsb2nhba-aka-user1-namespace-6m2w2b Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.2.2. Provisioning projects in advance You can provision workspaces projects in advance, rather than relying on automatic provisioning. Repeat the procedure for each user. Procedure Create the <project_name> project for <username> user with the following labels and annotations: kind: Namespace apiVersion: v1 metadata: name: <project_name> 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-namespace annotations: che.eclipse.org/username: <username> 1 Use a project name of your choosing. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3. Configuring server components Section 3.3.1, "Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container" Section 3.3.2, "Advanced configuration options for Dev Spaces server" Section 3.4.1, "Configuring number of replicas for a Red Hat OpenShift Dev Spaces container" 3.3.1. Mounting a Secret or a ConfigMap as a file or an environment variable into a Red Hat OpenShift Dev Spaces container Secrets are OpenShift objects that store sensitive data such as: usernames passwords authentication tokens in an encrypted form. Users can mount a OpenShift Secret that contains sensitive data or a ConfigMap that contains configuration in a OpenShift Dev Spaces managed containers as: a file an environment variable The mounting process uses the standard OpenShift mounting mechanism, but it requires additional annotations and labeling. 3.3.1.1. Mounting a Secret or a ConfigMap as a file into a OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.4. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as a file: che.eclipse.org/mount-as: file - To indicate that a object is mounted as a file. che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path. Example 3.5. 
Example: apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... The OpenShift object can contain several items whose names must match the desired file name mounted into the container. Example 3.6. Example: apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here> or apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <data content here> This results in a file named ca.crt being mounted at the /data path of the OpenShift Dev Spaces container. Important To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3.1.2. Mounting a Secret or a ConfigMap as a subPath into a OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.7. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as a subPath.: che.eclipse.org/mount-as: subpath - To indicate that an object is mounted as a subPath. che.eclipse.org/mount-path: <TARGET_PATH> - To provide a required mount path. Example 3.8. Example: apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... The OpenShift object can contain several items whose names must match the file name mounted into the container. Example 3.9. 
Example: apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here> or apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <data content here> This results in a file named ca.crt being mounted at the /data path of OpenShift Dev Spaces container. Important To make the changes in a OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.3.1.3. Mounting a Secret or a ConfigMap as an environment variable into OpenShift Dev Spaces container Prerequisites A running instance of Red Hat OpenShift Dev Spaces. Procedure Create a new OpenShift Secret or a ConfigMap in the OpenShift project where OpenShift Dev Spaces is deployed. The labels of the object that is about to be created must match the set of labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: <DEPLOYMENT_NAME>-<OBJECT_KIND> The <DEPLOYMENT_NAME> corresponds to the one following deployments: devspaces-dashboard devfile-registry plugin-registry devspaces and <OBJECT_KIND> is either: secret or configmap Example 3.10. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret ... or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap ... Configure the annotation values. Annotations must indicate that the given object is mounted as an environment variable: che.eclipse.org/mount-as: env - to indicate that a object is mounted as an environment variable che.eclipse.org/env-name: <FOO_ENV> - to provide an environment variable name, which is required to mount a object key value Example 3.11. Example: apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret data: mykey: myvalue or apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: myvalue This results in two environment variables: FOO_ENV myvalue being provisioned into the OpenShift Dev Spaces container. If the object provides more than one data item, the environment variable name must be provided for each of the data keys as follows: Example 3.12. 
Example:

apiVersion: v1
kind: Secret
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-secret
stringData:
  mykey: <data_content_here>
  otherkey: <data_content_here>

or

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-settings
  annotations:
    che.eclipse.org/mount-as: env
    che.eclipse.org/mykey_env-name: FOO_ENV
    che.eclipse.org/otherkey_env-name: OTHER_ENV
  labels:
    app.kubernetes.io/part-of: che.eclipse.org
    app.kubernetes.io/component: devspaces-configmap
data:
  mykey: <data content here>
  otherkey: <data content here>

This results in two environment variables:
FOO_ENV
OTHER_ENV
being provisioned into an OpenShift Dev Spaces container.

Note The maximum length of annotation names in an OpenShift object is 63 characters, where 9 characters are reserved for a prefix that ends with / . This acts as a restriction for the maximum length of the key that can be used for the object.

Important To make the changes in the OpenShift Dev Spaces container visible, re-create the Secret or the ConfigMap object entirely.

Additional resources
Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation"
Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource"

3.3.2. Advanced configuration options for Dev Spaces server
The following section describes advanced deployment and configuration methods for the OpenShift Dev Spaces server component.

3.3.2.1. Understanding OpenShift Dev Spaces server advanced configuration
The following section describes the advanced configuration method for the OpenShift Dev Spaces server component deployment. Advanced configuration is necessary to:
Add environment variables not automatically generated by the Operator from the standard CheCluster Custom Resource fields.
Override the properties automatically generated by the Operator from the standard CheCluster Custom Resource fields.
The customCheProperties field, part of the CheCluster Custom Resource server settings, contains a map of additional environment variables to apply to the OpenShift Dev Spaces server component.

Example 3.13. Enabling JSON log output for the OpenShift Dev Spaces server
Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" .

apiVersion: org.eclipse.che/v2
kind: CheCluster
spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGS_APPENDERS_IMPL: json

Note Previous versions of the OpenShift Dev Spaces Operator had a ConfigMap named custom to fulfill this role. If the OpenShift Dev Spaces Operator finds a configMap with the name custom , it adds the data it contains into the customCheProperties field, redeploys OpenShift Dev Spaces, and deletes the custom configMap .

Additional resources
Section 3.1.3, " CheCluster Custom Resource fields reference" .

3.4. Configuring autoscaling
Learn about different aspects of autoscaling for Red Hat OpenShift Dev Spaces.
Section 3.4.1, "Configuring number of replicas for a Red Hat OpenShift Dev Spaces container"
Section 3.4.2, "Configuring machine autoscaling"

3.4.1. Configuring number of replicas for a Red Hat OpenShift Dev Spaces container
To configure the number of replicas for OpenShift Dev Spaces operands using Kubernetes HorizontalPodAutoscaler (HPA), you can define an HPA resource for the deployment. The HPA dynamically adjusts the number of replicas based on the specified metrics.
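After you create the HPA by following the procedure below, you can confirm that it is active and tracking the target Deployment with standard OpenShift tooling. These are generic oc commands rather than Dev Spaces-specific ones; the devspaces-scaler name and the openshift-devspaces namespace match the example in the procedure and may differ in your installation.

$ oc get hpa --namespace openshift-devspaces
$ oc describe hpa devspaces-scaler --namespace openshift-devspaces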
Procedure
Create an HPA resource for a deployment, specifying the target metrics and the desired replica count.

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scaler
  namespace: openshift-devspaces
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: <deployment_name> 1
  ...

1 The <deployment_name> corresponds to one of the following deployments:
devspaces
che-gateway
devspaces-dashboard
plugin-registry
devfile-registry

Example 3.14. Create a HorizontalPodAutoscaler for the devspaces deployment:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: devspaces-scaler
  namespace: openshift-devspaces
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: devspaces
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 75

In this example, the HPA targets the Deployment named devspaces, with a minimum of 2 replicas, a maximum of 5 replicas, and scaling based on CPU utilization.

Additional resources
Horizontal Pod Autoscaling

3.4.2. Configuring machine autoscaling
If you configured the cluster to adjust the number of nodes depending on resource needs, you need additional configuration to maintain the seamless operation of OpenShift Dev Spaces workspaces. Workspaces need special consideration when the autoscaler adds and removes nodes. When a new node is being added by the autoscaler, workspace startup can take longer than usual until the node provisioning is complete. Conversely, when a node is being removed, nodes that are running workspace pods should ideally not be evicted by the autoscaler, to avoid interrupting workspace use and potentially losing unsaved data.

3.4.2.1. When the autoscaler adds a new node
You need to make additional configurations to the OpenShift Dev Spaces installation to ensure proper workspace startup while a new node is being added.

Procedure
In the CheCluster Custom Resource, set the spec.devEnvironments.startTimeoutSeconds field to at least 600 seconds to allow time for a new node to be provisioned when needed during workspace startup.

spec:
  devEnvironments:
    startTimeoutSeconds: 600

In the DevWorkspaceOperatorConfig Custom Resource in the openshift-devspaces namespace, add the FailedScheduling event to the config.workspace.ignoredUnrecoverableEvents field. This prevents workspace startup from failing when not enough nodes are available; once the new node is provisioned, the workspace startup continues.

config:
  workspace:
    ignoredUnrecoverableEvents:
      - FailedScheduling

3.4.2.2. When the autoscaler removes a node
To prevent workspace pods from being evicted when the autoscaler needs to remove a node, add the "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation to every workspace pod.

Procedure
In the CheCluster Custom Resource, add the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation in the spec.devEnvironments.workspacesPodAnnotations field.

spec:
  devEnvironments:
    workspacesPodAnnotations:
      cluster-autoscaler.kubernetes.io/safe-to-evict: "false"

Verification steps
Start a workspace and verify that the workspace pod contains the cluster-autoscaler.kubernetes.io/safe-to-evict: "false" annotation.

3.5. Configuring workspaces globally
This section describes how an administrator can configure workspaces globally.
Section 3.5.1, "Limiting the number of workspaces that a user can keep" Section 3.5.2, "Enabling users to run multiple workspaces simultaneously" Section 3.5.3, "Git with self-signed certificates" Section 3.5.4, "Configuring workspaces nodeSelector" 3.5.1. Limiting the number of workspaces that a user can keep By default, users can keep an unlimited number of workspaces in the dashboard, but you can limit this number to reduce demand on the cluster. This configuration is part of the CheCluster Custom Resource: spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1 1 Sets the maximum number of workspaces per user. The default value, -1 , allows users to keep an unlimited number of workspaces. Use a positive integer to set the maximum number of workspaces per user. Procedure Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces . USD oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}" Configure the maxNumberOfWorkspacesPerUser : 1 The OpenShift Dev Spaces namespace that you got in step 1. 2 Your choice of the <kept_workspaces_limit> value. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.2. Enabling users to run multiple workspaces simultaneously By default, a user can run only one workspace at a time. You can enable users to run multiple workspaces simultaneously. Note If using the default storage method, users might experience problems when concurrently running workspaces if pods are distributed across nodes in a multi-node cluster. Switching from the per-user common storage strategy to the per-workspace storage strategy or using the ephemeral storage type can avoid or solve those problems. This configuration is part of the CheCluster Custom Resource: spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1 1 Sets the maximum number of simultaneously running workspaces per user. The -1 value enables users to run an unlimited number of workspaces. The default value is 1 . Procedure Get the name of the OpenShift Dev Spaces namespace. The default is openshift-devspaces . USD oc get checluster --all-namespaces \ -o=jsonpath="{.items[*].metadata.namespace}" Configure the maxNumberOfRunningWorkspacesPerUser : 1 The OpenShift Dev Spaces namespace that you got in step 1. 2 Your choice of the <running_workspaces_limit> value. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.3. Git with self-signed certificates You can configure OpenShift Dev Spaces to support operations on Git providers that use self-signed certificates. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Git version 2 or later Procedure Create a new ConfigMap with details about the Git server: 1 Path to the self-signed certificate. 2 Optional parameter to specify the Git server URL e.g. https://git.example.com:8443 . When omitted, the self-signed certificate is used for all repositories over HTTPS. Note Certificate files are typically stored as Base64 ASCII files, such as. .pem , .crt , .ca-bundle . All ConfigMaps that hold certificate files should use the Base64 ASCII certificate rather than the binary data certificate. A certificate chain of trust is required. If the ca.crt is signed by a certificate authority (CA), the CA certificate must be included in the ca.crt file. 
Add the required labels to the ConfigMap: Configure OpenShift Dev Spaces operand to use self-signed certificates for Git repositories. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert Verification steps Create and start a new workspace. Every container used by the workspace mounts a special volume that contains a file with the self-signed certificate. The container's /etc/gitconfig file contains information about the Git server host (its URL) and the path to the certificate in the http section (see Git documentation about git-config ). Example 3.15. Contents of an /etc/gitconfig file Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" Section 3.8.3, "Importing untrusted TLS certificates to Dev Spaces" . 3.5.4. Configuring workspaces nodeSelector This section describes how to configure nodeSelector for Pods of OpenShift Dev Spaces workspaces. Procedure Using NodeSelector OpenShift Dev Spaces uses CheCluster Custom Resource to configure nodeSelector : spec: devEnvironments: nodeSelector: key: value This section must contain a set of key=value pairs for each node label to form the nodeSelector rule. Using Taints and Tolerations This works in the opposite way to nodeSelector . Instead of specifying which nodes the Pod will be scheduled on, you specify which nodes the Pod cannot be scheduled on. For more information, see: https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration . OpenShift Dev Spaces uses CheCluster Custom Resource to configure tolerations : spec: devEnvironments: tolerations: - effect: NoSchedule key: key value: value operator: Equal Important nodeSelector must be configured during OpenShift Dev Spaces installation. This prevents existing workspaces from failing to run due to volumes affinity conflict caused by existing workspace PVC and Pod being scheduled in different zones. To avoid Pods and PVCs to be scheduled in different zones on large, multizone clusters, create an additional StorageClass object (pay attention to the allowedTopologies field), which will coordinate the PVC creation process. Pass the name of this newly created StorageClass to OpenShift Dev Spaces through the CheCluster Custom Resource. For more information, see: Section 3.9.1, "Configuring storage classes" . Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.5.5. Open VSX registry URL To search and install extensions, the Microsoft Visual Studio Code - Open Source editor uses an embedded Open VSX registry instance. You can also configure OpenShift Dev Spaces to use another Open VSX registry instance rather than the embedded one. Procedure Set the URL of your Open VSX registry instance in the CheCluster Custom Resource spec.components.pluginRegistry.openVSXURL field. spec: components: # [...] pluginRegistry: openVSXURL: <your_open_vsx_registy> # [...] Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" Open VSX registry 3.5.6. 
Configuring a user namespace This procedure walks you through the process of using OpenShift Dev Spaces to replicate ConfigMaps , Secrets and PersistentVolumeClaim from openshift-devspaces namespace to numerous user-specific namespaces. The OpenShift Dev Spaces automates the synchronization of important configuration data such as shared credentials, configuration files, and certificates to user namespaces. If you make changes to a Kubernetes resource in an openshift-devspaces namespace, OpenShift Dev Spaces will immediately replicate the changes across all users namespaces. In reverse, if a Kubernetes resource is modified in a user namespace, OpenShift Dev Spaces will immediately revert the changes. Procedure Create the ConfigMap below to replicate it to every user namespace. To enhance the configurability, you can customize the ConfigMap by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. kind: ConfigMap apiVersion: v1 metadata: name: user-configmap namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data: ... Example 3.16. Mounting a settings.xml file to a user workspace: kind: ConfigMap apiVersion: v1 metadata: name: user-settings-xml namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 data: settings.xml: | <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd"> <localRepository>/home/user/.m2/repository</localRepository> <interactiveMode>true</interactiveMode> <offline>false</offline> </settings> Create the Secret below to replicate it to every user namespace. To enhance the configurability, you can customize the Secret by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. kind: Secret apiVersion: v1 metadata: name: user-secret namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data: ... Example 3.17. Mounting certificates to a user workspace: kind: Secret apiVersion: v1 metadata: name: user-certificates namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors stringData: trusted-certificates.crt: | ... Note Run update-ca-trust command on workspace startup to import certificates. It can be achieved manually or by adding this command to a postStart event in a devfile. See the Adding event bindings in a devfile . Example 3.18. Mounting environment variables to a user workspace: kind: Secret apiVersion: v1 metadata: name: user-env namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: env stringData: ENV_VAR_1: value_1 ENV_VAR_2: value_2 Create the PersistentVolumeClaim below to replicate it to every user namespace. 
To enhance the configurability, you can customize the PersistentVolumeClaim by adding additional labels and annotations. See the Automatically mounting volumes, configmaps, and secrets for other possible labels and annotations. To modify the 'PersistentVolumeClaim', delete it and create a new one in openshift-devspaces namespace. apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config spec: ... Example 3.19. Mounting a PersistentVolumeClaim to a user workspace: apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: /home/user/data controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi volumeMode: Filesystem Additional resources https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#end-user-guide:mounting-configmaps https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#end-user-guide:mounting-secrets https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#end-user-guide:requesting-persistent-storage-for-workspaces Automatically mounting volumes, configmaps, and secrets 3.6. Caching images for faster workspace start To improve the start time performance of OpenShift Dev Spaces workspaces, use the Image Puller, a OpenShift Dev Spaces-agnostic component that can be used to pre-pull images for OpenShift clusters. The Image Puller is an additional OpenShift deployment which creates a DaemonSet that can be configured to pre-pull relevant OpenShift Dev Spaces workspace images on each node. These images would already be available when a OpenShift Dev Spaces workspace starts, therefore improving the workspace start time. https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#installing-image-puller-on-kubernetes-by-using-cli Section 3.6.2, "Installing Image Puller on OpenShift by using the web console" Section 3.6.1, "Installing Image Puller on OpenShift using CLI" Section 3.6.3, "Configuring Image Puller to pre-pull default Dev Spaces images" Section 3.6.4, "Configuring Image Puller to pre-pull custom images" Section 3.6.5, "Configuring Image Puller to pre-pull additional images" Additional resources Kubernetes Image Puller source code repository 3.6.1. Installing Image Puller on OpenShift using CLI You can install the Kubernetes Image Puller on OpenShift by using OpenShift oc management tool. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . Procedure Gather a list of relevant container images to pull by navigating to the links: https:// <openshift_dev_spaces_fqdn> /plugin-registry/v3/external_images.txt https:// <openshift_dev_spaces_fqdn> /devfile-registry/devfiles/external_images.txt Define the memory requests and limits parameters to ensure pulled containers and the platform have enough memory to run. 
When defining the minimal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT , consider the necessary amount of memory required to run each of the container images to pull. When defining the maximal value for CACHING_MEMORY_REQUEST or CACHING_MEMORY_LIMIT , consider the total memory allocated to the DaemonSet Pods in the cluster: Pulling 5 images on 20 nodes, with a container memory limit of 20Mi requires 2000Mi of memory. Clone the Image Puller repository and get in the directory containing the OpenShift templates: Configure the app.yaml , configmap.yaml and serviceaccount.yaml OpenShift templates using following parameters: Table 3.37. Image Puller OpenShift templates parameters in app.yaml Value Usage Default DEPLOYMENT_NAME The value of DEPLOYMENT_NAME in the ConfigMap kubernetes-image-puller IMAGE Image used for the kubernetes-image-puller deployment registry.redhat.io/devspaces/imagepuller-rhel8 IMAGE_TAG The image tag to pull latest SERVICEACCOUNT_NAME The name of the ServiceAccount created and used by the deployment kubernetes-image-puller Table 3.38. Image Puller OpenShift templates parameters in configmap.yaml Value Usage Default CACHING_CPU_LIMIT The value of CACHING_CPU_LIMIT in the ConfigMap .2 CACHING_CPU_REQUEST The value of CACHING_CPU_REQUEST in the ConfigMap .05 CACHING_INTERVAL_HOURS The value of CACHING_INTERVAL_HOURS in the ConfigMap "1" CACHING_MEMORY_LIMIT The value of CACHING_MEMORY_LIMIT in the ConfigMap "20Mi" CACHING_MEMORY_REQUEST The value of CACHING_MEMORY_REQUEST in the ConfigMap "10Mi" DAEMONSET_NAME The value of DAEMONSET_NAME in the ConfigMap kubernetes-image-puller DEPLOYMENT_NAME The value of DEPLOYMENT_NAME in the ConfigMap kubernetes-image-puller IMAGES The value of IMAGES in the ConfigMap {} NAMESPACE The value of NAMESPACE in the ConfigMap k8s-image-puller NODE_SELECTOR The value of NODE_SELECTOR in the ConfigMap "{}" Table 3.39. Image Puller OpenShift templates parameters in serviceaccount.yaml Value Usage Default SERVICEACCOUNT_NAME The name of the ServiceAccount created and used by the deployment kubernetes-image-puller Create an OpenShift project to host the Image Puller: Process and apply the templates to install the puller: Verification steps Verify the existence of a <kubernetes-image-puller> deployment and a <kubernetes-image-puller> DaemonSet. The DaemonSet needs to have a Pod for each node in the cluster: USD oc get deployment,daemonset,pod --namespace <k8s-image-puller> Verify the values of the <kubernetes-image-puller> ConfigMap . USD oc get configmap <kubernetes-image-puller> --output yaml 3.6.2. Installing Image Puller on OpenShift by using the web console You can install the community supported Kubernetes Image Puller Operator on OpenShift by using the OpenShift web console. Prerequisites An OpenShift web console session by a cluster administrator. See Accessing the web console . Procedure Install the community supported Kubernetes Image Puller Operator. See Installing from OperatorHub using the web console . Create a kubernetes-image-puller KubernetesImagePuller operand from the community supported Kubernetes Image Puller Operator. See Creating applications from installed Operators . 3.6.3. Configuring Image Puller to pre-pull default Dev Spaces images You can configure Kubernetes Image Puller to pre-pull default OpenShift Dev Spaces images. Red Hat OpenShift Dev Spaces operator will control the list of images to pre-pull and automatically updates them on OpenShift Dev Spaces upgrade. 
Prerequisites
Your organization's instance of OpenShift Dev Spaces is installed and running on the Kubernetes cluster.
Image Puller is installed on the Kubernetes cluster.
An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI .

Procedure
Configure the Image Puller to pre-pull OpenShift Dev Spaces images.

oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{ "spec": { "components": { "imagePuller": { "enable": true } } } }'

3.6.4. Configuring Image Puller to pre-pull custom images
You can configure Kubernetes Image Puller to pre-pull custom images.

Prerequisites
Your organization's instance of OpenShift Dev Spaces is installed and running on the Kubernetes cluster.
Image Puller is installed on the Kubernetes cluster.
An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI .

Procedure
Configure the Image Puller to pre-pull custom images.

oc patch checluster/devspaces \
  --namespace openshift-devspaces \
  --type='merge' \
  --patch '{ "spec": { "components": { "imagePuller": { "enable": true, "spec": { "images": "NAME-1=IMAGE-1;NAME-2=IMAGE-2" 1 } } } } }'

1 The semicolon-separated list of images

3.6.5. Configuring Image Puller to pre-pull additional images
You can configure Kubernetes Image Puller to pre-pull additional OpenShift Dev Spaces images.

Prerequisites
Your organization's instance of OpenShift Dev Spaces is installed and running on the Kubernetes cluster.
Image Puller is installed on the Kubernetes cluster.
An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI .

Procedure
Create the k8s-image-puller namespace:

oc create namespace k8s-image-puller

Create the KubernetesImagePuller Custom Resource:

oc apply -f - <<EOF
apiVersion: che.eclipse.org/v1alpha1
kind: KubernetesImagePuller
metadata:
  name: k8s-image-puller-images
  namespace: k8s-image-puller
spec:
  images: "NAME-1=IMAGE-1;NAME-2=IMAGE-2" 1
EOF

1 The semicolon-separated list of images

Additional resources
Kubernetes Image Puller source code repository
community supported Kubernetes Image Puller Operator source code repository

3.7. Configuring observability
To configure OpenShift Dev Spaces observability features, see:
Section 3.7.2.15, "Configuring server logging"
Section 3.7.2.16, "Collecting logs using dsc"
Section 3.7.3, "Monitoring the Dev Workspace Operator"
Section 3.7.4, "Monitoring Dev Spaces Server"

3.7.1. The Woopra telemetry plugin
The Woopra Telemetry Plugin is a plugin built to send telemetry from a Red Hat OpenShift Dev Spaces installation to Segment and Woopra. This plugin is used by Eclipse Che hosted by Red Hat , but any Red Hat OpenShift Dev Spaces deployment can take advantage of this plugin. There are no dependencies other than a valid Woopra domain and Segment Write key. The devfile v2 for the plugin, plugin.yaml , has four environment variables that can be passed to the plugin:
WOOPRA_DOMAIN - The Woopra domain to send events to.
SEGMENT_WRITE_KEY - The write key to send events to Segment and Woopra.
WOOPRA_DOMAIN_ENDPOINT - If you prefer not to pass in the Woopra domain directly, the plugin will get it from a supplied HTTP endpoint that returns the Woopra domain.
SEGMENT_WRITE_KEY_ENDPOINT - If you prefer not to pass in the Segment write key directly, the plugin will get it from a supplied HTTP endpoint that returns the Segment write key.
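As an illustration only, the following sketch shows how these environment variables might be declared in the container component of the plugin's devfile v2 file. The component name and image are hypothetical placeholders, and the actual plugin.yaml published for the Woopra plugin may differ.

schemaVersion: 2.1.0
metadata:
  name: woopra-telemetry-plugin
components:
  - name: woopra-telemetry-backend
    container:
      image: <telemetry_backend_image>
      env:
        - name: WOOPRA_DOMAIN
          value: <your_woopra_domain>
        - name: SEGMENT_WRITE_KEY
          value: <your_segment_write_key>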
To enable the Woopra plugin on the Red Hat OpenShift Dev Spaces installation: Procedure Deploy the plugin.yaml devfile v2 file to an HTTP server with the environment variables set correctly. Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/ 1 plugins: 2 - 'https://your-web-server/plugin.yaml' 1 The editorId to set the telemetry plugin for. 2 The URL to the telemetry plugin's devfile v2 definition. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.7.2. Creating a telemetry plugin This section shows how to create an AnalyticsManager class that extends AbstractAnalyticsManager and implements the following methods: isEnabled() - determines whether the telemetry backend is functioning correctly. This can mean always returning true , or have more complex checks, for example, returning false when a connection property is missing. destroy() - cleanup method that is run before shutting down the telemetry backend. This method sends the WORKSPACE_STOPPED event. onActivity() - notifies that some activity is still happening for a given user. This is mainly used to send WORKSPACE_INACTIVE events. onEvent() - submits telemetry events to the telemetry server, such as WORKSPACE_USED or WORKSPACE_STARTED . increaseDuration() - increases the duration of a current event rather than sending many events in a small frame of time. The following sections cover: Creating a telemetry server to echo events to standard output. Extending the OpenShift Dev Spaces telemetry client and implementing a user's custom backend. Creating a plugin.yaml file representing a Dev Workspace plugin for the custom backend. Specifying of a location of a custom plugin to OpenShift Dev Spaces by setting the workspacesDefaultPlugins attribute from the CheCluster custom resource. 3.7.2.1. Getting started This document describes the steps required to extend the OpenShift Dev Spaces telemetry system to communicate with to a custom backend: Creating a server process that receives events Extending OpenShift Dev Spaces libraries to create a backend that sends events to the server Packaging the telemetry backend in a container and deploying it to an image registry Adding a plugin for your backend and instructing OpenShift Dev Spaces to load the plugin in your Dev Workspaces A finished example of the telemetry backend is available here . 3.7.2.2. Creating a server that receives events For demonstration purposes, this example shows how to create a server that receives events from our telemetry plugin and writes them to standard output. For production use cases, consider integrating with a third-party telemetry system (for example, Segment, Woopra) rather than creating your own telemetry server. In this case, use your provider's APIs to send events from your custom backend to their system. The following Go code starts a server on port 8080 and writes events to standard output: Example 3.20. 
main.go package main import ( "io/ioutil" "net/http" "go.uber.org/zap" ) var logger *zap.SugaredLogger func event(w http.ResponseWriter, req *http.Request) { switch req.Method { case "GET": logger.Info("GET /event") case "POST": logger.Info("POST /event") } body, err := req.GetBody() if err != nil { logger.With("err", err).Info("error getting body") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With("error", err).Info("error reading response body") return } logger.With("body", string(responseBody)).Info("got event") } func activity(w http.ResponseWriter, req *http.Request) { switch req.Method { case "GET": logger.Info("GET /activity, doing nothing") case "POST": logger.Info("POST /activity") body, err := req.GetBody() if err != nil { logger.With("error", err).Info("error getting body") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With("error", err).Info("error reading response body") return } logger.With("body", string(responseBody)).Info("got activity") } } func main() { log, _ := zap.NewProduction() logger = log.Sugar() http.HandleFunc("/event", event) http.HandleFunc("/activity", activity) logger.Info("Added Handlers") logger.Info("Starting to serve") http.ListenAndServe(":8080", nil) } Create a container image based on this code and expose it as a deployment in OpenShift in the openshift-devspaces project. The code for the example telemetry server is available at telemetry-server-example . To deploy the telemetry server, clone the repository and build the container: Both manifest_with_ingress.yaml and manifest_with_route contain definitions for a Deployment and Service. The former also defines a Kubernetes Ingress, while the latter defines an OpenShift Route. In the manifest file, replace the image and host fields to match the image you pushed, and the public hostname of your OpenShift cluster. Then run: 3.7.2.3. Creating the back-end project Note For fast feedback when developing, it is recommended to do development inside a Dev Workspace. This way, you can run the application in a cluster and receive events from the front-end telemetry plugin. Maven Quarkus project scaffolding: Remove the files under src/main/java/mygroup and src/test/java/mygroup . Consult the GitHub packages for the latest version and Maven coordinates of backend-base . Add the following dependencies to your pom.xml : Example 3.21. pom.xml <!-- Required --> <dependency> <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId> <artifactId>backend-base</artifactId> <version>LATEST VERSION FROM STEP</version> </dependency> <!-- Used to make http requests to the telemetry server --> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-jackson</artifactId> </dependency> Create a personal access token with read:packages permissions to download the org.eclipse.che.incubator.workspace-telemetry:backend-base dependency from GitHub packages . Add your GitHub username, personal access token and che-incubator repository details in your ~/.m2/settings.xml file: Example 3.22. 
settings.xml <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd"> <servers> <server> <id>che-incubator</id> <username>YOUR GITHUB USERNAME</username> <password>YOUR GITHUB TOKEN</password> </server> </servers> <profiles> <profile> <id>github</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>central</id> <url>https://repo1.maven.org/maven2</url> <releases><enabled>true</enabled></releases> <snapshots><enabled>false</enabled></snapshots> </repository> <repository> <id>che-incubator</id> <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url> </repository> </repositories> </profile> </profiles> </settings> 3.7.2.4. Creating a concrete implementation of AnalyticsManager and adding specialized logic Create two files in your project under src/main/java/mygroup : MainConfiguration.java - contains configuration provided to AnalyticsManager . AnalyticsManager.java - contains logic specific to the telemetry system. Example 3.23. MainConfiguration.java package org.my.group; import java.util.Optional; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration; import org.eclipse.microprofile.config.inject.ConfigProperty; @Dependent @Alternative public class MainConfiguration extends BaseConfiguration { @ConfigProperty(name = "welcome.message") 1 Optional<String> welcomeMessage; 2 } 1 A MicroProfile configuration annotation is used to inject the welcome.message configuration. For more details on how to set configuration properties specific to your backend, see the Quarkus Configuration Reference Guide . Example 3.24. AnalyticsManager.java package org.my.group; import java.util.HashMap; import java.util.Map; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import javax.inject.Inject; import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager; import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent; import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder; import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder; import org.eclipse.microprofile.rest.client.inject.RestClient; import org.slf4j.Logger; import static org.slf4j.LoggerFactory.getLogger; @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { private static final Logger LOG = getLogger(AbstractAnalyticsManager.class); public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) { super(mainConfiguration, devworkspaceFinder, usernameFinder); mainConfiguration.welcomeMessage.ifPresentOrElse( 1 (str) -> LOG.info("The welcome message is: {}", str), () -> LOG.info("No welcome message provided") ); } @Override public boolean isEnabled() { return true; } @Override public void destroy() {} @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { LOG.info("The received event is: {}", event); 2 } @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { } @Override public void onActivity() {} } 1 Log the welcome message if it was provided. 
2 Log the event received from the front-end plugin. Since org.my.group.AnalyticsManager and org.my.group.MainConfiguration are alternative beans, specify them using the quarkus.arc.selected-alternatives property in src/main/resources/application.properties . Example 3.25. application.properties 3.7.2.5. Running the application within a Dev Workspace Set the DEVWORKSPACE_TELEMETRY_BACKEND_PORT environment variable in the Dev Workspace. Here, the value is set to 4167 . Restart the Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard. Run the following command within a Dev Workspace's terminal window to start the application. Use the --settings flag to specify path to the location of the settings.xml file that contains the GitHub access token. The application now receives telemetry events through port 4167 from the front-end plugin. Verification steps Verify that the following output is logged: To verify that the onEvent() method of AnalyticsManager receives events from the front-end plugin, press the l key to disable Quarkus live coding and edit any file within the IDE. The following output should be logged: 3.7.2.6. Implementing isEnabled() For the purposes of the example, this method always returns true whenever it is called. Example 3.26. AnalyticsManager.java @Override public boolean isEnabled() { return true; } It is possible to put more complex logic in isEnabled() . For example, the hosted OpenShift Dev Spaces Woopra backend checks that a configuration property exists before determining if the backend is enabled. 3.7.2.7. Implementing onEvent() onEvent() sends the event received by the backend to the telemetry system. For the example application, it sends an HTTP POST payload to the /event endpoint from the telemetry server. 3.7.2.7.1. Sending a POST request to the example telemetry server For the following example, the telemetry server application is deployed to OpenShift at the following URL: http://little-telemetry-server-che.apps-crc.testing , where apps-crc.testing is the ingress domain name of the OpenShift cluster. Set up the RESTEasy REST Client by creating TelemetryService.java Example 3.27. TelemetryService.java package org.my.group; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient public interface TelemetryService { @POST @Path("/event") 1 @Consumes(MediaType.APPLICATION_JSON) Response sendEvent(Map<String, Object> payload); } 1 The endpoint to make the POST request to. Specify the base URL for TelemetryService in the src/main/resources/application.properties file: Example 3.28. application.properties Inject TelemetryService into AnalyticsManager and send a POST request in onEvent() Example 3.29. AnalyticsManager.java @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { @Inject @RestClient TelemetryService telemetryService; ... @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { Map<String, Object> payload = new HashMap<String, Object>(properties); payload.put("event", event); telemetryService.sendEvent(payload); } This sends an HTTP request to the telemetry server and automatically delays identical events for a small period of time. The default duration is 1500 milliseconds. 3.7.2.8. 
Implementing increaseDuration() Many telemetry systems recognize event duration. The AbstractAnalyticsManager merges similar events that happen in the same frame of time into one event. This implementation of increaseDuration() is a no-op. This method uses the APIs of the user's telemetry provider to alter the event or event properties to reflect the increased duration of an event. Example 3.30. AnalyticsManager.java @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {} 3.7.2.9. Implementing onActivity() Set an inactive timeout limit, and use onActivity() to send a WORKSPACE_INACTIVE event if the last event time is longer than the timeout. Example 3.31. AnalyticsManager.java public class AnalyticsManager extends AbstractAnalyticsManager { ... private long inactiveTimeLimit = 60000 * 3; ... @Override public void onActivity() { if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) { onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } } 3.7.2.10. Implementing destroy() When destroy() is called, send a WORKSPACE_STOPPED event and shutdown any resources such as connection pools. Example 3.32. AnalyticsManager.java @Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } Running mvn quarkus:dev as described in Section 3.7.2.5, "Running the application within a Dev Workspace" and terminating the application with Ctrl + C sends a WORKSPACE_STOPPED event to the server. 3.7.2.11. Packaging the Quarkus application See the Quarkus documentation for the best instructions to package the application in a container. Build and push the container to a container registry of your choice. 3.7.2.11.1. Sample Dockerfile for building a Quarkus image running with JVM Example 3.33. Dockerfile.jvm FROM registry.access.redhat.com/ubi8/openjdk-11:1.11 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/ COPY --chown=185 target/quarkus-app/*.jar /deployments/ COPY --chown=185 target/quarkus-app/app/ /deployments/app/ COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/ EXPOSE 8080 USER 185 ENTRYPOINT ["java", "-Dquarkus.http.host=0.0.0.0", "-Djava.util.logging.manager=org.jboss.logmanager.LogManager", "-Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}", "-jar", "/deployments/quarkus-run.jar"] To build the image, run: 3.7.2.11.2. Sample Dockerfile for building a Quarkus native image Example 3.34. Dockerfile.native FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 WORKDIR /work/ RUN chown 1001 /work \ && chmod "g+rwX" /work \ && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 CMD ["./application", "-Dquarkus.http.host=0.0.0.0", "-Dquarkus.http.port=USDDEVWORKSPACE_TELEMETRY_BACKEND_PORT}"] To build the image, run: 3.7.2.12. Creating a plugin.yaml for your plugin Create a plugin.yaml devfile v2 file representing a Dev Workspace plugin that runs your custom backend in a Dev Workspace Pod. For more information about devfile v2, see Devfile v2 documentation Example 3.35. 
plugin.yaml schemaVersion: 2.1.0 metadata: name: devworkspace-telemetry-backend-plugin version: 0.0.1 description: A Demo telemetry backend displayName: Devworkspace Telemetry Backend components: - name: devworkspace-telemetry-backend-plugin attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167' container: image: YOUR IMAGE 1 env: - name: WELCOME_MESSAGE 2 value: 'hello world!' 1 Specify the container image built from Section 3.7.2.11, "Packaging the Quarkus application" . 2 Set the value for the welcome.message optional configuration property from Example 4. Typically, the user deploys this file to a corporate web server. This guide demonstrates how to create an Apache web server on OpenShift and host the plugin there. Create a ConfigMap object that references the new plugin.yaml file. Create a deployment, a service, and a route to expose the web server. The deployment references this ConfigMap object and places it in the /var/www/html directory. Example 3.36. manifest.yaml kind: Deployment apiVersion: apps/v1 metadata: name: apache spec: replicas: 1 selector: matchLabels: app: apache template: metadata: labels: app: apache spec: volumes: - name: plugin-yaml configMap: name: telemetry-plugin-yaml defaultMode: 420 containers: - name: apache image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest' ports: - containerPort: 8080 protocol: TCP resources: {} volumeMounts: - name: plugin-yaml mountPath: /var/www/html strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: Service apiVersion: v1 metadata: name: apache spec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apache type: ClusterIP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: apache spec: host: apache-che.apps-crc.testing to: kind: Service name: apache weight: 100 port: targetPort: 8080 wildcardPolicy: None Verification steps After the deployment has started, confirm that plugin.yaml is available in the web server: 3.7.2.13. Specifying the telemetry plugin in a Dev Workspace Add the following to the components field of an existing Dev Workspace: Start the Dev Workspace from the OpenShift Dev Spaces dashboard. Verification steps Verify that the telemetry plugin container is running in the Dev Workspace pod. Here, this is verified by checking the Workspace view within the editor. Edit files within the editor and observe their events in the example telemetry server's logs. 3.7.2.14. Applying the telemetry plugin for all Dev Workspaces Set the telemetry plugin as a default plugin. Default plugins are applied on Dev Workspace startup for new and existing Dev Workspaces. Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 1 The editor identification to set the default plugins for. 2 List of URLs to devfile v2 plugins. Additional resources Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . Verification steps Start a new or existing Dev Workspace from the Red Hat OpenShift Dev Spaces dashboard. Verify that the telemetry plugin is working by following the verification steps for Section 3.7.2.13, "Specifying the telemetry plugin in a Dev Workspace" . 3.7.2.15. Configuring server logging It is possible to fine-tune the log levels of individual loggers available in the OpenShift Dev Spaces server. 
The log level of the whole OpenShift Dev Spaces server is configured globally using the cheLogLevel configuration property of the Operator. See Section 3.1.3, " CheCluster Custom Resource fields reference" . To set the global log level in installations not managed by the Operator, specify the CHE_LOG_LEVEL environment variable in the che ConfigMap. It is possible to configure the log levels of individual loggers in the OpenShift Dev Spaces server using the CHE_LOGGER_CONFIG environment variable.

3.7.2.15.1. Configuring log levels

Procedure
Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" .

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: " <key1=value1,key2=value2> " 1

1 Comma-separated list of key-value pairs, where keys are the names of the loggers as seen in the OpenShift Dev Spaces server log output and values are the required log levels.

Example 3.37. Configuring debug mode for the WorkspaceManager

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG"

Additional resources
Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation"
Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource"

3.7.2.15.2. Logger naming
The names of the loggers follow the class names of the internal server classes that use those loggers.

3.7.2.15.3. Logging HTTP traffic

Procedure
To log the HTTP traffic between the OpenShift Dev Spaces server and the API server of the Kubernetes or OpenShift cluster, configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" .

spec:
  components:
    cheServer:
      extraProperties:
        CHE_LOGGER_CONFIG: "che.infra.request-logging=TRACE"

Additional resources
Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation"
Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource"

3.7.2.16. Collecting logs using dsc
An installation of Red Hat OpenShift Dev Spaces consists of several containers running in the OpenShift cluster. While it is possible to manually collect logs from each running container, dsc provides commands that automate the process. The following commands are available to collect Red Hat OpenShift Dev Spaces logs from the OpenShift cluster using the dsc tool:

dsc server:logs
Collects existing Red Hat OpenShift Dev Spaces server logs and stores them in a directory on the local machine. By default, logs are downloaded to a temporary directory on the machine. This can be overridden by specifying the -d parameter. For example, to download OpenShift Dev Spaces logs to the /home/user/che-logs/ directory, use the command dsc server:logs -d /home/user/che-logs/
When run, dsc server:logs prints a message in the console specifying the directory that will store the log files.
If Red Hat OpenShift Dev Spaces is installed in a non-default project, dsc server:logs requires the -n <NAMESPACE> parameter, where <NAMESPACE> is the OpenShift project in which Red Hat OpenShift Dev Spaces was installed. For example, to get logs from OpenShift Dev Spaces in the my-namespace project, use the command dsc server:logs -n my-namespace

dsc server:deploy
Logs are automatically collected during the OpenShift Dev Spaces installation when installed using dsc . As with dsc server:logs , the directory where logs are stored can be specified using the -d parameter.
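For example, to collect logs from an installation in a non-default project into a specific local directory, the -d and -n options described above can be combined in a single command; the directory and project name here are the same illustrative values used above.

$ dsc server:logs -d /home/user/che-logs/ -n my-namespace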
Additional resources " dsc reference documentation " 3.7.3. Monitoring the Dev Workspace Operator You can configure the OpenShift in-cluster monitoring stack to scrape metrics exposed by the Dev Workspace Operator. 3.7.3.1. Collecting Dev Workspace Operator metrics To use the in-cluster Prometheus instance to collect, store, and query metrics about the Dev Workspace Operator: Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The devworkspace-controller-metrics Service is exposing metrics on port 8443 . This is preconfigured by default. Procedure Create the ServiceMonitor for detecting the Dev Workspace Operator metrics Service. Example 3.38. ServiceMonitor apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: devworkspace-controller namespace: openshift-devspaces 1 spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token interval: 10s 2 port: metrics scheme: https tlsConfig: insecureSkipVerify: true namespaceSelector: matchNames: - openshift-operators selector: matchLabels: app.kubernetes.io/name: devworkspace-controller 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The rate at which a target is scraped. Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces . Verification For a fresh installation of OpenShift Dev Spaces, generate metrics by creating a OpenShift Dev Spaces workspace from the Dashboard. In the Administrator view of the OpenShift web console, go to Observe Metrics . Run a PromQL query to confirm that the metrics are available. For example, enter devworkspace_started_total and click Run queries . For more metrics, see Section 3.7.3.2, "Dev Workspace-specific metrics" . Tip To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors: Get the name of the Prometheus pod: USD oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}' Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the step: USD oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring Additional resources Querying Prometheus Prometheus metric types 3.7.3.2. Dev Workspace-specific metrics The following tables describe the Dev Workspace-specific metrics exposed by the devworkspace-controller-metrics Service. Table 3.40. Metrics Name Type Description Labels devworkspace_started_total Counter Number of Dev Workspace starting events. source , routingclass devworkspace_started_success_total Counter Number of Dev Workspaces successfully entering the Running phase. source , routingclass devworkspace_fail_total Counter Number of failed Dev Workspaces. source , reason devworkspace_startup_time Histogram Total time taken to start a Dev Workspace, in seconds. source , routingclass Table 3.41. Labels Name Description Values source The controller.devfile.io/devworkspace-source label of the Dev Workspace. string routingclass The spec.routingclass of the Dev Workspace. "basic|cluster|cluster-tls|web-terminal" reason The workspace startup failure reason. "BadRequest|InfrastructureFailure|Unknown" Table 3.42. 
Startup failure reasons Name Description BadRequest Startup failure due to an invalid devfile used to create a Dev Workspace. InfrastructureFailure Startup failure due to the following errors: CreateContainerError , RunContainerError , FailedScheduling , FailedMount . Unknown Unknown failure reason. 3.7.3.3. Viewing Dev Workspace Operator metrics from an OpenShift web console dashboard After configuring the in-cluster Prometheus instance to collect Dev Workspace Operator metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The in-cluster Prometheus instance is collecting metrics. See Section 3.7.3.1, "Collecting Dev Workspace Operator metrics" . Procedure Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label. Note The command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Use this information cautiously. Note The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console. Verification steps In the Administrator view of the OpenShift web console, go to Observe Dashboards . Go to Dashboard Dev Workspace Operator and verify that the dashboard panels contain data. 3.7.3.4. Dashboard for the Dev Workspace Operator The OpenShift web console custom dashboard is based on Grafana 6.x and displays the following metrics from the Dev Workspace Operator. Note Not all Grafana 6.x dashboard features are supported in the OpenShift web console. 3.7.3.4.1. Dev Workspace metrics The Dev Workspace-specific metrics are displayed in the Dev Workspace Metrics panel. Figure 3.1. The Dev Workspace Metrics panel Average workspace start time The average workspace startup duration. Workspace starts The number of successful and failed workspace startups. Dev Workspace successes and failures A comparison between successful and failed Dev Workspace startups. Dev Workspace failure rate The ratio between the number of failed workspace startups and the number of total workspace startups. Dev Workspace startup failure reasons A pie chart that displays the distribution of workspace startup failures: BadRequest InfrastructureFailure Unknown 3.7.3.4.2. Operator metrics The Operator-specific metrics are displayed in the Operator Metrics panel. Figure 3.2. The Operator Metrics panel Webhooks in flight A comparison between the number of different webhook requests. Work queue depth The number of reconcile requests that are in the work queue. Memory Memory usage for the Dev Workspace controller and the Dev Workspace webhook server. Average reconcile counts per second (DWO) The average per-second number of reconcile counts for the Dev Workspace controller. 3.7.4. Monitoring Dev Spaces Server You can configure OpenShift Dev Spaces to expose JVM metrics such as JVM memory and class loading for OpenShift Dev Spaces Server. 3.7.4.1.
Enabling and exposing OpenShift Dev Spaces Server metrics OpenShift Dev Spaces exposes the JVM metrics on port 8087 of the che-host Service. You can configure this behaviour. Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: components: metrics: enable: <boolean> 1 1 true to enable, false to disable. 3.7.4.2. Collecting OpenShift Dev Spaces Server metrics with Prometheus To use the in-cluster Prometheus instance to collect, store, and query JVM metrics for OpenShift Dev Spaces Server: Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . OpenShift Dev Spaces is exposing metrics on port 8087 . See Enabling and exposing OpenShift Dev Spaces server JVM metrics . Procedure Create the ServiceMonitor for detecting the OpenShift Dev Spaces JVM metrics Service. Example 3.39. ServiceMonitor apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: che-host namespace: openshift-devspaces 1 spec: endpoints: - interval: 10s 2 port: metrics scheme: http namespaceSelector: matchNames: - openshift-devspaces 3 selector: matchLabels: app.kubernetes.io/name: devspaces 1 3 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The rate at which a target is scraped. Create a Role and RoleBinding to allow Prometheus to view the metrics. Example 3.40. Role kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-k8s namespace: openshift-devspaces 1 rules: - verbs: - get - list - watch apiGroups: - '' resources: - services - endpoints - pods 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . Example 3.41. RoleBinding kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: view-devspaces-openshift-monitoring-prometheus-k8s namespace: openshift-devspaces 1 subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . Allow the in-cluster Prometheus instance to detect the ServiceMonitor in the OpenShift Dev Spaces namespace. The default OpenShift Dev Spaces namespace is openshift-devspaces . USD oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true Verification In the Administrator view of the OpenShift web console, go to Observe Metrics . Run a PromQL query to confirm that the metrics are available. For example, enter process_uptime_seconds{job="che-host"} and click Run queries . Tip To troubleshoot missing metrics, view the Prometheus container logs for possible RBAC-related errors: Get the name of the Prometheus pod: USD oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}' Print the last 20 lines of the Prometheus container logs from the Prometheus pod from the step: USD oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring Additional resources Querying Prometheus Prometheus metric types 3.7.4.3. 
Viewing OpenShift Dev Spaces Server metrics from an OpenShift web console dashboard After configuring the in-cluster Prometheus instance to collect OpenShift Dev Spaces Server JVM metrics, you can view the metrics on a custom dashboard in the Administrator perspective of the OpenShift web console. Prerequisites Your organization's instance of OpenShift Dev Spaces is installed and running in Red Hat OpenShift. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The in-cluster Prometheus instance is collecting metrics. See Section 3.7.4.2, "Collecting OpenShift Dev Spaces Server metrics with Prometheus" . Procedure Create a ConfigMap for the dashboard definition in the openshift-config-managed project and apply the necessary label. Note The command contains a link to material from the upstream community. This material represents the very latest available content and the most recent best practices. These tips have not yet been vetted by Red Hat's QE department, and they have not yet been proven by a wide user group. Use this information cautiously. Note The dashboard definition is based on Grafana 6.x dashboards. Not all Grafana 6.x dashboard features are supported in the OpenShift web console. Verification steps In the Administrator view of the OpenShift web console, go to Observe Dashboards . Go to Dashboard Che Server JVM and verify that the dashboard panels contain data. Figure 3.3. Quick Facts Figure 3.4. JVM Memory Figure 3.5. JVM Misc Figure 3.6. JVM Memory Pools (heap) Figure 3.7. JVM Memory Pools (Non-Heap) Figure 3.8. Garbage Collection Figure 3.9. Class loading Figure 3.10. Buffer Pools 3.8. Configuring networking Section 3.8.1, "Configuring network policies" Section 3.8.2, "Configuring Dev Spaces hostname" Section 3.8.3, "Importing untrusted TLS certificates to Dev Spaces" Section 3.8.4, "Adding labels and annotations" 3.8.1. Configuring network policies By default, all Pods in an OpenShift cluster can communicate with each other even if they are in different namespaces. In the context of OpenShift Dev Spaces, this makes it possible for a workspace Pod in one user project to send traffic to another workspace Pod in a different user project. For security, multitenant isolation can be configured by using NetworkPolicy objects to restrict all incoming communication to Pods in a user project. However, Pods in the OpenShift Dev Spaces project must be able to communicate with Pods in user projects. Prerequisites The OpenShift cluster has network restrictions such as multitenant isolation. Procedure Apply the allow-from-openshift-devspaces NetworkPolicy to each user project. The allow-from-openshift-devspaces NetworkPolicy allows incoming traffic from the OpenShift Dev Spaces namespace to all Pods in the user project. Example 3.42. allow-from-openshift-devspaces.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-devspaces spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-devspaces 1 podSelector: {} 2 policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The empty podSelector selects all Pods in the project. OPTIONAL: In case you applied Configuring multitenant isolation with network policy , you also must apply allow-from-openshift-apiserver and allow-from-workspaces-namespaces NetworkPolicies to openshift-devspaces .
The allow-from-openshift-apiserver NetworkPolicy allows incoming traffic from openshift-apiserver namespace to the devworkspace-webhook-server enabling webhooks. The allow-from-workspaces-namespaces NetworkPolicy allows incoming traffic from each user project to che-gateway pod. Example 3.43. allow-from-openshift-apiserver.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-apiserver namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/name: devworkspace-webhook-server 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-apiserver policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The podSelector only selects devworkspace-webhook-server pods Example 3.44. allow-from-workspaces-namespaces.yaml apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-workspaces-namespaces namespace: openshift-devspaces 1 spec: podSelector: {} 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: app.kubernetes.io/component: workspaces-namespace policyTypes: - Ingress 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The empty podSelector selects all pods in the OpenShift Dev Spaces namespace. Section 3.2, "Configuring projects" Network isolation Configuring multitenant isolation with network policy 3.8.2. Configuring Dev Spaces hostname This procedure describes how to configure OpenShift Dev Spaces to use custom hostname. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The certificate and the private key files are generated. Important To generate the pair of a private key and certificate, the same certification authority (CA) must be used as for other OpenShift Dev Spaces hosts. Important Ask a DNS provider to point the custom hostname to the cluster ingress. Procedure Pre-create a project for OpenShift Dev Spaces: Create a TLS secret: 1 The TLS secret name 2 A file with the private key 3 A file with the certificate Add the required labels to the secret: 1 The TLS secret name Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 1 Custom Red Hat OpenShift Dev Spaces server hostname 2 The TLS secret name If OpenShift Dev Spaces has been already deployed, wait until the rollout of all OpenShift Dev Spaces components finishes. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.8.3. Importing untrusted TLS certificates to Dev Spaces OpenShift Dev Spaces components communications with external services are encrypted with TLS. They require TLS certificates signed by trusted Certificate Authorities (CA). Therefore, you must import into OpenShift Dev Spaces all untrusted CA chains in use by an external service such as: A proxy An identity provider (OIDC) A source code repositories provider (Git) OpenShift Dev Spaces uses labeled config maps in OpenShift Dev Spaces project as sources for TLS certificates. The config maps can have an arbitrary amount of keys with a random amount of certificates each. 
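The following is a minimal sketch of such a labeled config map. It uses the config map name, key name, and ca-bundle labels that the procedure and verification steps later in this section rely on; the certificate body is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  name: custom-ca-certificates
  namespace: openshift-devspaces
  labels:
    # Labels that mark this config map as a CA bundle source for OpenShift Dev Spaces
    app.kubernetes.io/component: ca-bundle
    app.kubernetes.io/part-of: che.eclipse.org
data:
  # One or more keys, each holding one or more PEM-encoded CA certificates
  custom-ca-certificates.pem: |
    -----BEGIN CERTIFICATE-----
    <PEM_encoded_root_or_intermediate_CA_certificate>
    -----END CERTIFICATE-----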
Note When an OpenShift cluster contains cluster-wide trusted CA certificates added through the cluster-wide-proxy configuration , OpenShift Dev Spaces Operator detects them and automatically injects them into a config map with the config.openshift.io/inject-trusted-cabundle="true" label. Based on this annotation, OpenShift automatically injects the cluster-wide trusted CA certificates inside the ca-bundle.crt key of the config map. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . The openshift-devspaces project exists. For each CA chain to import: the root CA and intermediate certificates, in PEM format, in a ca-cert-for-devspaces- <count> .pem file. Procedure Concatenate all CA chains PEM files to import, into the custom-ca-certificates.pem file, and remove the return character that is incompatible with the Java truststore. Create the custom-ca-certificates config map with the required TLS certificates: Label the custom-ca-certificates config map: Deploy OpenShift Dev Spaces if it hasn't been deployed before. Otherwise, wait until the rollout of OpenShift Dev Spaces components finishes. Restart running workspaces for the changes to take effect. Verification steps Verify that the config map contains your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc get configmap \ --namespace=openshift-devspaces \ --output='jsonpath={.items[0:].data.custom-ca-certificates\.pem}' \ --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org Verify OpenShift Dev Spaces pod contains a volume mounting the ca-certs-merged config map: USD oc get pod \ --selector=app.kubernetes.io/component=devspaces \ --output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' \ --namespace=openshift-devspaces \ | grep ca-certs-merged Verify the OpenShift Dev Spaces server container has your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc exec -t deploy/devspaces \ --namespace=openshift-devspaces \ -- cat /public-certs/custom-ca-certificates.pem Verify in the OpenShift Dev Spaces server logs that the imported certificates count is not null: USD oc logs deploy/devspaces --namespace=openshift-devspaces \ | grep custom-ca-certificates.pem List the SHA256 fingerprints of your certificates: USD for certificate in ca-cert*.pem ; do openssl x509 -in USDcertificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done Verify that OpenShift Dev Spaces server Java truststore contains certificates with the same fingerprint: USD oc exec -t deploy/devspaces --namespace=openshift-devspaces -- \ keytool -list -keystore /home/user/cacerts \ | grep --after-context=1 custom-ca-certificates.pem Start a workspace, get the project name in which it has been created: <workspace_namespace> , and wait for the workspace to be started. Verify that the che-trusted-ca-certs config map contains your custom CA certificates. 
This command returns your custom CA certificates in PEM format: USD oc get configmap che-trusted-ca-certs \ --namespace= <workspace_namespace> \ --output='jsonpath={.data.custom-ca-certificates\.custom-ca-certificates\.pem}' Verify that the workspace pod mounts the che-trusted-ca-certs config map: USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' \ | grep che-trusted-ca-certs Verify that the universal-developer-image container (or the container defined in the workspace devfile) mounts the che-trusted-ca-certs volume: USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].spec.containers[0:]}' \ | jq 'select (.volumeMounts[].name == "che-trusted-ca-certs") | .name' Get the workspace pod name <workspace_pod_name> : USD oc get pod \ --namespace= <workspace_namespace> \ --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' \ --output='jsonpath={.items[0:].metadata.name}' \ Verify that the workspace container has your custom CA certificates. This command returns your custom CA certificates in PEM format: USD oc exec <workspace_pod_name> \ --namespace= <workspace_namespace> \ -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem Additional resources Section 3.5.3, "Git with self-signed certificates" . 3.8.4. Adding labels and annotations 3.8.4.1. Configuring OpenShift Route to work with Router Sharding You can configure labels, annotations, and domains for OpenShift Route to work with Router Sharding . Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI . dsc . See: Section 1.2, "Installing the dsc management tool" . Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: networking: labels: <labels> 1 domain: <domain> 2 annotations: <annotations> 3 1 An unstructured key value map of labels that the target ingress controller uses to filter the set of Routes to service. 2 The DNS name serviced by the target ingress controller. 3 An unstructured key value map stored with a resource. Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.9. Configuring storage Warning OpenShift Dev Spaces does not support the Network File System (NFS) protocol. Section 3.9.1, "Configuring storage classes" Section 3.9.2, "Configuring the storage strategy" Section 3.9.3, "Configuring storage sizes" 3.9.1. Configuring storage classes To configure OpenShift Dev Spaces to use a configured infrastructure storage, install OpenShift Dev Spaces using storage classes. This is especially useful when you want to bind a persistent volume provided by a non-default provisioner. OpenShift Dev Spaces has one component that requires persistent volumes to store data: A OpenShift Dev Spaces workspace. OpenShift Dev Spaces workspaces store source code using volumes, for example /projects volume. Note OpenShift Dev Spaces workspaces source code is stored in the persistent volume only if a workspace is not ephemeral. Persistent volume claims facts: OpenShift Dev Spaces does not create persistent volumes in the infrastructure. 
OpenShift Dev Spaces uses persistent volume claims (PVC) to mount persistent volumes. The Section 1.3.1.2, "Dev Workspace operator" creates persistent volume claims. Define a storage class name in the OpenShift Dev Spaces configuration to use the storage classes feature in the OpenShift Dev Spaces PVC. Procedure Use CheCluster Custom Resource definition to define storage classes: Define storage class names: configure the CheCluster Custom Resource, and install OpenShift Dev Spaces. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . spec: devEnvironments: storage: perUserStrategyPvcConfig: claimSize: <claim_size> 1 storageClass: <storage_class_name> 2 perWorkspaceStrategyPvcConfig: claimSize: <claim_size> 3 storageClass: <storage_class_name> 4 pvcStrategy: <pvc_strategy> 5 1 3 Persistent Volume Claim size. 2 4 Storage class for the Persistent Volume Claim. When omitted or left blank, a default storage class is used. 5 Persistent volume claim strategy. The supported strategies are: per-user (all workspaces Persistent Volume Claims in one volume), per-workspace (each workspace is given its own individual Persistent Volume Claim) and ephemeral (non-persistent storage where local changes will be lost when the workspace is stopped.) Additional resources Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" 3.9.2. Configuring the storage strategy OpenShift Dev Spaces can be configured to provide persistent or non-persistent storage to workspaces by selecting a storage strategy. The selected storage strategy will be applied to all newly created workspaces by default. Users can opt for a non-default storage strategy for their workspace in their devfile or through the URL parameter . Available storage strategies: per-user : Use a single PVC for all workspaces created by a user. per-workspace : Each workspace is given its own PVC. ephemeral : Non-persistent storage; any local changes will be lost when the workspace is stopped. The default storage strategy used in OpenShift Dev Spaces is per-user . Procedure Set the pvcStrategy field in the Che Cluster Custom Resource to per-user , per-workspace or ephemeral . Note You can set this field at installation. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . You can update this field on the command line. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: devEnvironments: storage: pvc: pvcStrategy: 'per-user' 1 1 The available storage strategies are per-user , per-workspace and ephemeral . 3.9.3. Configuring storage sizes You can configure the persistent volume claim (PVC) size using the per-user or per-workspace storage strategies. You must specify the PVC sizes in the CheCluster Custom Resource in the format of a Kubernetes resource quantity . For more details on the available storage strategies, see this page . Default persistent volume claim sizes: per-user: 10Gi per-workspace: 5Gi Procedure Set the appropriate claimSize field for the desired storage strategy in the Che Cluster Custom Resource. Note You can set this field at installation. See Section 3.1.1, "Using dsc to configure the CheCluster Custom Resource during installation" . You can update this field on the command line. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . 
spec: devEnvironments: storage: pvc: pvcStrategy: ' <strategy_name> ' 1 perUserStrategyPvcConfig: 2 claimSize: <resource_quantity> 3 perWorkspaceStrategyPvcConfig: 4 claimSize: <resource_quantity> 5 1 Select the storage strategy: per-user or per-workspace or ephemeral . Note: the ephemeral storage strategy does not use persistent storage, therefore you cannot configure its storage size or other PVC-related attributes. 2 4 Specify a claim size on the line or omit the line to set the default claim size value. The specified claim size is only used when you select this storage strategy. 3 5 The claim size must be specified as a Kubernetes resource quantity . The available quantity units include: Ei , Pi , Ti , Gi , Mi and Ki . 3.10. Configuring dashboard Section 3.10.1, "Configuring getting started samples" Section 3.10.2, "Configuring editors definitions" Section 3.10.3, "Customizing OpenShift Eclipse Che ConsoleLink icon" 3.10.1. Configuring getting started samples This procedure describes how to configure OpenShift Dev Spaces Dashboard to display custom samples. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create a JSON file with the samples configuration. The file must contain an array of objects, where each object represents a sample. cat > my-samples.json <<EOF [ { "displayName": " <display_name> ", 1 "description": " <description> ", 2 "tags": <tags> , 3 "url": " <url> ", 4 "icon": { "base64data": " <base64data> ", 5 "mediatype": " <mediatype> " 6 } } ] EOF 1 The display name of the sample. 2 The description of the sample. 3 The JSON array of tags, for example, ["java", "spring"] . 4 The URL to the repository containing the devfile. 5 The base64-encoded data of the icon. 6 The media type of the icon. For example, image/png . Create a ConfigMap with the samples configuration: oc create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces Add the required labels to the ConfigMap: oc label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces Refresh the OpenShift Dev Spaces Dashboard page to see the new samples. 3.10.2. Configuring editors definitions Learn how to configure OpenShift Dev Spaces editor definitions. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create the my-editor-definition-devfile.yaml YAML file with the editor definition configuration. Important Make sure you provide the actual values for publisher and version under metadata.attributes . They are used to construct the editor id along with editor name in the following format publisher/name/version . Below you can find the supported values, including optional ones: # Version of the devile schema schemaVersion: 2.2.2 # Meta information of the editor metadata: # (MANDATORY) The editor name # Must consist of lower case alphanumeric characters, '-' or '.' name: editor-name displayName: Display Name description: Run Editor Foo on top of Eclipse Che # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version. 
tags: - Tech-Preview # Additional attributes attributes: title: This is my editor # (MANDATORY) The publisher name publisher: publisher # (MANDATORY) The editor version version: version repository: https://github.com/editor/repository/ firstPublicationDate: '2024-01-01' iconMediatype: image/svg+xml iconData: | <icon-content> # List of editor components components: # Name of the component - name: che-code-injector # Configuration of devworkspace-related container container: # Image of the container image: 'quay.io/che-incubator/che-code:insiders' # The command to run in the dockerimage component instead of the default one provided in the image command: - /entrypoint-init-container.sh # (OPTIONAL) List of volumes mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # The path of the mount path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 256Mi # (OPTIONAL) The memory request of the container memoryRequest: 32Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # Name of the component - name: che-code-runtime-description # (OPTIONAL) Map of implementation-dependant free-form YAML attributes attributes: # The component within the architecture app.kubernetes.io/component: che-code-runtime # The name of a higher level application this one is part of app.kubernetes.io/part-of: che-code.eclipse.org # Defines a container component as a "container contribution". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component controller.devfile.io/container-contribution: true container: # Can be a dummy image because the component is expected to be injected into workspace dev component image: quay.io/devfile/universal-developer-image:latest # (OPTIONAL) List of volume mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is the is /<name> path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 1024Mi # (OPTIONAL) The memory request of the container memoryRequest: 256Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # (OPTIONAL) Environment variables used in this container env: - name: ENV_NAME value: value # Component endpoints endpoints: # Name of the editor - name: che-code # (OPTIONAL) Map of implementation-dependant string-based free-form attributes attributes: # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. it should be the URL used to access the editor in this context) type: main # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false. cookiesAuthEnabled: true # Defines an endpoint as "discoverable", meaning that a service should be created using the endpoint name (i.e. 
instead of generating a service name for all endpoints, this endpoint should be statically accessible) discoverable: false # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication. urlRewriteSupported: true # Port number to be used within the container component targetPort: 3100 # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none) exposure: public # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process secure: true # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint protocol: https # Mandatory name that allows referencing the component from other elements - name: checode # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false volume: {ephemeral: true} # (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name events: # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container preStart: - init-container-command # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser postStart: - init-che-code-command # (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands commands: # Mandatory identifier that allows referencing this command - id: init-container-command apply: # Describes the component for the apply command component: che-code-injector # Mandatory identifier that allows referencing this command - id: init-che-code-command # CLI Command executed in an existing component container exec: # Describes component for the exec command component: che-code-runtime-description # The actual command-line string commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &' Create a ConfigMap with the editor definition content: oc create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces Add the required labels to the ConfigMap: oc label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces Refresh the OpenShift Dev Spaces Dashboard page to see new available editor. 3.10.2.1. 
Retrieving the editor definition The editor definition is also served by the OpenShift Dev Spaces dashboard API from the following URL: https:// <openshift_dev_spaces_fqdn> /dashboard/api/editors/devfile?che-editor= <editor id> For the example from Section 3.10.2, "Configuring editors definitions" , the editor definition can be retrieved by accessing the following URL: https:// <openshift_dev_spaces_fqdn> /dashboard/api/editors/devfile?che-editor=publisher/editor-name/version Tip When retrieving the editor definition from within the OpenShift cluster, the OpenShift Dev Spaces dashboard API can be accessed via the dashboard service: http://devspaces-dashboard.openshift-devspaces.svc.cluster.local:8080/dashboard/api/editors/devfile?che-editor= <editor id> Additional resources Devfile documentation {editor-definition-samples-link} 3.10.3. Customizing OpenShift Eclipse Che ConsoleLink icon This procedure describes how to customize Red Hat OpenShift Dev Spaces ConsoleLink icon. Prerequisites An active oc session with administrative permissions to the OpenShift cluster. See Getting started with the CLI . Procedure Create a Secret: oc apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: devspaces-dashboard-customization namespace: openshift-devspaces annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /public/dashboard/assets/branding labels: app.kubernetes.io/component: devspaces-dashboard-secret app.kubernetes.io/part-of: che.eclipse.org data: loader.svg: <Base64_encoded_content_of_the_image> 1 type: Opaque EOF 1 Base64 encoding with disabled line wrapping. Wait until the rollout of devspaces-dashboard finishes. Additional resources Creating custom links in the web console 3.11. Managing identities and authorizations This section describes different aspects of managing identities and authorizations of Red Hat OpenShift Dev Spaces. 3.11.1. Configuring OAuth for Git providers Note To enable the experimental feature that forces a refresh of the personal access token on workspace startup in Red Hat OpenShift Dev Spaces, modify the Custom Resource configuration as follows: spec: components: cheServer: extraProperties: CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: "true" You can configure OAuth between OpenShift Dev Spaces and Git providers, enabling users to work with remote Git repositories: Section 3.11.1.1, "Configuring OAuth 2.0 for GitHub" Section 3.11.1.2, "Configuring OAuth 2.0 for GitLab" Configuring OAuth 2.0 for a Bitbucket Server or OAuth 2.0 for the Bitbucket Cloud Configuring OAuth 1.0 for a Bitbucket Server Section 3.11.1.6, "Configuring OAuth 2.0 for Microsoft Azure DevOps Services" 3.11.1.1. Configuring OAuth 2.0 for GitHub To enable users to work with a remote Git repository that is hosted on GitHub: Set up the GitHub OAuth App (OAuth 2.0). Apply the GitHub OAuth App Secret. 3.11.1.1.1. Setting up the GitHub OAuth App Set up a GitHub OAuth App using OAuth 2.0. Prerequisites You are logged in to GitHub. Procedure Go to https://github.com/settings/applications/new . Enter the following values: Application name : < application name > Homepage URL : https:// <openshift_dev_spaces_fqdn> / Authorization callback URL : https:// <openshift_dev_spaces_fqdn> /api/oauth/callback Click Register application . Click Generate new client secret . Copy and save the GitHub OAuth Client ID for use when applying the GitHub OAuth App Secret. Copy and save the GitHub OAuth Client Secret for use when applying the GitHub OAuth App Secret. 
Additional resources GitHub Docs: Creating an OAuth App 3.11.1.1.2. Applying the GitHub OAuth App Secret Prepare and apply the GitHub OAuth App Secret. Prerequisites Setting up the GitHub OAuth App is completed. The following values, which were generated when setting up the GitHub OAuth App, are prepared: GitHub OAuth Client ID GitHub OAuth Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: github-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: github che.eclipse.org/scm-server-endpoint: <github_server_url> 2 che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3 type: Opaque stringData: id: <GitHub_OAuth_Client_ID> 4 secret: <GitHub_OAuth_Client_Secret> 5 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 This depends on the GitHub product your organization is using: When hosting repositories on GitHub.com or GitHub Enterprise Cloud, omit this line or enter the default https://github.com . When hosting repositories on GitHub Enterprise Server, enter the GitHub Enterprise Server URL. 3 If you are using GitHub Enterprise Server with a disabled subdomain isolation option, you must set the annotation to true , otherwise you can either omit the annotation or set it to false . 4 The GitHub OAuth Client ID . 5 The GitHub OAuth Client Secret . Apply the Secret: Verify in the output that the Secret is created. To configure OAuth 2.0 for another GitHub provider, you have to repeat the steps above and create a second GitHub OAuth Secret with a different name. 3.11.1.2. Configuring OAuth 2.0 for GitLab To enable users to work with a remote Git repository that is hosted using a GitLab instance: Set up the GitLab authorized application (OAuth 2.0). Apply the GitLab authorized application Secret. 3.11.1.2.1. Setting up the GitLab authorized application Set up a GitLab authorized application using OAuth 2.0. Prerequisites You are logged in to GitLab. Procedure Click your avatar and go to Edit profile Applications . Enter OpenShift Dev Spaces as the Name . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback as the Redirect URI . Check the Confidential and Expire access tokens checkboxes. Under Scopes , check the api , write_repository , and openid checkboxes. Click Save application . Copy and save the GitLab Application ID for use when applying the GitLab-authorized application Secret. Copy and save the GitLab Client Secret for use when applying the GitLab-authorized application Secret. Additional resources GitLab Docs: Authorized applications 3.11.1.2.2. Applying the GitLab-authorized application Secret Prepare and apply the GitLab-authorized application Secret. Prerequisites Setting up the GitLab authorized application is completed. The following values, which were generated when setting up the GitLab authorized application, are prepared: GitLab Application ID GitLab Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . 
Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: gitlab-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: gitlab che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2 type: Opaque stringData: id: <GitLab_Application_ID> 3 secret: <GitLab_Client_Secret> 4 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The GitLab server URL . Use https://gitlab.com for the SAAS version. 3 The GitLab Application ID . 4 The GitLab Client Secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.3. Configuring OAuth 2.0 for a Bitbucket Server You can use OAuth 2.0 to enable users to work with a remote Git repository that is hosted on a Bitbucket Server: Set up an OAuth 2.0 application link on the Bitbucket Server. Apply an application link Secret for the Bitbucket Server. 3.11.1.3.1. Setting up an OAuth 2.0 application link on the Bitbucket Server Set up an OAuth 2.0 application link on the Bitbucket Server. Prerequisites You are logged in to the Bitbucket Server. Procedure Go to Administration > Applications > Application links . Select Create link . Select External application and Incoming . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback to the Redirect URL field. Select the Admin - Write checkbox in Application permissions . Click Save . Copy and save the Client ID for use when applying the Bitbucket application link Secret. Copy and save the Client secret for use when applying the Bitbucket application link Secret. Additional resources Atlassian Documentation: Configure an incoming link 3.11.1.3.2. Applying an OAuth 2.0 application link Secret for the Bitbucket Server Prepare and apply the OAuth 2.0 application link Secret for the Bitbucket Server. Prerequisites The application link is set up on the Bitbucket Server. The following values, which were generated when setting up the Bitbucket application link, are prepared: Bitbucket Client ID Bitbucket Client secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: id: <Bitbucket_Client_ID> 3 secret: <Bitbucket_Client_Secret> 4 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The URL of the Bitbucket Server. 3 The Bitbucket Client ID . 4 The Bitbucket Client secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.4. Configuring OAuth 2.0 for the Bitbucket Cloud You can enable users to work with a remote Git repository that is hosted in the Bitbucket Cloud: Set up an OAuth consumer (OAuth 2.0) in the Bitbucket Cloud. Apply an OAuth consumer Secret for the Bitbucket Cloud. 3.11.1.4.1. Setting up an OAuth consumer in the Bitbucket Cloud Set up an OAuth consumer for OAuth 2.0 in the Bitbucket Cloud. Prerequisites You are logged in to the Bitbucket Cloud. Procedure Click your avatar and go to the All workspaces page. Select a workspace and click it. Go to Settings OAuth consumers Add consumer . 
Enter OpenShift Dev Spaces as the Name . Enter https:// <openshift_dev_spaces_fqdn> /api/oauth/callback as the Callback URL . Under Permissions , check all of the Account and Repositories checkboxes, and click Save . Expand the added consumer and then copy and save the Key value for use when applying the Bitbucket OAuth consumer Secret: Copy and save the Secret value for use when applying the Bitbucket OAuth consumer Secret. Additional resources Bitbucket Docs: Use OAuth on Bitbucket Cloud 3.11.1.4.2. Applying an OAuth consumer Secret for the Bitbucket Cloud Prepare and apply an OAuth consumer Secret for the Bitbucket Cloud. Prerequisites The OAuth consumer is set up in the Bitbucket Cloud. The following values, which were generated when setting up the Bitbucket OAuth consumer, are prepared: Bitbucket OAuth consumer Key Bitbucket OAuth consumer Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket type: Opaque stringData: id: <Bitbucket_Oauth_Consumer_Key> 2 secret: <Bitbucket_Oauth_Consumer_Secret> 3 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The Bitbucket OAuth consumer Key . 3 The Bitbucket OAuth consumer Secret . Apply the Secret: Verify in the output that the Secret is created. 3.11.1.5. Configuring OAuth 1.0 for a Bitbucket Server To enable users to work with a remote Git repository that is hosted on a Bitbucket Server: Set up an application link (OAuth 1.0) on the Bitbucket Server. Apply an application link Secret for the Bitbucket Server. 3.11.1.5.1. Setting up an application link on the Bitbucket Server Set up an application link for OAuth 1.0 on the Bitbucket Server. Prerequisites You are logged in to the Bitbucket Server. openssl is installed in the operating system you are using. Procedure On a command line, run the commands to create the necessary files for the steps and for use when applying the application link Secret: Go to Administration Application Links . Enter https:// <openshift_dev_spaces_fqdn> / into the URL field and click Create new link . Under The supplied Application URL has redirected once , check the Use this URL checkbox and click Continue . Enter OpenShift Dev Spaces as the Application Name . Select Generic Application as the Application Type . Enter OpenShift Dev Spaces as the Service Provider Name . Paste the content of the bitbucket-consumer-key file as the Consumer key . Paste the content of the bitbucket-shared-secret file as the Shared secret . Enter <bitbucket_server_url> /plugins/servlet/oauth/request-token as the Request Token URL . Enter <bitbucket_server_url> /plugins/servlet/oauth/access-token as the Access token URL . Enter <bitbucket_server_url> /plugins/servlet/oauth/authorize as the Authorize URL . Check the Create incoming link checkbox and click Continue . Paste the content of the bitbucket-consumer-key file as the Consumer Key . Enter OpenShift Dev Spaces as the Consumer name . Paste the content of the public-stripped.pub file as the Public Key and click Continue . Additional resources Atlassian Documentation: Link to other applications 3.11.1.5.2. 
Applying an application link Secret for the Bitbucket Server Prepare and apply the application link Secret for the Bitbucket Server. Prerequisites The application link is set up on the Bitbucket Server. The following files, which were created when setting up the application link, are prepared: privatepkcs8-stripped.pem bitbucket-consumer-key bitbucket-shared-secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/component: oauth-scm-configuration app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: private.key: <Content_of_privatepkcs8-stripped.pem> 3 consumer.key: <Content_of_bitbucket-consumer-key> 4 shared_secret: <Content_of_bitbucket-shared-secret> 5 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The URL of the Bitbucket Server. 3 The content of the privatepkcs8-stripped.pem file. 4 The content of the bitbucket-consumer-key file. 5 The content of the bitbucket-shared-secret file. Apply the Secret: Verify in the output that the Secret is created. 3.11.1.6. Configuring OAuth 2.0 for Microsoft Azure DevOps Services To enable users to work with a remote Git repository that is hosted on Microsoft Azure Repos: Set up the Microsoft Azure DevOps Services OAuth App (OAuth 2.0). Apply the Microsoft Azure DevOps Services OAuth App Secret. 3.11.1.6.1. Setting up the Microsoft Azure DevOps Services OAuth App Set up a Microsoft Azure DevOps Services OAuth App using OAuth 2.0. Prerequisites You are logged in to Microsoft Azure DevOps Services . Important Third-party application access via OAuth is enabled for your organization. See Change application connection & security policies for your organization . Procedure Visit https://app.vsaex.visualstudio.com/app/register/ . Enter the following values: Company name : OpenShift Dev Spaces Application name : OpenShift Dev Spaces Application website : https:// <openshift_dev_spaces_fqdn> / Authorization callback URL : https:// <openshift_dev_spaces_fqdn> /api/oauth/callback In Select Authorized scopes , select Code (read and write) . Click Create application . Copy and save the App ID for use when applying the Microsoft Azure DevOps Services OAuth App Secret. Click Show to display the Client Secret . Copy and save the Client Secret for use when applying the Microsoft Azure DevOps Services OAuth App Secret. Additional resources Authorize access to REST APIs with OAuth 2.0 Change application connection & security policies for your organization 3.11.1.6.2. Applying the Microsoft Azure DevOps Services OAuth App Secret Prepare and apply the Microsoft Azure DevOps Services Secret. Prerequisites Setting up the Microsoft Azure DevOps Services OAuth App is completed. The following values, which were generated when setting up the Microsoft Azure DevOps Services OAuth App, are prepared: App ID Client Secret An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . 
Procedure Prepare the Secret: kind: Secret apiVersion: v1 metadata: name: azure-devops-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: azure-devops type: Opaque stringData: id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2 secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3 1 The OpenShift Dev Spaces namespace. The default is openshift-devspaces . 2 The Microsoft Azure DevOps Services OAuth App ID . 3 The Microsoft Azure DevOps Services OAuth Client Secret . Apply the Secret: Verify in the output that the Secret is created. Wait for the rollout of the OpenShift Dev Spaces server components to be completed. 3.11.2. Configuring cluster roles for Dev Spaces users You can grant OpenShift Dev Spaces users more cluster permissions by adding cluster roles to those users. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Define the user roles name: 1 Unique resource name. Find out the namespace where the OpenShift Dev Spaces Operator is deployed: Create needed roles: 1 As <verbs> , list all Verbs that apply to all ResourceKinds and AttributeRestrictions contained in this rule. You can use * to represent all verbs. 2 As <apiGroups> , name the APIGroups that contain the resources. 3 As <resources> , list all resources that this rule applies to. You can use * to represent all verbs. Delegate the roles to the OpenShift Dev Spaces Operator: Configure the OpenShift Dev Spaces Operator to delegate the roles to the che service account: Configure the OpenShift Dev Spaces server to delegate the roles to a user: Wait for the rollout of the OpenShift Dev Spaces server components to be completed. Ask the user to log out and log in to have the new roles applied. 3.11.3. Configuring advanced authorization You can determine which users and groups are allowed to access OpenShift Dev Spaces. Prerequisites An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Configure the CheCluster Custom Resource. See Section 3.1.2, "Using the CLI to configure the CheCluster Custom Resource" . spec: networking: auth: advancedAuthorization: allowUsers: - <allow_users> 1 allowGroups: - <allow_groups> 2 denyUsers: - <deny_users> 3 denyGroups: - <deny_groups> 4 1 List of users allowed to access Red Hat OpenShift Dev Spaces. 2 List of groups of users allowed to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only). 3 List of users denied access to Red Hat OpenShift Dev Spaces. 4 List of groups of users denied to access Red Hat OpenShift Dev Spaces (for OpenShift Container Platform only). Wait for the rollout of the OpenShift Dev Spaces server components to be completed. Note To allow a user to access OpenShift Dev Spaces, add them to the allowUsers list. Alternatively, choose a group the user is a member of and add the group to the allowGroups list. To deny a user access to OpenShift Dev Spaces, add them to the denyUsers list. Alternatively, choose a group the user is a member of and add the group to the denyGroups list. If the user is on both allow and deny lists, they are denied access to OpenShift Dev Spaces. If allowUsers and allowGroups are empty, all users are allowed to access OpenShift Dev Spaces except the ones on the deny lists. 
If denyUsers and denyGroups are empty, only the users from allow lists are allowed to access OpenShift Dev Spaces. If both allow and deny lists are empty, all users are allowed to access OpenShift Dev Spaces. 3.11.4. Removing user data in compliance with the GDPR You can remove a user's data on OpenShift Container Platform in compliance with the General Data Protection Regulation (GDPR), which enforces the right of individuals to have their personal data erased. The process for other Kubernetes infrastructures might vary. Follow the user management best practices of the provider you are using for the Red Hat OpenShift Dev Spaces installation. Warning Removing user data as follows is irreversible! All removed data is deleted and unrecoverable! Prerequisites An active oc session with administrative permissions for the OpenShift Container Platform cluster. See Getting started with the OpenShift CLI . Procedure List all the users in the OpenShift cluster using the following command: USD oc get users Delete the user entry: Important If the user has any associated resources (such as projects, roles, or service accounts), you need to delete those first before deleting the user. USD oc delete user <username> Additional resources Chapter 6, Using the Dev Spaces server API Section 3.2.1, "Configuring project name" Chapter 8, Uninstalling Dev Spaces 3.12. Configuring fuse-overlayfs By default, the Universal Developer Image (UDI) contains Podman and Buildah, which you can use to build and push container images within a workspace. However, Podman and Buildah in the UDI are configured to use the vfs storage driver, which does not provide copy-on-write support. For more efficient image management, use the fuse-overlayfs storage driver, which supports copy-on-write in rootless environments. To enable fuse-overlayfs for workspaces on OpenShift versions older than 4.15, the administrator must first enable /dev/fuse access on the cluster by following Section 3.12.1, "Enabling access to /dev/fuse for OpenShift versions older than 4.15" . This is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes . After enabling /dev/fuse access, fuse-overlayfs can be enabled in two ways: For all user workspaces within the cluster. See Section 3.12.2, "Enabling fuse-overlayfs for all workspaces" . For workspaces belonging to certain users. See https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver . 3.12.1. Enabling access to /dev/fuse for OpenShift versions older than 4.15 To use fuse-overlayfs, you must make /dev/fuse accessible to workspace containers first. Note This procedure is not necessary for OpenShift versions 4.15 and later, since the /dev/fuse device is available by default. See Release Notes . Warning Creating MachineConfig resources on an OpenShift cluster is a potentially dangerous task, as you are making advanced, system-level changes to the cluster. View the MachineConfig documentation for more details and possible risks. Prerequisites The Butane tool ( butane ) is installed in the operating system you are using. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Set the environment variable based on the type of your OpenShift cluster: a single-node cluster, or a multi-node cluster with separate control plane and worker nodes.
For a single-node cluster, set: For a multi-node cluster, set: Set the environment variable for the OpenShift Butane config version. This variable is the major and minor version of the OpenShift cluster. For example, 4.12.0 , 4.13.0 , or 4.14.0 . Create a MachineConfig resource that creates a drop-in CRI-O configuration file named 99-podman-fuse in the NODE_ROLE nodes. This configuration file makes access to the /dev/fuse device possible for certain pods. 1 The absolute file path to the new drop-in configuration file for CRI-O. 2 The content of the new drop-in configuration file. 3 Define a podman-fuse workload. 4 The pod annotation that activates the podman-fuse workload settings. 5 List of annotations the podman-fuse workload is allowed to process. 6 List of devices on the host that a user can specify with the io.kubernetes.cri-o.Devices annotation. After applying the MachineConfig resource, scheduling will be temporarily disabled for each node with the worker role as changes are applied. View the nodes' statuses. Example output: Once all nodes with the worker role have a status of Ready , /dev/fuse will be available to any pod with the following annotations. io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse Verification steps Get the name of a node with a worker role: Open an oc debug session to a worker node. Verify that a new CRI-O config file named 99-podman-fuse exists. 3.12.1.1. Using fuse-overlayfs for Podman and Buildah within a workspace Users can follow https://access.redhat.com/documentation/en-us/red_hat_openshift_dev_spaces/3.15/html-single/user_guide/index#end-user-guide:using-the-fuse-overlay-storage-driver to update existing workspaces to use the fuse-overlayfs storage driver for Podman and Buildah. 3.12.2. Enabling fuse-overlayfs for all workspaces Prerequisites The Section 3.12.1, "Enabling access to /dev/fuse for OpenShift versions older than 4.15" section has been completed. This is not required for OpenShift versions 4.15 and later. An active oc session with administrative permissions to the destination OpenShift cluster. See Getting started with the CLI . Procedure Create a ConfigMap that mounts the storage.conf file for all user workspaces. kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers/ data: storage.conf: | [storage] driver = "overlay" [storage.options.overlay] mount_program="/usr/bin/fuse-overlayfs" Set the necessary annotation in the spec.devEnvironments.workspacesPodAnnotations field of the CheCluster custom resource. kind: CheCluster apiVersion: org.eclipse.che/v2 spec: devEnvironments: workspacesPodAnnotations: io.kubernetes.cri-o.Devices: /dev/fuse Note For OpenShift versions before 4.15, the io.openshift.podman-fuse: "" annotation is also required. Verification steps Start a workspace and verify that the storage driver is overlay . Example output: | [
"spec: <component> : <property_to_configure> : <value>",
"dsc server:deploy --che-operator-cr-patch-yaml=che-operator-cr-patch.yaml --platform <chosen_platform>",
"oc get configmap che -o jsonpath='{.data. <configured_property> }' -n openshift-devspaces",
"oc edit checluster/devspaces -n openshift-devspaces",
"oc get configmap che -o jsonpath='{.data. <configured_property> }' -n openshift-devspaces",
"apiVersion: org.eclipse.che/v2 kind: CheCluster metadata: name: devspaces namespace: openshift-devspaces spec: components: {} devEnvironments: {} networking: {}",
"spec: components: devEnvironments: defaultNamespace: template: <workspace_namespace_template_>",
"kind: Namespace apiVersion: v1 metadata: name: <project_name> 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-namespace annotations: che.eclipse.org/username: <username>",
"apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap",
"apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap",
"apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: file che.eclipse.org/mount-path: /data data: ca.crt: <data content here>",
"apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap",
"apiVersion: v1 kind: Secret metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-data annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap",
"apiVersion: v1 kind: Secret metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <base64 encoded data content here>",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-data labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /data data: ca.crt: <data content here>",
"apiVersion: v1 kind: Secret metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-settings labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap",
"apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret data: mykey: myvalue",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/env-name: FOO_ENV che.eclipse.org/mount-as: env labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: myvalue",
"apiVersion: v1 kind: Secret metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-secret stringData: mykey: <data_content_here> otherkey: <data_content_here>",
"apiVersion: v1 kind: ConfigMap metadata: name: custom-settings annotations: che.eclipse.org/mount-as: env che.eclipse.org/mykey_env-name: FOO_ENV che.eclipse.org/otherkey_env-name: OTHER_ENV labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: devspaces-configmap data: mykey: <data content here> otherkey: <data content here>",
"apiVersion: org.eclipse.che/v2 kind: CheCluster spec: components: cheServer: extraProperties: CHE_LOGS_APPENDERS_IMPL: json",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: <deployment_name> 1",
"apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: devspaces-scaler namespace: openshift-devspaces spec: scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: devspaces minReplicas: 2 maxReplicas: 5 metrics: - type: Resource resource: name: cpu target: type: Utilization averageUtilization: 75",
"spec: devEnvironments: startTimeoutSeconds: 600",
"config: workspace: ignoredUnrecoverableEvents: - FailedScheduling",
"spec: devEnvironments: workspacesPodAnnotations: cluster-autoscaler.kubernetes.io/safe-to-evict: \"false\"",
"oc get pod <workspace_pod_name> -o jsonpath='{.metadata.annotations.cluster-autoscaler\\.kubernetes\\.io/safe-to-evict}' false",
"spec: devEnvironments: maxNumberOfWorkspacesPerUser: <kept_workspaces_limit> 1",
"oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"",
"oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfWorkspacesPerUser\": <kept_workspaces_limit> }}}' 2",
"spec: devEnvironments: maxNumberOfRunningWorkspacesPerUser: <running_workspaces_limit> 1",
"oc get checluster --all-namespaces -o=jsonpath=\"{.items[*].metadata.namespace}\"",
"oc patch checluster/devspaces -n openshift-devspaces \\ 1 --type='merge' -p '{\"spec\":{\"devEnvironments\":{\"maxNumberOfRunningWorkspacesPerUser\": <running_workspaces_limit> }}}' 2",
"oc create configmap che-git-self-signed-cert --from-file=ca.crt= <path_to_certificate> \\ 1 --from-literal=githost= <git_server_url> -n openshift-devspaces 2",
"oc label configmap che-git-self-signed-cert app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces",
"spec: devEnvironments: trustedCerts: gitTrustedCertsConfigMapName: che-git-self-signed-cert",
"[http \"https://10.33.177.118:3000\"] sslCAInfo = /etc/config/che-git-tls-creds/certificate",
"spec: devEnvironments: nodeSelector: key: value",
"spec: devEnvironments: tolerations: - effect: NoSchedule key: key value: value operator: Equal",
"spec: components: [...] pluginRegistry: openVSXURL: <your_open_vsx_registy> [...]",
"kind: ConfigMap apiVersion: v1 metadata: name: user-configmap namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:",
"kind: ConfigMap apiVersion: v1 metadata: name: user-settings-xml namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.m2 data: settings.xml: | <settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 https://maven.apache.org/xsd/settings-1.0.0.xsd\"> <localRepository>/home/user/.m2/repository</localRepository> <interactiveMode>true</interactiveMode> <offline>false</offline> </settings>",
"kind: Secret apiVersion: v1 metadata: name: user-secret namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config data:",
"kind: Secret apiVersion: v1 metadata: name: user-certificates namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /etc/pki/ca-trust/source/anchors stringData: trusted-certificates.crt: |",
"kind: Secret apiVersion: v1 metadata: name: user-env namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: env stringData: ENV_VAR_1: value_1 ENV_VAR_2: value_2",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config spec:",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: user-pvc namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config controller.devfile.io/mount-to-devworkspace: 'true' annotations: controller.devfile.io/mount-path: /home/user/data controller.devfile.io/read-only: 'true' spec: accessModes: - ReadWriteOnce resources: requests: storage: 5Gi volumeMode: Filesystem",
"(memory limit) * (number of images) * (number of nodes in the cluster)",
"git clone https://github.com/che-incubator/kubernetes-image-puller cd kubernetes-image-puller/deploy/openshift",
"oc new-project <k8s-image-puller>",
"oc process -f serviceaccount.yaml | oc apply -f - oc process -f configmap.yaml | oc apply -f - oc process -f app.yaml | oc apply -f -",
"oc get deployment,daemonset,pod --namespace <k8s-image-puller>",
"oc get configmap <kubernetes-image-puller> --output yaml",
"patch checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true } } } }'",
"patch checluster/devspaces --namespace openshift-devspaces --type='merge' --patch '{ \"spec\": { \"components\": { \"imagePuller\": { \"enable\": true, \"spec\": { \"images\": \" NAME-1 = IMAGE-1 ; NAME-2 = IMAGE-2 \" 1 } } } } }'",
"create namespace k8s-image-puller",
"apply -f - <<EOF apiVersion: che.eclipse.org/v1alpha1 kind: KubernetesImagePuller metadata: name: k8s-image-puller-images namespace: k8s-image-puller spec: images: \"__NAME-1__=__IMAGE-1__;__NAME-2__=__IMAGE-2__\" 1 EOF",
"spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'https://your-web-server/plugin.yaml'",
"package main import ( \"io/ioutil\" \"net/http\" \"go.uber.org/zap\" ) var logger *zap.SugaredLogger func event(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /event\") case \"POST\": logger.Info(\"POST /event\") } body, err := req.GetBody() if err != nil { logger.With(\"err\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got event\") } func activity(w http.ResponseWriter, req *http.Request) { switch req.Method { case \"GET\": logger.Info(\"GET /activity, doing nothing\") case \"POST\": logger.Info(\"POST /activity\") body, err := req.GetBody() if err != nil { logger.With(\"error\", err).Info(\"error getting body\") return } responseBody, err := ioutil.ReadAll(body) if err != nil { logger.With(\"error\", err).Info(\"error reading response body\") return } logger.With(\"body\", string(responseBody)).Info(\"got activity\") } } func main() { log, _ := zap.NewProduction() logger = log.Sugar() http.HandleFunc(\"/event\", event) http.HandleFunc(\"/activity\", activity) logger.Info(\"Added Handlers\") logger.Info(\"Starting to serve\") http.ListenAndServe(\":8080\", nil) }",
"git clone https://github.com/che-incubator/telemetry-server-example cd telemetry-server-example podman build -t registry/organization/telemetry-server-example:latest . podman push registry/organization/telemetry-server-example:latest",
"kubectl apply -f manifest_with_[ingress|route].yaml -n openshift-devspaces",
"mvn io.quarkus:quarkus-maven-plugin:2.7.1.Final:create -DprojectGroupId=mygroup -DprojectArtifactId=devworkspace-telemetry-example-plugin -DprojectVersion=1.0.0-SNAPSHOT",
"<!-- Required --> <dependency> <groupId>org.eclipse.che.incubator.workspace-telemetry</groupId> <artifactId>backend-base</artifactId> <version>LATEST VERSION FROM PREVIOUS STEP</version> </dependency> <!-- Used to make http requests to the telemetry server --> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-jackson</artifactId> </dependency>",
"<settings xmlns=\"http://maven.apache.org/SETTINGS/1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd\"> <servers> <server> <id>che-incubator</id> <username>YOUR GITHUB USERNAME</username> <password>YOUR GITHUB TOKEN</password> </server> </servers> <profiles> <profile> <id>github</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>central</id> <url>https://repo1.maven.org/maven2</url> <releases><enabled>true</enabled></releases> <snapshots><enabled>false</enabled></snapshots> </repository> <repository> <id>che-incubator</id> <url>https://maven.pkg.github.com/che-incubator/che-workspace-telemetry-client</url> </repository> </repositories> </profile> </profiles> </settings>",
"package org.my.group; import java.util.Optional; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import org.eclipse.che.incubator.workspace.telemetry.base.BaseConfiguration; import org.eclipse.microprofile.config.inject.ConfigProperty; @Dependent @Alternative public class MainConfiguration extends BaseConfiguration { @ConfigProperty(name = \"welcome.message\") 1 Optional<String> welcomeMessage; 2 }",
"package org.my.group; import java.util.HashMap; import java.util.Map; import javax.enterprise.context.Dependent; import javax.enterprise.inject.Alternative; import javax.inject.Inject; import org.eclipse.che.incubator.workspace.telemetry.base.AbstractAnalyticsManager; import org.eclipse.che.incubator.workspace.telemetry.base.AnalyticsEvent; import org.eclipse.che.incubator.workspace.telemetry.finder.DevWorkspaceFinder; import org.eclipse.che.incubator.workspace.telemetry.finder.UsernameFinder; import org.eclipse.microprofile.rest.client.inject.RestClient; import org.slf4j.Logger; import static org.slf4j.LoggerFactory.getLogger; @Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { private static final Logger LOG = getLogger(AbstractAnalyticsManager.class); public AnalyticsManager(MainConfiguration mainConfiguration, DevWorkspaceFinder devworkspaceFinder, UsernameFinder usernameFinder) { super(mainConfiguration, devworkspaceFinder, usernameFinder); mainConfiguration.welcomeMessage.ifPresentOrElse( 1 (str) -> LOG.info(\"The welcome message is: {}\", str), () -> LOG.info(\"No welcome message provided\") ); } @Override public boolean isEnabled() { return true; } @Override public void destroy() {} @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { LOG.info(\"The received event is: {}\", event); 2 } @Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) { } @Override public void onActivity() {} }",
"quarkus.arc.selected-alternatives=MainConfiguration,AnalyticsManager",
"spec: template: attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167'",
"mvn --settings=settings.xml quarkus:dev -Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}",
"INFO [org.ecl.che.inc.AnalyticsManager] (Quarkus Main Thread) No welcome message provided INFO [io.quarkus] (Quarkus Main Thread) devworkspace-telemetry-example-plugin 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.7.2.Final) started in 0.323s. Listening on: http://localhost:4167 INFO [io.quarkus] (Quarkus Main Thread) Profile dev activated. Live Coding activated. INFO [io.quarkus] (Quarkus Main Thread) Installed features: [cdi, kubernetes-client, rest-client, rest-client-jackson, resteasy, resteasy-jsonb, smallrye-context-propagation, smallrye-openapi, swagger-ui, vertx]",
"INFO [io.qua.dep.dev.RuntimeUpdatesProcessor] (Aesh InputStream Reader) Live reload disabled INFO [org.ecl.che.inc.AnalyticsManager] (executor-thread-2) The received event is: Edit Workspace File in Che",
"@Override public boolean isEnabled() { return true; }",
"package org.my.group; import java.util.Map; import javax.ws.rs.Consumes; import javax.ws.rs.POST; import javax.ws.rs.Path; import javax.ws.rs.core.MediaType; import javax.ws.rs.core.Response; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; @RegisterRestClient public interface TelemetryService { @POST @Path(\"/event\") 1 @Consumes(MediaType.APPLICATION_JSON) Response sendEvent(Map<String, Object> payload); }",
"org.my.group.TelemetryService/mp-rest/url=http://little-telemetry-server-che.apps-crc.testing",
"@Dependent @Alternative public class AnalyticsManager extends AbstractAnalyticsManager { @Inject @RestClient TelemetryService telemetryService; @Override public void onEvent(AnalyticsEvent event, String ownerId, String ip, String userAgent, String resolution, Map<String, Object> properties) { Map<String, Object> payload = new HashMap<String, Object>(properties); payload.put(\"event\", event); telemetryService.sendEvent(payload); }",
"@Override public void increaseDuration(AnalyticsEvent event, Map<String, Object> properties) {}",
"public class AnalyticsManager extends AbstractAnalyticsManager { private long inactiveTimeLimit = 60000 * 3; @Override public void onActivity() { if (System.currentTimeMillis() - lastEventTime >= inactiveTimeLimit) { onEvent(WORKSPACE_INACTIVE, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); } }",
"@Override public void destroy() { onEvent(WORKSPACE_STOPPED, lastOwnerId, lastIp, lastUserAgent, lastResolution, commonProperties); }",
"FROM registry.access.redhat.com/ubi8/openjdk-11:1.11 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' COPY --chown=185 target/quarkus-app/lib/ /deployments/lib/ COPY --chown=185 target/quarkus-app/*.jar /deployments/ COPY --chown=185 target/quarkus-app/app/ /deployments/app/ COPY --chown=185 target/quarkus-app/quarkus/ /deployments/quarkus/ EXPOSE 8080 USER 185 ENTRYPOINT [\"java\", \"-Dquarkus.http.host=0.0.0.0\", \"-Djava.util.logging.manager=org.jboss.logmanager.LogManager\", \"-Dquarkus.http.port=USD{DEVWORKSPACE_TELEMETRY_BACKEND_PORT}\", \"-jar\", \"/deployments/quarkus-run.jar\"]",
"mvn package && build -f src/main/docker/Dockerfile.jvm -t image:tag .",
"FROM registry.access.redhat.com/ubi8/ubi-minimal:8.5 WORKDIR /work/ RUN chown 1001 /work && chmod \"g+rwX\" /work && chown 1001:root /work COPY --chown=1001:root target/*-runner /work/application EXPOSE 8080 USER 1001 CMD [\"./application\", \"-Dquarkus.http.host=0.0.0.0\", \"-Dquarkus.http.port=USDDEVWORKSPACE_TELEMETRY_BACKEND_PORT}\"]",
"mvn package -Pnative -Dquarkus.native.container-build=true && build -f src/main/docker/Dockerfile.native -t image:tag .",
"schemaVersion: 2.1.0 metadata: name: devworkspace-telemetry-backend-plugin version: 0.0.1 description: A Demo telemetry backend displayName: Devworkspace Telemetry Backend components: - name: devworkspace-telemetry-backend-plugin attributes: workspaceEnv: - name: DEVWORKSPACE_TELEMETRY_BACKEND_PORT value: '4167' container: image: YOUR IMAGE 1 env: - name: WELCOME_MESSAGE 2 value: 'hello world!'",
"oc create configmap --from-file=plugin.yaml -n openshift-devspaces telemetry-plugin-yaml",
"kind: Deployment apiVersion: apps/v1 metadata: name: apache spec: replicas: 1 selector: matchLabels: app: apache template: metadata: labels: app: apache spec: volumes: - name: plugin-yaml configMap: name: telemetry-plugin-yaml defaultMode: 420 containers: - name: apache image: 'registry.redhat.io/rhscl/httpd-24-rhel7:latest' ports: - containerPort: 8080 protocol: TCP resources: {} volumeMounts: - name: plugin-yaml mountPath: /var/www/html strategy: type: RollingUpdate rollingUpdate: maxUnavailable: 25% maxSurge: 25% revisionHistoryLimit: 10 progressDeadlineSeconds: 600 --- kind: Service apiVersion: v1 metadata: name: apache spec: ports: - protocol: TCP port: 8080 targetPort: 8080 selector: app: apache type: ClusterIP --- kind: Route apiVersion: route.openshift.io/v1 metadata: name: apache spec: host: apache-che.apps-crc.testing to: kind: Service name: apache weight: 100 port: targetPort: 8080 wildcardPolicy: None",
"oc apply -f manifest.yaml",
"curl apache-che.apps-crc.testing/plugin.yaml",
"components: - name: telemetry-plugin plugin: uri: http://apache-che.apps-crc.testing/plugin.yaml",
"spec: devEnvironments: defaultPlugins: - editor: eclipse/che-theia/next 1 plugins: 2 - 'http://apache-che.apps-crc.testing/plugin.yaml'",
"spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \" <key1=value1,key2=value2> \" 1",
"spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"org.eclipse.che.api.workspace.server.WorkspaceManager=DEBUG\"",
"spec: components: cheServer: extraProperties: CHE_LOGGER_CONFIG: \"che.infra.request-logging=TRACE\"",
"dsc server:logs -d /home/user/che-logs/",
"Red Hat OpenShift Dev Spaces logs will be available in '/tmp/chectl-logs/1648575098344'",
"dsc server:logs -n my-namespace",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: devworkspace-controller namespace: openshift-devspaces 1 spec: endpoints: - bearerTokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token interval: 10s 2 port: metrics scheme: https tlsConfig: insecureSkipVerify: true namespaceSelector: matchNames: - openshift-operators selector: matchLabels: app.kubernetes.io/name: devworkspace-controller",
"oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true",
"oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'",
"oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring",
"oc create configmap grafana-dashboard-dwo --from-literal=dwo-dashboard.json=\"USD(curl https://raw.githubusercontent.com/devfile/devworkspace-operator/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed",
"oc label configmap grafana-dashboard-dwo console.openshift.io/dashboard=true -n openshift-config-managed",
"spec: components: metrics: enable: <boolean> 1",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: che-host namespace: openshift-devspaces 1 spec: endpoints: - interval: 10s 2 port: metrics scheme: http namespaceSelector: matchNames: - openshift-devspaces 3 selector: matchLabels: app.kubernetes.io/name: devspaces",
"kind: Role apiVersion: rbac.authorization.k8s.io/v1 metadata: name: prometheus-k8s namespace: openshift-devspaces 1 rules: - verbs: - get - list - watch apiGroups: - '' resources: - services - endpoints - pods",
"kind: RoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: view-devspaces-openshift-monitoring-prometheus-k8s namespace: openshift-devspaces 1 subjects: - kind: ServiceAccount name: prometheus-k8s namespace: openshift-monitoring roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: prometheus-k8s",
"oc label namespace openshift-devspaces openshift.io/cluster-monitoring=true",
"oc get pods -l app.kubernetes.io/name=prometheus -n openshift-monitoring -o=jsonpath='{.items[*].metadata.name}'",
"oc logs --tail=20 <prometheus_pod_name> -c prometheus -n openshift-monitoring",
"oc create configmap grafana-dashboard-devspaces-server --from-literal=devspaces-server-dashboard.json=\"USD(curl https://raw.githubusercontent.com/eclipse-che/che-server/main/docs/grafana/openshift-console-dashboard.json)\" -n openshift-config-managed",
"oc label configmap grafana-dashboard-devspaces-server console.openshift.io/dashboard=true -n openshift-config-managed",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-devspaces spec: ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-devspaces 1 podSelector: {} 2 policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-apiserver namespace: openshift-devspaces 1 spec: podSelector: matchLabels: app.kubernetes.io/name: devworkspace-webhook-server 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: kubernetes.io/metadata.name: openshift-apiserver policyTypes: - Ingress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-workspaces-namespaces namespace: openshift-devspaces 1 spec: podSelector: {} 2 ingress: - from: - podSelector: {} namespaceSelector: matchLabels: app.kubernetes.io/component: workspaces-namespace policyTypes: - Ingress",
"oc create project openshift-devspaces",
"oc create secret TLS <tls_secret_name> \\ 1 --key <key_file> \\ 2 --cert <cert_file> \\ 3 -n openshift-devspaces",
"oc label secret <tls_secret_name> \\ 1 app.kubernetes.io/part-of=che.eclipse.org -n openshift-devspaces",
"spec: networking: hostname: <hostname> 1 tlsSecretName: <secret> 2",
"cat ca-cert-for-devspaces-*.pem | tr -d '\\r' > custom-ca-certificates.pem",
"oc create configmap custom-ca-certificates --from-file=custom-ca-certificates.pem --namespace=openshift-devspaces",
"oc label configmap custom-ca-certificates app.kubernetes.io/component=ca-bundle app.kubernetes.io/part-of=che.eclipse.org --namespace=openshift-devspaces",
"oc get configmap --namespace=openshift-devspaces --output='jsonpath={.items[0:].data.custom-ca-certificates\\.pem}' --selector=app.kubernetes.io/component=ca-bundle,app.kubernetes.io/part-of=che.eclipse.org",
"oc get pod --selector=app.kubernetes.io/component=devspaces --output='jsonpath={.items[0].spec.volumes[0:].configMap.name}' --namespace=openshift-devspaces | grep ca-certs-merged",
"oc exec -t deploy/devspaces --namespace=openshift-devspaces -- cat /public-certs/custom-ca-certificates.pem",
"oc logs deploy/devspaces --namespace=openshift-devspaces | grep custom-ca-certificates.pem",
"for certificate in ca-cert*.pem ; do openssl x509 -in USDcertificate -digest -sha256 -fingerprint -noout | cut -d= -f2; done",
"oc exec -t deploy/devspaces --namespace=openshift-devspaces -- keytool -list -keystore /home/user/cacerts | grep --after-context=1 custom-ca-certificates.pem",
"oc get configmap che-trusted-ca-certs --namespace= <workspace_namespace> --output='jsonpath={.data.custom-ca-certificates\\.custom-ca-certificates\\.pem}'",
"oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.volumes[0:].configMap.name}' | grep che-trusted-ca-certs",
"oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].spec.containers[0:]}' | jq 'select (.volumeMounts[].name == \"che-trusted-ca-certs\") | .name'",
"oc get pod --namespace= <workspace_namespace> --selector='controller.devfile.io/devworkspace_name= <workspace_name> ' --output='jsonpath={.items[0:].metadata.name}' \\",
"oc exec <workspace_pod_name> --namespace= <workspace_namespace> -- cat /public-certs/custom-ca-certificates.custom-ca-certificates.pem",
"spec: networking: labels: <labels> 1 domain: <domain> 2 annotations: <annotations> 3",
"spec: devEnvironments: storage: perUserStrategyPvcConfig: claimSize: <claim_size> 1 storageClass: <storage_class_name> 2 perWorkspaceStrategyPvcConfig: claimSize: <claim_size> 3 storageClass: <storage_class_name> 4 pvcStrategy: <pvc_strategy> 5",
"spec: devEnvironments: storage: pvc: pvcStrategy: 'per-user' 1",
"per-user: 10Gi",
"per-workspace: 5Gi",
"spec: devEnvironments: storage: pvc: pvcStrategy: ' <strategy_name> ' 1 perUserStrategyPvcConfig: 2 claimSize: <resource_quantity> 3 perWorkspaceStrategyPvcConfig: 4 claimSize: <resource_quantity> 5",
"cat > my-samples.json <<EOF [ { \"displayName\": \" <display_name> \", 1 \"description\": \" <description> \", 2 \"tags\": <tags> , 3 \"url\": \" <url> \", 4 \"icon\": { \"base64data\": \" <base64data> \", 5 \"mediatype\": \" <mediatype> \" 6 } } ] EOF",
"create configmap getting-started-samples --from-file=my-samples.json -n openshift-devspaces",
"label configmap getting-started-samples app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=getting-started-samples -n openshift-devspaces",
"Version of the devile schema schemaVersion: 2.2.2 Meta information of the editor metadata: # (MANDATORY) The editor name # Must consist of lower case alphanumeric characters, '-' or '.' name: editor-name displayName: Display Name description: Run Editor Foo on top of Eclipse Che # (OPTIONAL) Array of tags of the current editor. The Tech-Preview tag means the option is considered experimental and is not recommended for production environments. While it can include new features and improvements, it may still contain bugs or undergo significant changes before reaching a stable version. tags: - Tech-Preview # Additional attributes attributes: title: This is my editor # (MANDATORY) The publisher name publisher: publisher # (MANDATORY) The editor version version: version repository: https://github.com/editor/repository/ firstPublicationDate: '2024-01-01' iconMediatype: image/svg+xml iconData: | <icon-content> List of editor components components: # Name of the component - name: che-code-injector # Configuration of devworkspace-related container container: # Image of the container image: 'quay.io/che-incubator/che-code:insiders' # The command to run in the dockerimage component instead of the default one provided in the image command: - /entrypoint-init-container.sh # (OPTIONAL) List of volumes mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # The path of the mount path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 256Mi # (OPTIONAL) The memory request of the container memoryRequest: 32Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # Name of the component - name: che-code-runtime-description # (OPTIONAL) Map of implementation-dependant free-form YAML attributes attributes: # The component within the architecture app.kubernetes.io/component: che-code-runtime # The name of a higher level application this one is part of app.kubernetes.io/part-of: che-code.eclipse.org # Defines a container component as a \"container contribution\". If a flattened DevWorkspace has a container component with the merge-contribution attribute, then any container contributions are merged into that container component controller.devfile.io/container-contribution: true container: # Can be a dummy image because the component is expected to be injected into workspace dev component image: quay.io/devfile/universal-developer-image:latest # (OPTIONAL) List of volume mounts that should be mounted in this container volumeMounts: # The name of the mount - name: checode # (OPTIONAL) The path in the component container where the volume should be mounted. If no path is defined, the default path is the is /<name> path: /checode # (OPTIONAL) The memory limit of the container memoryLimit: 1024Mi # (OPTIONAL) The memory request of the container memoryRequest: 256Mi # (OPTIONAL) The CPU limit of the container cpuLimit: 500m # (OPTIONAL) The CPU request of the container cpuRequest: 30m # (OPTIONAL) Environment variables used in this container env: - name: ENV_NAME value: value # Component endpoints endpoints: # Name of the editor - name: che-code # (OPTIONAL) Map of implementation-dependant string-based free-form attributes attributes: # Type of the endpoint. You can only set its value to main, indicating that the endpoint should be used as the mainUrl in the workspace status (i.e. 
it should be the URL used to access the editor in this context) type: main # An attribute that instructs the service to automatically redirect the unauthenticated requests for current user authentication. Setting this attribute to true has security consequences because it makes Cross-site request forgery (CSRF) attacks possible. The default value of the attribute is false. cookiesAuthEnabled: true # Defines an endpoint as \"discoverable\", meaning that a service should be created using the endpoint name (i.e. instead of generating a service name for all endpoints, this endpoint should be statically accessible) discoverable: false # Used to secure the endpoint with authorization on OpenShift, so that not anyone on the cluster can access the endpoint, the attribute enables authentication. urlRewriteSupported: true # Port number to be used within the container component targetPort: 3100 # (OPTIONAL) Describes how the endpoint should be exposed on the network (public, internal, none) exposure: public # (OPTIONAL) Describes whether the endpoint should be secured and protected by some authentication process secure: true # (OPTIONAL) Describes the application and transport protocols of the traffic that will go through this endpoint protocol: https # Mandatory name that allows referencing the component from other elements - name: checode # (OPTIONAL) Allows specifying the definition of a volume shared by several other components. Ephemeral volumes are not stored persistently across restarts. Defaults to false volume: {ephemeral: true} (OPTIONAL) Bindings of commands to events. Each command is referred-to by its name events: # IDs of commands that should be executed before the devworkspace start. These commands would typically be executed in an init container preStart: - init-container-command # IDs of commands that should be executed after the devworkspace has completely started. In the case of Che-Code, these commands should be executed after all plugins and extensions have started, including project cloning. This means that those commands are not triggered until the user opens the IDE within the browser postStart: - init-che-code-command (OPTIONAL) Predefined, ready-to-use, devworkspace-related commands commands: # Mandatory identifier that allows referencing this command - id: init-container-command apply: # Describes the component for the apply command component: che-code-injector # Mandatory identifier that allows referencing this command - id: init-che-code-command # CLI Command executed in an existing component container exec: # Describes component for the exec command component: che-code-runtime-description # The actual command-line string commandLine: 'nohup /checode/entrypoint-volume.sh > /checode/entrypoint-logs.txt 2>&1 &'",
"create configmap my-editor-definition --from-file=my-editor-definition-devfile.yaml -n openshift-devspaces",
"label configmap my-editor-definition app.kubernetes.io/part-of=che.eclipse.org app.kubernetes.io/component=editor-definition -n openshift-devspaces",
"apply -f - <<EOF apiVersion: v1 kind: Secret metadata: name: devspaces-dashboard-customization namespace: openshift-devspaces annotations: che.eclipse.org/mount-as: subpath che.eclipse.org/mount-path: /public/dashboard/assets/branding labels: app.kubernetes.io/component: devspaces-dashboard-secret app.kubernetes.io/part-of: che.eclipse.org data: loader.svg: <Base64_encoded_content_of_the_image> 1 type: Opaque EOF",
"spec: components: cheServer: extraProperties: CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN: \"true\"",
"kind: Secret apiVersion: v1 metadata: name: github-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: github che.eclipse.org/scm-server-endpoint: <github_server_url> 2 che.eclipse.org/scm-github-disable-subdomain-isolation: 'false' 3 type: Opaque stringData: id: <GitHub_OAuth_Client_ID> 4 secret: <GitHub_OAuth_Client_Secret> 5",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"kind: Secret apiVersion: v1 metadata: name: gitlab-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: gitlab che.eclipse.org/scm-server-endpoint: <gitlab_server_url> 2 type: Opaque stringData: id: <GitLab_Application_ID> 3 secret: <GitLab_Client_Secret> 4",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: id: <Bitbucket_Client_ID> 3 secret: <Bitbucket_Client_Secret> 4",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: bitbucket type: Opaque stringData: id: <Bitbucket_Oauth_Consumer_Key> 2 secret: <Bitbucket_Oauth_Consumer_Secret> 3",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"openssl genrsa -out private.pem 2048 && openssl pkcs8 -topk8 -inform pem -outform pem -nocrypt -in private.pem -out privatepkcs8.pem && cat privatepkcs8.pem | sed 's/-----BEGIN PRIVATE KEY-----//g' | sed 's/-----END PRIVATE KEY-----//g' | tr -d '\\n' > privatepkcs8-stripped.pem && openssl rsa -in private.pem -pubout > public.pub && cat public.pub | sed 's/-----BEGIN PUBLIC KEY-----//g' | sed 's/-----END PUBLIC KEY-----//g' | tr -d '\\n' > public-stripped.pub && openssl rand -base64 24 > bitbucket-consumer-key && openssl rand -base64 24 > bitbucket-shared-secret",
"kind: Secret apiVersion: v1 metadata: name: bitbucket-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/component: oauth-scm-configuration app.kubernetes.io/part-of: che.eclipse.org annotations: che.eclipse.org/oauth-scm-server: bitbucket che.eclipse.org/scm-server-endpoint: <bitbucket_server_url> 2 type: Opaque stringData: private.key: <Content_of_privatepkcs8-stripped.pem> 3 consumer.key: <Content_of_bitbucket-consumer-key> 4 shared_secret: <Content_of_bitbucket-shared-secret> 5",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"kind: Secret apiVersion: v1 metadata: name: azure-devops-oauth-config namespace: openshift-devspaces 1 labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: oauth-scm-configuration annotations: che.eclipse.org/oauth-scm-server: azure-devops type: Opaque stringData: id: <Microsoft_Azure_DevOps_Services_OAuth_App_ID> 2 secret: <Microsoft_Azure_DevOps_Services_OAuth_Client_Secret> 3",
"oc apply -f - <<EOF <Secret_prepared_in_the_previous_step> EOF",
"USER_ROLES= <name> 1",
"OPERATOR_NAMESPACE=USD(oc get pods -l app.kubernetes.io/component=devspaces-operator -o jsonpath={\".items[0].metadata.namespace\"} --all-namespaces)",
"kubectl apply -f - <<EOF kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org rules: - verbs: - <verbs> 1 apiGroups: - <apiGroups> 2 resources: - <resources> 3 EOF",
"kubectl apply -f - <<EOF kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1 metadata: name: USD{USER_ROLES} labels: app.kubernetes.io/part-of: che.eclipse.org subjects: - kind: ServiceAccount name: devspaces-operator namespace: USD{OPERATOR_NAMESPACE} roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: USD{USER_ROLES} EOF",
"kubectl patch checluster devspaces --patch '{\"spec\": {\"components\": {\"cheServer\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n openshift-devspaces",
"kubectl patch checluster devspaces --patch '{\"spec\": {\"devEnvironments\": {\"user\": {\"clusterRoles\": [\"'USD{USER_ROLES}'\"]}}}}' --type=merge -n openshift-devspaces",
"spec: networking: auth: advancedAuthorization: allowUsers: - <allow_users> 1 allowGroups: - <allow_groups> 2 denyUsers: - <deny_users> 3 denyGroups: - <deny_groups> 4",
"oc get users",
"oc delete user <username>",
"NODE_ROLE=master",
"NODE_ROLE=worker",
"VERSION=4.12.0",
"cat << EOF | butane | oc apply -f - variant: openshift version: USD{VERSION} metadata: labels: machineconfiguration.openshift.io/role: USD{NODE_ROLE} name: 99-podman-dev-fuse-USD{NODE_ROLE} storage: files: - path: /etc/crio/crio.conf.d/99-podman-fuse 1 mode: 0644 overwrite: true contents: 2 inline: | [crio.runtime.workloads.podman-fuse] 3 activation_annotation = \"io.openshift.podman-fuse\" 4 allowed_annotations = [ \"io.kubernetes.cri-o.Devices\" 5 ] [crio.runtime] allowed_devices = [\"/dev/fuse\"] 6 EOF",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.27.9 ip-10-0-136-243.ec2.internal Ready master 34m v1.27.9 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.27.9 ip-10-0-142-249.ec2.internal Ready master 34m v1.27.9 ip-10-0-153-11.ec2.internal Ready worker 28m v1.27.9 ip-10-0-153-150.ec2.internal Ready master 34m v1.27.9",
"io.openshift.podman-fuse: '' io.kubernetes.cri-o.Devices: /dev/fuse",
"oc get nodes",
"oc debug node/ <nodename>",
"sh-4.4# stat /host/etc/crio/crio.conf.d/99-podman-fuse",
"kind: ConfigMap apiVersion: v1 metadata: name: fuse-overlay namespace: openshift-devspaces labels: app.kubernetes.io/part-of: che.eclipse.org app.kubernetes.io/component: workspaces-config annotations: controller.devfile.io/mount-as: subpath controller.devfile.io/mount-path: /home/user/.config/containers/ data: storage.conf: | [storage] driver = \"overlay\" [storage.options.overlay] mount_program=\"/usr/bin/fuse-overlayfs\"",
"kind: CheCluster apiVersion: org.eclipse.che/v2 spec: devEnvironments: workspacesPodAnnotations: io.kubernetes.cri-o.Devices: /dev/fuse",
"podman info | grep overlay",
"graphDriverName: overlay overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.12-1.module+el8.9.0+20326+387084d0.x86_64 fuse-overlayfs: version 1.12 Backing Filesystem: overlayfs"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/administration_guide/configuring-devspaces |
Preface | Preface You can install Red Hat Developer Hub on Google Kubernetes Engine (GKE) using one of the following methods: The Red Hat Developer Hub Operator The Red Hat Developer Hub Helm chart | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_google_kubernetes_engine/pr01 |
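For orientation, the Helm chart route can be sketched as follows. The repository URL is the public OpenShift Helm chart repository; the repository alias, chart name, release name, and namespace used here are assumptions for illustration, so verify them against the chart index and the installation chapters before relying on them.
# Hypothetical alias ("openshift-helm-charts") and chart name ("redhat-developer-hub"); confirm before use
helm repo add openshift-helm-charts https://charts.openshift.io
helm repo update
helm install my-rhdh openshift-helm-charts/redhat-developer-hub --namespace rhdh --create-namespace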
Chapter 3. User-provisioned infrastructure | Chapter 3. User-provisioned infrastructure 3.1. vSphere installation requirements for user-provisioned infrastructure Before you begin an installation on infrastructure that you provision, be sure that your vSphere environment meets the following installation requirements. 3.1.1. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Both of these releases support Container Storage Interface (CSI) migration, which is enabled by default on OpenShift Container Platform 4.18. You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following tables: Table 3.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Table 3.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift Container Platform version 4.13 and later are based on the RHEL 9.2 host operating system, which raised the microarchitecture requirements to x86-64-v2. See Architectures in the RHEL documentation. Important To ensure the best performance conditions for your cluster workloads that operate on Oracle(R) Cloud Infrastructure (OCI) and on the Oracle(R) Cloud VMware Solution (OCVS) service, ensure volume performance units (VPUs) for your block volume are sized for your workloads. The following list provides some guidance in selecting the VPUs needed for specific performance needs: Test or proof of concept environment: 100 GB, and 20 to 30 VPUs. Base-production environment: 500 GB, and 60 VPUs. Heavy-use production environment: More than 500 GB, and 100 or more VPUs. Consider allocating additional VPUs to give enough capacity for updates and scaling activities. See Block Volume Performance Levels (Oracle documentation) . 3.1.2. 
VMware vSphere CSI Driver Operator requirements To install the vSphere Container Storage Interface (CSI) Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. You can create a custom role for the Container Storage Interface (CSI) driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator. The custom role can include privilege sets that assign a minimum set of permissions to each vSphere object. This means that the CSI driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator can establish a basic interaction with these objects. Important Installing an OpenShift Container Platform cluster in a vCenter is tested against a full list of privileges as described in the "Required vCenter account privileges" section. By adhering to the full list of privileges, you can reduce the possibility of unexpected and unsupported behaviors that might occur when creating a custom role with a set of restricted privileges. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . Minimum permissions for the storage components 3.1.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 3.1.3.1. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that you provided, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, your vSphere account must include privileges for reading and creating the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. Example 3.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename Host.Config.Storage VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter data center If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. See the "Minimum permissions for the Machine API" table. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 3.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter data center If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 3.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter data center Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Minimum required vCenter account privileges After you create a custom role and assign privileges to it, you can create permissions by selecting specific vSphere objects and then assigning the custom role to a user or group for each object. Before you create permissions or request for the creation of permissions for a vSphere object, determine what minimum permissions apply to the vSphere object. By doing this task, you can ensure a basic interaction exists between a vSphere object and OpenShift Container Platform architecture. Important If you create a custom role and you do not assign privileges to it, the vSphere Server by default assigns a Read Only role to the custom role. 
Note that for the cloud provider API, the custom role only needs to inherit the privileges of the Read Only role. Consider creating a custom role when an account with global administrative privileges does not meet your needs. Important Accounts that are not configured with the required privileges are unsupported. Installing an OpenShift Container Platform cluster in a vCenter is tested against a full list of privileges as described in the "Required vCenter account privileges" section. By adhering to the full list of privileges, you can reduce the possibility of unexpected behaviors that might occur when creating a custom role with a restricted set of privileges. The following tables list the minimum permissions for a vSphere object that interacts with specific OpenShift Container Platform architecture. Example 3.4. Minimum permissions for post-installation management of components vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Host.Config.Storage Resource.AssignVMToPool vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Host.Config.Storage vSphere datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate vSphere vCenter data center If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. If your cluster does use the Machine API and you want to set the minimum set of permissions for the API, see the "Minimum permissions for the Machine API" table. Resource.AssignVMToPool VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Provisioning.DeployTemplate Example 3.5. 
Minimum permissions for the storage components vSphere object for role When required Required privileges vSphere vCenter Always Cns.Searchable InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Host.Config.Storage vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Host.Config.Storage vSphere datastore Always Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Read Only Virtual Machine Folder Always VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice vSphere vCenter data center If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. If your cluster does use the Machine API and you want to set the minimum set of permissions for the API, see the "Minimum permissions for the Machine API" table. VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddRemoveDevice Example 3.6. Minimum permissions for the Machine API vSphere object for role When required Required privileges vSphere vCenter Always InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If you intend to create VMs in the cluster root Resource.AssignVMToPool vSphere vCenter Resource Pool If you provide an existing resource pool in the install-config.yaml file Read Only vSphere datastore Always Datastore.AllocateSpace Datastore.Browse vSphere Port Group Always Network.Assign Virtual Machine Folder Always VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.Memory VirtualMachine.Config.Settings VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate vSphere vCenter data center If the installation program creates the virtual machine folder. For user-provisioned infrastructure, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. Resource.AssignVMToPool VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Provisioning.DeployTemplate Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". Cluster resources When you deploy an OpenShift Container Platform cluster that uses infrastructure that you provided, you must create the following resources in your vCenter instance: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you specify nodes or groups of nodes on different VLANs for a cluster that you want to install on user-provisioned infrastructure, you must ensure that machines in your cluster meet the requirements outlined in the "Network connectivity requirements" section of the Networking requirements for user-provisioned infrastructure document. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. 
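The folder, tag category, and tag listed under "Cluster resources" earlier in this section can be created ahead of time. The following sketch uses the govc CLI and is illustrative only: the datacenter, folder, category, and tag names are placeholder values, not names required by the installation program.

# Assumes govc is installed and GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are exported.
# Create the virtual machine folder that will hold the cluster VMs.
govc folder.create /datacenter-1/vm/ocp4

# Create a tag category and a tag that you can later attach to the cluster VMs
# so that they are easy to identify and clean up as a group.
govc tags.category.create -d "OpenShift cluster resources" openshift-ocp4
govc tags.create -c openshift-ocp4 ocp4-cluster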
Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.3. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME (Canonical Name) record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Additional resources Creating a compute machine set on vSphere 3.1.3.2. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 3.4. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap and control plane machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. However, the compute machines can run either Red Hat Enterprise Linux CoreOS (RHCOS) or Red Hat Enterprise Linux (RHEL) 8.6 and later. Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 3.1.3.3. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 3.5. Minimum resource requirements Machine Operating System vCPU Virtual RAM Storage Input/Output Per Second (IOPS) [1] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [2] 2 8 GB 100 GB 300 OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration.
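One way to gauge whether a candidate datastore can meet the 10 ms p99 fsync target is the synchronous-write test commonly used for etcd, run from a test VM whose disk is backed by that datastore. The fio invocation below is a sketch: the directory path is a placeholder and the test only approximates the I/O pattern of etcd.

# Requires the fio package on a RHEL test VM. The job writes about 22 MiB in
# 2300-byte synchronous chunks, calling fdatasync after every write, and reports
# fsync/fdatasync latency percentiles; the 99th percentile should be 10 ms or less.
mkdir -p /var/lib/etcd-test
fio --rw=write --ioengine=sync --fdatasync=1 \
    --directory=/var/lib/etcd-test --size=22m --bs=2300 \
    --name=etcd-fsync-check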
Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Important Do not use memory ballooning in OpenShift Container Platform clusters. Memory ballooning can cause cluster-wide instabilities, service degradation, or other undefined behaviors. Control plane machines should have committed memory equal to or greater than the published minimum resource requirements for a cluster installation. Compute machines should have a minimum reservation equal to or greater than the published minimum resource requirements for a cluster installation. These minimum CPU and memory requirements do not account for resources required by user workloads. For more information, see the Red Hat Knowledgebase article Memory Ballooning and OpenShift . Additional resources Optimizing storage 3.1.3.4. Requirements for encrypting virtual machines You can encrypt your virtual machines prior to installing OpenShift Container Platform 4.18 by meeting the following requirements. You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account which has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . When you deploy the OVF template in the section titled "Installing RHCOS and starting the OpenShift Container Platform bootstrap process", select the option to "Encrypt this virtual machine" when you are selecting storage for the OVF template. After completing cluster installation, create a storage class that uses the encryption storage policy you used to encrypt the virtual machines. Additional resources Creating an encrypted storage class 3.1.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. 
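Approving the pending requests is typically done with the oc client once a kubeconfig for the new cluster is available. A minimal example follows; the CSR name is a placeholder.

# List certificate signing requests and look for entries in the Pending state.
oc get csr

# Approve a single CSR by name (placeholder name shown).
oc adm certificate approve csr-8b2br

# Approve every CSR that has not yet been approved or denied.
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
  | xargs --no-run-if-empty oc adm certificate approve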
The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 3.1.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 3.1.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 3.1.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 3.6. 
Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.7. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.8. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports Ethernet adaptor hardware address requirements When provisioning VMs for the cluster, the ethernet interfaces configured for each VM must use a MAC address from the VMware Organizationally Unique Identifier (OUI) allocation ranges: 00:05:69:00:00:00 to 00:05:69:FF:FF:FF 00:0c:29:00:00:00 to 00:0c:29:FF:FF:FF 00:1c:14:00:00:00 to 00:1c:14:FF:FF:FF 00:50:56:00:00:00 to 00:50:56:FF:FF:FF If a MAC address outside the VMware OUI is used, the cluster installation will not succeed. NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 3.1.3.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . 
Table 3.9. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 3.1.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 3.7. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. 
IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 3.8. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 3.1.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. 
Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 3.10. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 3.11. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 3.1.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 3.9. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 3.2. Preparing to install a cluster using user-provisioned infrastructure You prepare to install an OpenShift Container Platform cluster on vSphere by completing the following steps: Downloading the installation program. Note If you are installing in a disconnected environment, you extract the installation program from the mirrored content. 
For more information, see Mirroring images for a disconnected installation . Installing the OpenShift CLI ( oc ). Note If you are installing in a disconnected environment, install oc to the mirror host. Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster's nodes after it is deployed. Preparing the user-provisioned infrastructure. Validating DNS resolution. 3.2.1. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.2.2. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.2.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. 
View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. If you install a cluster on infrastructure that you provision, you must provide the key to the installation program. 3.2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. 
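For example, if you use ISC dhcpd, one host declaration per node can satisfy the persistent IP address, hostname, default gateway, DNS server, and NTP server requirements described in these steps. The MAC address, IP addresses, and names below are placeholders chosen to match the examples in this document:

subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;              # default gateway provided in the lease
    option domain-name-servers 192.168.1.5;  # persistent DNS server address
    option ntp-servers 192.168.1.5;          # NTP server discoverable by DHCP

    host control-plane0 {
        hardware ethernet 00:50:56:00:10:01; # MAC address from a VMware OUI range
        fixed-address 192.168.1.97;          # persistent IP address for this node
        option host-name "control-plane0.ocp4.example.com";
    }
    # Add a similar host block for the bootstrap machine and every other node.
}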
Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 3.2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. 
Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 3.3. Installing a cluster on vSphere with user-provisioned infrastructure In OpenShift Container Platform version 4.18, you can install a cluster on VMware vSphere infrastructure that you provision. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. 
Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 3.3.1. Prerequisites You have completed the tasks in Preparing to install a cluster using user-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. Each data center can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere data center. If you want to deploy a cluster to multiple vSphere data centers, you must create an installation configuration file that enables the region and zone feature. 
The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere data centers and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single data center. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter data center. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter data center, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a data center, which represents a zone. After you create the tags, you must attach each tag to its respective data center or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere data centers running in a single VMware vCenter.
Data center (region)   Cluster (zone)   Tags
us-east                us-east-1        us-east-1a, us-east-1b
us-east                us-east-2        us-east-2a, us-east-2b
us-west                us-west-1        us-west-1a, us-west-1b
us-west                us-west-2        us-west-2a, us-west-2b
Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 3.3.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . If you are installing a three-node cluster, modify the install-config.yaml file by setting the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on vSphere".
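For example, a minimal sketch of making that change from the command line, assuming the mikefarah yq v4 utility is available; editing the file in a text editor works just as well:

# Set the compute replica count to 0 for a three-node cluster, then confirm the change.
yq -i '.compute[0].replicas = 0' <installation_directory>/install-config.yaml
yq '.compute[0].replicas' <installation_directory>/install-config.yaml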
Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters 3.3.4.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<data_center>/host/<cluster>" datacenter: <data_center> 8 datastore: "/<data_center>/datastore/<datastore>" 9 networks: - <VM_Network_name> resourcePool: "/<data_center>/host/<cluster>/Resources/<resourcePool>" 10 folder: "/<data_center_name>/vm/<folder_name>/<subfolder_name>" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections define a single machine pool, so only one control plane is used. OpenShift Container Platform does not support defining multiple compute pools. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The vSphere data center. 9 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. 
If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 10 Optional: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_data_center/host/example_cluster/Resources . 11 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the data center virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 12 The password associated with the vSphere user. 13 The fully-qualified hostname or IP address of the vCenter server. Important The Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 3.3.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. 
By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.3.4.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. 
The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere data center (region) where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter data center object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1> Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
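Putting the preceding govc commands together, the following sketch shows one region ( us-east ) that contains a single zone ( us-east-1 ); the tag names, data center path, and cluster path are illustrative, and the govc connection environment variables (such as GOVC_URL ) are assumed to be configured:

# Create the required tag categories.
govc tags.category.create -d "OpenShift region" openshift-region
govc tags.category.create -d "OpenShift zone" openshift-zone
# Create one region tag per data center and one zone tag per vSphere cluster.
govc tags.create -c openshift-region us-east
govc tags.create -c openshift-zone us-east-1
# Attach the tags to the corresponding vCenter objects.
govc tags.attach -c openshift-region us-east /dc-us-east
govc tags.attach -c openshift-zone us-east-1 /dc-us-east/host/cluster-1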
Sample install-config.yaml file with multiple data centers defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: "/<data_center_1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<data_center_1>/datastore/<datastore1>" resourcePool: "/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<data_center_1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: "/<data_center_2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<data_center_2>/datastore/<datastore2>" resourcePool: "/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<data_center_2>/vm/<folder2>" --- 3.3.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. 
You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 3.3.6. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 3.3.7. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. 
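For example, a minimal sketch of staging the file on a web server and confirming that it is reachable; the server name and web root are illustrative:

# Copy the bootstrap Ignition config to a web server that the bootstrap machine can reach.
scp <installation_directory>/bootstrap.ign webserver.example.com:/var/www/html/bootstrap.ign
# Confirm that the file is served; expect an HTTP 200 response and a Content-Length that matches the file size.
curl -sI http://webserver.example.com/bootstrap.ign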
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your data center to store your VMs. Click the VMs and Templates view. Right-click the name of your data center. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. 
If you want to encrypt your virtual machines, select Encrypt this virtual machine . See the section titled "Requirements for encrypting virtual machines" for more information. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. 
guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 3.3.8. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Note If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . 
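If you prefer to script these customizations instead of using the vSphere Client, the following sketch applies the same parameters with the govc command; the VM name and file path are illustrative, and very large Ignition files might exceed command-line argument limits, in which case the vSphere Client or an API client is more practical:

# Set the Ignition and disk parameters on a cloned compute VM before powering it on.
govc vm.change -vm "compute-2" \
  -e "guestinfo.ignition.config.data=$(cat <installation_directory>/worker.64)" \
  -e "guestinfo.ignition.config.data.encoding=base64" \
  -e "disk.EnableUUID=TRUE"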
steps Continue to create more compute machines for your cluster. 3.3.9. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... 
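The next step uses the butane utility to turn a Butane config into a machine config manifest. If butane is not already installed, the following sketch shows one way to obtain the Linux binary; verify the download URL and architecture against the official Red Hat mirror before you use it:

# Download the butane binary and place it on your PATH (URL assumed current for x86_64 Linux).
curl -L https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane
chmod +x butane
sudo mv butane /usr/local/bin/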
Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 3.3.10. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . 
Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.3.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.3.12. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
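During a user-provisioned installation, it is common to approve pending CSRs in a loop until all nodes have joined. A minimal sketch follows, assuming a five-node cluster (three control plane and two compute machines); review CSRs before approving them if your environment requires stricter controls:

# Approve pending CSRs every 30 seconds until all five nodes report Ready.
until [ "$(oc get nodes --no-headers 2>/dev/null | grep -cw Ready)" -ge 5 ]; do
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve
  sleep 30
done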
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.3.13. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 3.3.13.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.3.13.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.3.13.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. 
This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.3.13.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.3.13.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode.
Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.3.14. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... 
The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 3.3.15. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. 
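Optionally, you can confirm that vSphere recorded the anti-affinity rule before you rely on it. This is a hedged sketch rather than part of the documented procedure: it assumes that your govc version provides the cluster.rule.ls subcommand and that it accepts the same -dc and -cluster flags as the commands above:
USD govc cluster.rule.ls -dc MyDatacenter -cluster MyCluster -l
The output should list openshift4-control-plane-group together with the control plane VM names. The migration of the control plane VMs itself happens in the background.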
This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause a brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 3.3.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.3.17. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. Optional: If you created encrypted virtual machines, create an encrypted storage class . 3.4. Installing a cluster on vSphere with network customizations In OpenShift Container Platform version 4.18, you can install a cluster on VMware vSphere infrastructure that you provision with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 3.4.1. Prerequisites You have completed the tasks in Preparing to install a cluster using user-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes.
You read the documentation on selecting a cluster installation method and preparing it for users . Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. Verify that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.4.3. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. Each data center can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere data center. If you want to deploy a cluster to multiple vSphere data centers, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere data centers and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of single data center. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter data center. 
You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter data center, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a data center, which represents a zone. After you create the tags, you must attach each tag to its respective data center or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere data centers running in a single VMware vCenter. Data center (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 3.4.4. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Important The Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters 3.4.4.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<data_center>/host/<cluster>" datacenter: <data_center> 8 datastore: "/<data_center>/datastore/<datastore>" 9 networks: - <VM_Network_name> resourcePool: "/<data_center>/host/<cluster>/Resources/<resourcePool>" 10 folder: "/<data_center_name>/vm/<folder_name>/<subfolder_name>" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: administrator@vsphere.local diskType: thin 14 fips: false 15 pullSecret: '{"auths": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections define a single machine pool, so only one control plane is used. OpenShift Container Platform does not support defining multiple compute pools. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The vSphere data center. 9 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 10 Optional: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> .
If you do not specify a value, resources are installed in the root of the cluster /example_data_center/host/example_cluster/Resources . 11 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the data center virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 12 The password associated with the vSphere user. 13 The fully-qualified hostname or IP address of the vCenter server. Important The Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 16 The pull secret that you obtained from OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 3.4.4.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. 
For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.4.4.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. 
The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter data center object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1> Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple data centers defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: "/<data_center_1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<data_center_1>/datastore/<datastore1>" resourcePool: "/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<data_center_1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: "/<data_center_2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<data_center_2>/datastore/<datastore2>" resourcePool: "/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<data_center_2>/vm/<folder2>" --- 3.4.5. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. 
Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork nodeNetworking For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 3.4.6. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets : USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment. 3.4.6.1. Specifying multiple subnets for your network Before you install an OpenShift Container Platform cluster on a vSphere host, you can specify multiple subnets for a networking implementation so that the vSphere cloud controller manager (CCM) can select the appropriate subnet for a given networking situation. 
vSphere can use the subnet for managing pods and services on your cluster. For this configuration, you must specify internal and external Classless Inter-Domain Routing (CIDR) implementations in the vSphere CCM configuration. Each CIDR implementation lists an IP address range that the CCM uses to decide what subnets interact with traffic from internal and external networks. Important Failure to configure internal and external CIDR implementations in the vSphere CCM configuration can cause the vSphere CCM to select the wrong subnet. This situation causes the following error: This configuration can cause new nodes that associate with a MachineSet object with a single subnet to become unusable as each new node receives the node.cloudprovider.kubernetes.io/uninitialized taint. These situations can cause communication issues with the Kubernetes API server that can cause installation of the cluster to fail. Prerequisites You created Kubernetes manifest files for your OpenShift Container Platform cluster. Procedure From the directory where you store your OpenShift Container Platform cluster manifest files, open the manifests/cluster-infrastructure-02-config.yml manifest file. Add a nodeNetworking object to the file and specify internal and external network subnet CIDR implementations for the object. Tip For most networking situations, consider setting the standard multiple-subnet configuration. This configuration requires that you set the same IP address ranges in the nodeNetworking.internal.networkSubnetCidr and nodeNetworking.external.networkSubnetCidr parameters. Example of a configured cluster-infrastructure-02-config.yml manifest file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain ... nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> # ... Additional resources Cluster Network Operator configuration .spec.platformSpec.vsphere.nodeNetworking 3.4.7. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 3.4.7.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 3.12. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. 
For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 3.13. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 3.14. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 3.15. 
ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 3.16. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is fd97::/64 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 3.17. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 3.18. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . 
This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . Note The default value of Restricted sets the IP forwarding to drop. ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 3.19. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Important For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.20. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Important For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. Table 3.21. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full 3.4.8. Creating the Ignition config files Because you must manually start the cluster machines, you must generate the Ignition config files that the cluster needs to make its machines. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. 
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Obtain the Ignition config files: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. Important If you created an install-config.yaml file, specify the directory that contains it. Otherwise, specify an empty directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. The following files are generated in the directory: 3.4.9. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 3.4.10. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. 
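Any HTTP server that the bootstrap machine can reach can host the file. As a minimal sketch for a lab environment, assuming a Linux host with Python 3 installed that is reachable at the placeholder address <http_server_host>, you could serve the installation directory temporarily and confirm that the file is retrievable; because the Ignition config contains cluster secrets, such as the pull secret, restrict access to this server:
USD python3 -m http.server 8080 --directory <installation_directory>
USD curl -I http://<http_server_host>:8080/bootstrap.ign
Record the URL that works here; you reference it as <bootstrap_ignition_config_url> in the next step.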
Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your data center to store your VMs. Click the VMs and Templates view. Right-click the name of your data center. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. 
If you want to encrypt your virtual machines, select Encrypt this virtual machine . See the section titled "Requirements for encrypting virtual machines" for more information. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. 
guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. Click the Finish button to complete the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Next steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 3.4.11. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. Click the Finish button to complete the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Next steps Continue to create more compute machines for your cluster. 3.4.12.
Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition. See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. 
For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 3.4.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... 
INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.4.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.4.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. 
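If you are adding several machines at once, approving each CSR by hand can be tedious. The following loop is only an illustrative sketch that combines the commands shown later in this procedure; the 30-second interval is an arbitrary choice, the loop approves every pending CSR without validating its origin, so use it only while you are actively adding known machines, and you should stop it with Ctrl+C once oc get nodes shows all of the expected machines in the Ready status. Example command USD while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 30; done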
Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.4.15.1. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 3.4.15.2. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.4.15.3. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.4.15.3.1. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. 
Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.4.16. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. 
Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 3.4.17. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 3.4.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. 
If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.4.19. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. Optional: If you created encrypted virtual machines, create an encrypted storage class . 3.5. Installing a cluster on vSphere in a disconnected environment with user-provisioned infrastructure In OpenShift Container Platform version 4.18, you can install a cluster on VMware vSphere infrastructure that you provision in a restricted network. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the vSphere platform and the installation process of OpenShift Container Platform. Use the user-provisioned infrastructure installation instructions as a guide; you are free to create the required resources through other methods. 3.5.1. Prerequisites You have completed the tasks in Preparing to install a cluster using user-provisioned infrastructure . You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. Completing the installation requires that you upload the Red Hat Enterprise Linux CoreOS (RHCOS) OVA on vSphere hosts. The machine from which you complete this process requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.5.2. About installations in restricted networks In OpenShift Container Platform 4.18, you can perform an installation that does not require an active connection to the internet to obtain software components.
Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. Important Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network. 3.5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 3.5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 3.5.4. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. Each data center can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. For a cluster that was upgraded from a release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere data center. If you want to deploy a cluster to multiple vSphere data centers, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere data centers and clusters for your OpenShift Container Platform cluster. 
You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single data center. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter data center. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter data center, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a data center, which represents a zone. After you create the tags, you must attach each tag to its respective data center or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere data centers running in a single VMware vCenter. Data center (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 3.5.5. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Important The Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain the imageContentSources section from the output of the command to mirror the repository. Obtain the contents of the certificate for your mirror registry. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml .
Unless you use a registry that RHCOS trusts by default, such as docker.io , you must provide the contents of the certificate for your mirror repository in the additionalTrustBundle section. In most cases, you must provide the certificate for your mirror. You must include the imageContentSources section from the output of the command to mirror the repository. Important The ImageContentSourcePolicy file is generated as an output of oc mirror after the mirroring process is finished. The oc mirror command generates an ImageContentSourcePolicy file, which contains the YAML needed to define ImageContentSourcePolicy . Copy the text from this file and paste it into your install-config.yaml file. You must run the oc mirror command twice. The first time you run the oc mirror command, you get a full ImageContentSourcePolicy file. The second time you run the oc mirror command, you only get the difference between the first and second run. Because of this behavior, you must always keep a backup of these files in case you need to merge them into one complete ImageContentSourcePolicy file. Keeping a backup of these two output files ensures that you have a complete ImageContentSourcePolicy file. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters 3.5.5.1. Sample install-config.yaml file for VMware vSphere You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<data_center>/host/<cluster>" datacenter: <data_center> 8 datastore: "/<data_center>/datastore/<datastore>" 9 networks: - <VM_Network_name> resourcePool: "/<data_center>/host/<cluster>/Resources/<resourcePool>" 10 folder: "/<data_center_name>/vm/<folder_name>/<subfolder_name>" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "[email protected]"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2> 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not.
Both sections define a single machine pool, so only one control plane is used. OpenShift Container Platform does not support defining multiple compute pools. 3 You must set the value of the replicas parameter to 0 . This parameter controls the number of workers that the cluster creates and manages for you, which are functions that the cluster does not perform when you use user-provisioned infrastructure. You must manually deploy worker machines for the cluster to use before you finish installing OpenShift Container Platform. 5 The number of control plane machines that you add to the cluster. Because the cluster uses this value as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 6 The cluster name that you specified in your DNS records. 7 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 8 The vSphere data center. 9 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 10 Optional: For installer-provisioned infrastructure, the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<data_center_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_data_center/host/example_cluster/Resources . 11 Optional: For installer-provisioned infrastructure, the absolute path of an existing folder where the installation program creates the virtual machines, for example, /<data_center_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the data center virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. 12 The password associated with the vSphere user. 13 The fully-qualified hostname or IP address of the vCenter server. Important The Cloud Controller Manager Operator performs a connectivity check on a provided hostname or IP address. Ensure that you specify a hostname or an IP address to a reachable vCenter server. If you provide metadata to a non-existent vCenter server, installation of the cluster fails at the bootstrap stage. 14 The vSphere disk provisioning method. 15 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 16 For <local_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example registry.example.com or registry.example.com:5000 . For <credentials> , specify the base64-encoded user name and password for your mirror registry. 17 The public portion of the default SSH key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 18 Provide the contents of the certificate file that you used for your mirror registry. 19 Provide the imageContentSources section from the output of the command to mirror the repository. 3.5.5.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.5.5.3. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. The default install-config.yaml file configuration from the release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. 
USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each region vSphere data center where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter data center object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1> Attach the zone tags to each vCenter cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1> Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements. Sample install-config.yaml file with multiple data centers defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: "/<data_center_1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<data_center_1>/datastore/<datastore1>" resourcePool: "/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<data_center_1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: "/<data_center_2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<data_center_2>/datastore/<datastore2>" resourcePool: "/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<data_center_2>/vm/<folder2>" --- 3.5.6. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. 
By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines, compute machine sets, and control plane machine sets: USD rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml Because you create and manage these resources yourself, you do not have to initialize them. You can preserve the compute machine set files to create compute machines by using the machine API, but you must update references to them to match your environment. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 3.5.7. Configuring chrony time service You must set the time server and related settings used by the chrony time service ( chronyd ) by modifying the contents of the chrony.conf file and passing those contents to your nodes as a machine config. Procedure Create a Butane config including the contents of the chrony.conf file. For example, to configure chrony on worker nodes, create a 99-worker-chrony.bu file. Note See "Creating machine configs with Butane" for information about Butane. variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony 1 2 On control plane nodes, substitute master for worker in both of these locations. 3 Specify an octal value mode for the mode field in the machine config file. After creating the file and applying the changes, the mode is converted to a decimal value. You can check the YAML file with the command oc get mc <mc-name> -o yaml . 4 Specify any valid, reachable time source, such as the one provided by your DHCP server. 
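If you also need the equivalent configuration for control plane nodes, you do not have to retype the Butane file. The following is a minimal sketch, not part of the documented procedure; it assumes the 99-worker-chrony.bu file shown above and a hypothetical output name of 99-master-chrony.bu, and it applies the worker-to-master substitution that callouts 1 and 2 describe:

USD sed 's/worker/master/g' 99-worker-chrony.bu > 99-master-chrony.bu # renames the config and switches the role label to master

You can then run the same butane command that is shown later in this procedure against 99-master-chrony.bu to produce a corresponding machine config for control plane nodes.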
Note For all-machine to all-machine communication, the Network Time Protocol (NTP) on UDP is port 123 . If an external NTP time server is configured, you must open UDP port 123 . Use Butane to generate a MachineConfig object file, 99-worker-chrony.yaml , containing the configuration to be delivered to the nodes: USD butane 99-worker-chrony.bu -o 99-worker-chrony.yaml Apply the configurations in one of two ways: If the cluster is not running yet, after you generate manifest files, add the MachineConfig object file to the <installation_directory>/openshift directory, and then continue to create the cluster. If the cluster is already running, apply the file: USD oc apply -f ./99-worker-chrony.yaml 3.5.8. Extracting the infrastructure name The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in VMware vSphere. If you plan to use the cluster identifier as the name of your virtual machine folder, you must extract it. Prerequisites You obtained the OpenShift Container Platform installation program and the pull secret for your cluster. You generated the Ignition config files for your cluster. You installed the jq package. Procedure To extract and view the infrastructure name from the Ignition config file metadata, run the following command: USD jq -r .infraID <installation_directory>/metadata.json 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output openshift-vw9j6 1 1 The output of this command is your cluster name and a random string. 3.5.9. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on user-provisioned infrastructure on VMware vSphere, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on vSphere hosts. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Prerequisites You have obtained the Ignition config files for your cluster. You have access to an HTTP server that you can access from your computer and that the machines that you create can access. You have created a vSphere cluster . Procedure Upload the bootstrap Ignition config file, which is named <installation_directory>/bootstrap.ign , that the installation program created to your HTTP server. Note the URL of this file. Save the following secondary Ignition config file for your bootstrap node to your computer as <installation_directory>/merge-bootstrap.ign : { "ignition": { "config": { "merge": [ { "source": "<bootstrap_ignition_config_url>", 1 "verification": {} } ] }, "timeouts": {}, "version": "3.2.0" }, "networkd": {}, "passwd": {}, "storage": {}, "systemd": {} } 1 Specify the URL of the bootstrap Ignition config file that you hosted. When you create the virtual machine (VM) for the bootstrap machine, you use this Ignition config file. Locate the following Ignition config files that the installation program created: <installation_directory>/master.ign <installation_directory>/worker.ign <installation_directory>/merge-bootstrap.ign Convert the Ignition config files to Base64 encoding. 
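Before you encode the files, you can optionally confirm that the merge stub you saved earlier is well-formed JSON and references the bootstrap Ignition URL that you expect. This is an optional aside, a minimal sketch that assumes the jq utility is installed, as in the earlier infrastructure name step:

USD jq '.ignition.config.merge[0].source' <installation_directory>/merge-bootstrap.ign # prints the hosted bootstrap URL or fails on malformed JSON

The command prints the value that you substituted for <bootstrap_ignition_config_url>; a parse error at this point indicates a problem to fix before encoding.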
Later in this procedure, you must add these files to the extra configuration parameter guestinfo.ignition.config.data in your VM. For example, if you use a Linux operating system, you can use the base64 command to encode the files. USD base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64 USD base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64 USD base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64 Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Obtain the RHCOS OVA image. Images are available from the RHCOS image mirror page. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. The filename contains the OpenShift Container Platform version number in the format rhcos-vmware.<architecture>.ova . In the vSphere Client, create a folder in your data center to store your VMs. Click the VMs and Templates view. Right-click the name of your data center. Click New Folder New VM and Template Folder . In the window that is displayed, enter the folder name. If you did not specify an existing folder in the install-config.yaml file, then create a folder with the same name as the infrastructure ID. You use this folder name so vCenter dynamically provisions storage in the appropriate location for its Workspace configuration. In the vSphere Client, create a template for the OVA image and then clone the template as needed. Note In the following steps, you create a template and then clone the template for all of your cluster machines. You then provide the location for the Ignition config file for that cloned machine type when you provision the VMs. From the Hosts and Clusters tab, right-click your cluster name and select Deploy OVF Template . On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded. On the Select a name and folder tab, set a Virtual machine name for your template, such as Template-RHCOS . Click the name of your vSphere cluster and select the folder you created in the step. On the Select a compute resource tab, click the name of your vSphere cluster. On the Select storage tab, configure the storage options for your VM. Select Thin Provision or Thick Provision , based on your storage preferences. Select the datastore that you specified in your install-config.yaml file. If you want to encrypt your virtual machines, select Encrypt this virtual machine . See the section titled "Requirements for encrypting virtual machines" for more information. On the Select network tab, specify the network that you configured for the cluster, if available. When creating the OVF template, do not specify values on the Customize template tab or configure the template any further. Important Do not start the original VM template. The VM template must remain off and must be cloned for new RHCOS machines. Starting the VM template configures the VM template as a VM on the platform, which prevents it from being used as a template that compute machine sets can apply configurations to. Optional: Update the configured virtual hardware version in the VM template, if necessary. 
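To decide whether an update is needed, you can first check the hardware version that the imported template currently reports. The following is a hedged sketch only; it assumes the example template name Template-RHCOS and the folder placeholders from the earlier steps, and a govc session that is already configured for your vCenter. The config.version property is reported in the form vmx-NN:

USD govc object.collect -s "/<data_center>/vm/<folder_name>/Template-RHCOS" config.version # for example, vmx-15

If the reported value is vmx-13 or lower, review the guidance that follows before you upgrade the template.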
Follow Upgrading a virtual machine to the latest hardware version in the VMware documentation for more information. Important It is recommended that you update the hardware version of the VM template to version 15 before creating VMs from it, if necessary. Using hardware version 13 for your cluster nodes running on vSphere is now deprecated. If your imported template defaults to hardware version 13, you must ensure that your ESXi host is on 6.7U3 or later before upgrading the VM template to hardware version 15. If your vSphere version is less than 6.7U3, you can skip this upgrade step; however, a future version of OpenShift Container Platform is scheduled to remove support for hardware version 13 and vSphere versions less than 6.7U3. After the template deploys, deploy a VM for a machine in the cluster. Right-click the template name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as control-plane-0 or compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Important The following configuration suggestions are for example purposes only. As a cluster administrator, you must configure resources according to the resource demands placed on your cluster. To best manage cluster resources, consider creating a resource pool from the cluster's root resource pool. Optional: Override default DHCP networking in vSphere. To enable static IP networking: Set your static IP configuration: Example command USD export IPCFG="ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]" Example command USD export IPCFG="ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8" Set the guestinfo.afterburn.initrd.network-kargs property before you boot a VM from an OVA in vSphere: Example command USD govc vm.change -vm "<vm_name>" -e "guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}" Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Locate the base-64 encoded files that you created previously in this procedure, and paste the contents of the base64-encoded Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . stealclock.enable : If this parameter was not defined, add it and specify TRUE . Create a child resource pool from the cluster's root resource pool. Perform resource allocation in this child resource pool. In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Check the console output to verify that Ignition ran. 
Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Next steps Create the rest of the machines for your cluster by following the preceding steps for each machine. Important You must create the bootstrap and control plane machines at this time. Because some pods are deployed on compute machines by default, also create at least two compute machines before you install the cluster. 3.5.10. Adding more compute machines to a cluster in vSphere You can add more compute machines to a user-provisioned OpenShift Container Platform cluster on VMware vSphere. After your vSphere template deploys in your OpenShift Container Platform cluster, you can deploy a virtual machine (VM) for a machine in that cluster. Prerequisites Obtain the base64-encoded Ignition file for your compute machines. You have access to the vSphere template that you created for your cluster. Procedure Right-click the template's name and click Clone Clone to Virtual Machine . On the Select a name and folder tab, specify a name for the VM. You might include the machine type in the name, such as compute-1 . Note Ensure that all virtual machine names across a vSphere installation are unique. On the Select a name and folder tab, select the name of the folder that you created for the cluster. On the Select a compute resource tab, select the name of a host in your data center. On the Select storage tab, select storage for your configuration and disk files. On the Select clone options tab, select Customize this virtual machine's hardware . On the Customize hardware tab, click Advanced Parameters . Add the following configuration parameter names and values by specifying data in the Attribute and Values fields. Ensure that you select the Add button for each parameter that you create. guestinfo.ignition.config.data : Paste the contents of the base64-encoded compute Ignition config file for this machine type. guestinfo.ignition.config.data.encoding : Specify base64 . disk.EnableUUID : Specify TRUE . In the Virtual Hardware panel of the Customize hardware tab, modify the specified values as required. Ensure that the amount of RAM, CPU, and disk storage meets the minimum requirements for the machine type. If many networks exist, select Add New Device > Network Adapter , and then enter your network information in the fields provided by the New Network menu item. Complete the remaining configuration steps. On clicking the Finish button, you have completed the cloning operation. From the Virtual Machines tab, right-click on your VM and then select Power Power On . Next steps Continue to create more compute machines for your cluster. 3.5.11. Disk partitioning In most cases, data partitions are originally created by installing RHCOS, rather than by installing another operating system. In such cases, the OpenShift Container Platform installer should be allowed to configure your disk partitions. However, there are two cases where you might want to intervene to override the default partitioning when installing an OpenShift Container Platform node: Create separate partitions: For greenfield installations on an empty disk, you might want to add separate storage to a partition. This is officially supported for making /var or a subdirectory of /var , such as /var/lib/etcd , a separate partition, but not both. Important For disk sizes larger than 100GB, and especially disk sizes larger than 1TB, create a separate /var partition.
See "Creating a separate /var partition" and this Red Hat Knowledgebase article for more information. Important Kubernetes supports only two file system partitions. If you add more than one partition to the original configuration, Kubernetes cannot monitor all of them. Retain existing partitions: For a brownfield installation where you are reinstalling OpenShift Container Platform on an existing node and want to retain data partitions installed from your operating system, there are both boot arguments and options to coreos-installer that allow you to retain existing data partitions. Creating a separate /var partition In general, disk partitioning for OpenShift Container Platform should be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Important For disk sizes larger than 100GB, and especially larger than 1TB, create a separate /var partition. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ... USD ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 
2 When adding a data partition to the boot disk, a minimum value of 25000 mebibytes is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the vSphere installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 3.5.12. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 3.5.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.5.14. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. 
To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information Certificate Signing Requests 3.5.15. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Configure the Operators that are not available. 3.5.15.1. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 3.5.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.5.15.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as a storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC).
The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.5.15.2.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 3.5.15.2.3. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.5.16. 
Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. 
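As a quick preliminary check, you can confirm that the API server itself reports ready before you inspect individual pods. This is an optional aside, a minimal sketch that assumes the kubeconfig exported during the earlier login step:

USD oc get --raw='/readyz' # returns ok when the API server is ready to serve requests

To verify communication with the pods themselves, continue with the steps that follow.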
To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. Register your cluster on the Cluster registration page. You can add extra compute machines after the cluster installation is completed by following Adding compute machines to vSphere . 3.5.17. Configuring vSphere DRS anti-affinity rules for control plane nodes vSphere Distributed Resource Scheduler (DRS) anti-affinity rules can be configured to support higher availability of OpenShift Container Platform Control Plane nodes. Anti-affinity rules ensure that the vSphere Virtual Machines for the OpenShift Container Platform Control Plane nodes are not scheduled to the same vSphere Host. Important The following information applies to compute DRS only and does not apply to storage DRS. The govc command is an open-source command available from VMware; it is not available from Red Hat. The govc command is not supported by the Red Hat support. Instructions for downloading and installing govc are found on the VMware documentation website. Create an anti-affinity rule by running the following command: Example command USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster \ -enable \ -anti-affinity master-0 master-1 master-2 After creating the rule, your control plane nodes are automatically migrated by vSphere so they are not running on the same hosts. This might take some time while vSphere reconciles the new rule. Successful command completion is shown in the following procedure. Note The migration occurs automatically and might cause brief OpenShift API outage or latency until the migration finishes. The vSphere DRS anti-affinity rules need to be updated manually in the event of a control plane VM name change or migration to a new vSphere Cluster. Procedure Remove any existing DRS anti-affinity rule by running the following command: USD govc cluster.rule.remove \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyCluster Example Output [13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK Create the rule again with updated names by running the following command: USD govc cluster.rule.create \ -name openshift4-control-plane-group \ -dc MyDatacenter -cluster MyOtherCluster \ -enable \ -anti-affinity master-0 master-1 master-2 3.5.18. 
Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 3.5.19. Next steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues. Optional: If you created encrypted virtual machines, create an encrypted storage class . | [
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server compute0 compute0.ocp4.example.com:443 check inter 1s server compute1 compute1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server compute0 compute0.ocp4.example.com:80 check inter 1s server compute1 compute1.ocp4.example.com:80 check inter 1s",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\": ...}' 16 sshKey: 'ssh-ed25519 AAAA...' 17",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml",
"ERROR Bootstrap failed to complete: timed out waiting for the condition ERROR Failed to wait for bootstrapping to complete. This error usually happens when there is a problem with control plane hosts that prevents the control plane operators from creating the control plane.",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: cloudConfig: key: config name: cloud-provider-config platformSpec: type: VSphere vsphere: failureDomains: - name: generated-failure-domain nodeNetworking: external: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6> internal: networkSubnetCidr: - <machine_network_cidr_ipv4> - <machine_network_cidr_ipv6>",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2",
"mkdir <installation_directory>",
"additionalTrustBundlePolicy: Proxyonly apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 0 3 controlPlane: 4 architecture: amd64 name: <parent_node> platform: {} replicas: 3 5 metadata: creationTimestamp: null name: test 6 networking: --- platform: vsphere: failureDomains: 7 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<data_center>/host/<cluster>\" datacenter: <data_center> 8 datastore: \"/<data_center>/datastore/<datastore>\" 9 networks: - <VM_Network_name> resourcePool: \"/<data_center>/host/<cluster>/Resources/<resourcePool>\" 10 folder: \"/<data_center_name>/vm/<folder_name>/<subfolder_name>\" 11 zone: <default_zone_name> vcenters: - datacenters: - <data_center> password: <password> 12 port: 443 server: <fully_qualified_domain_name> 13 user: [email protected] diskType: thin 14 fips: false 15 pullSecret: '{\"auths\":{\"<local_registry>\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}' 16 sshKey: 'ssh-ed25519 AAAA...' 17 additionalTrustBundle: | 18 -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: 19 - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release source: <source_image_1> - mirrors: - <mirror_host_name>:<mirror_port>/<repo_name>/release-images source: <source_image_2>",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"govc tags.category.create -d \"OpenShift region\" openshift-region",
"govc tags.category.create -d \"OpenShift zone\" openshift-zone",
"govc tags.create -c <region_tag_category> <region_tag>",
"govc tags.create -c <zone_tag_category> <zone_tag>",
"govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1>",
"govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1>",
"--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <data_center_1_name> - <data_center_2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <data_center_1> computeCluster: \"/<data_center_1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<data_center_1>/datastore/<datastore1>\" resourcePool: \"/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<data_center_1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <data_center_2> computeCluster: \"/<data_center_2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<data_center_2>/datastore/<datastore2>\" resourcePool: \"/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<data_center_2>/vm/<folder2>\" ---",
"./openshift-install create manifests --dir <installation_directory> 1",
"rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"variant: openshift version: 4.18.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"jq -r .infraID <installation_directory>/metadata.json 1",
"openshift-vw9j6 1",
"{ \"ignition\": { \"config\": { \"merge\": [ { \"source\": \"<bootstrap_ignition_config_url>\", 1 \"verification\": {} } ] }, \"timeouts\": {}, \"version\": \"3.2.0\" }, \"networkd\": {}, \"passwd\": {}, \"storage\": {}, \"systemd\": {} }",
"base64 -w0 <installation_directory>/master.ign > <installation_directory>/master.64",
"base64 -w0 <installation_directory>/worker.ign > <installation_directory>/worker.64",
"base64 -w0 <installation_directory>/merge-bootstrap.ign > <installation_directory>/merge-bootstrap.64",
"export IPCFG=\"ip=<ip>::<gateway>:<netmask>:<hostname>:<iface>:none nameserver=srv1 [nameserver=srv2 [nameserver=srv3 [...]]]\"",
"export IPCFG=\"ip=192.168.100.101::192.168.100.254:255.255.255.0:::none nameserver=8.8.8.8\"",
"govc vm.change -vm \"<vm_name>\" -e \"guestinfo.afterburn.initrd.network-kargs=USD{IPCFG}\"",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"mkdir USDHOME/clusterconfig",
"openshift-install create manifests --dir USDHOME/clusterconfig ? SSH Public Key ls USDHOME/clusterconfig/openshift/ 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml",
"variant: openshift version: 4.18.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_name> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true",
"butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml",
"openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.31.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.31.3 master-1 Ready master 63m v1.31.3 master-2 Ready master 64m v1.31.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.31.3 master-1 Ready master 73m v1.31.3 master-2 Ready master 74m v1.31.3 worker-0 Ready worker 11m v1.31.3 worker-1 Ready worker 11m v1.31.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resourses found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim: 1",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4",
"oc create -f pvc.yaml -n openshift-image-registry",
"oc edit config.imageregistry.operator.openshift.io -o yaml",
"storage: pvc: claim: 1",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.18.0 True False False 19m baremetal 4.18.0 True False False 37m cloud-credential 4.18.0 True False False 40m cluster-autoscaler 4.18.0 True False False 37m config-operator 4.18.0 True False False 38m console 4.18.0 True False False 26m csi-snapshot-controller 4.18.0 True False False 37m dns 4.18.0 True False False 37m etcd 4.18.0 True False False 36m image-registry 4.18.0 True False False 31m ingress 4.18.0 True False False 30m insights 4.18.0 True False False 31m kube-apiserver 4.18.0 True False False 26m kube-controller-manager 4.18.0 True False False 36m kube-scheduler 4.18.0 True False False 36m kube-storage-version-migrator 4.18.0 True False False 37m machine-api 4.18.0 True False False 29m machine-approver 4.18.0 True False False 37m machine-config 4.18.0 True False False 36m marketplace 4.18.0 True False False 37m monitoring 4.18.0 True False False 29m network 4.18.0 True False False 38m node-tuning 4.18.0 True False False 37m openshift-apiserver 4.18.0 True False False 32m openshift-controller-manager 4.18.0 True False False 30m openshift-samples 4.18.0 True False False 32m operator-lifecycle-manager 4.18.0 True False False 37m operator-lifecycle-manager-catalog 4.18.0 True False False 37m operator-lifecycle-manager-packageserver 4.18.0 True False False 32m service-ca 4.18.0 True False False 38m storage 4.18.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster -enable -anti-affinity master-0 master-1 master-2",
"govc cluster.rule.remove -name openshift4-control-plane-group -dc MyDatacenter -cluster MyCluster",
"[13-10-22 09:33:24] Reconfigure /MyDatacenter/host/MyCluster...OK",
"govc cluster.rule.create -name openshift4-control-plane-group -dc MyDatacenter -cluster MyOtherCluster -enable -anti-affinity master-0 master-1 master-2"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_vmware_vsphere/user-provisioned-infrastructure |
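The command list above gathers the CLI steps for the vSphere user-provisioned installation flow, including the documented one-liner for approving pending certificate signing requests. As a purely illustrative sketch that is not part of the documented procedure, the approval step is sometimes wrapped in a small loop while compute machines join the cluster; this assumes oc is on the PATH, KUBECONFIG points at the installer-generated kubeconfig, and GNU xargs is available.

```bash
#!/usr/bin/env bash
# Illustrative sketch only: approve pending node CSRs in a loop while worker
# machines join, then stop once the expected number of workers reports Ready.
set -euo pipefail

EXPECTED_WORKERS=${EXPECTED_WORKERS:-2}   # assumption: match your compute replica count

for attempt in $(seq 1 30); do
  # Same go-template as the documented one-liner: select CSRs that have no status yet.
  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve || true
  # Count worker nodes reporting Ready (NotReady is excluded by the word match).
  ready=$(oc get nodes -l node-role.kubernetes.io/worker --no-headers 2>/dev/null | grep -cw Ready || true)
  if [ "${ready:-0}" -ge "${EXPECTED_WORKERS}" ]; then
    echo "${ready} worker node(s) Ready after ${attempt} check(s)."
    break
  fi
  sleep 30
done
```

Blanket approval of node client CSRs has security implications, so reviewing each request interactively, as in the documented steps, is generally preferable.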
Chapter 10. Creating quick start tutorials in the web console | Chapter 10. Creating quick start tutorials in the web console If you are creating quick start tutorials for the OpenShift Container Platform web console, follow these guidelines to maintain a consistent user experience across all quick starts. 10.1. Understanding quick starts A quick start is a guided tutorial with user tasks. In the web console, you can access quick starts under the Help menu. They are especially useful for getting oriented with an application, Operator, or other product offering. A quick start primarily consists of tasks and steps. Each task has multiple steps, and each quick start has multiple tasks. For example: Task 1 Step 1 Step 2 Step 3 Task 2 Step 1 Step 2 Step 3 Task 3 Step 1 Step 2 Step 3 10.2. Quick start user workflow When you interact with an existing quick start tutorial, this is the expected workflow experience: In the Administrator or Developer perspective, click the Help icon and select Quick Starts . Click a quick start card. In the panel that appears, click Start . Complete the on-screen instructions, then click Next . In the Check your work module that appears, answer the question to confirm that you successfully completed the task. If you select Yes , click Next to continue to the next task. If you select No , repeat the task instructions and check your work again. Repeat steps 1 through 6 above to complete the remaining tasks in the quick start. After completing the final task, click Close to close the quick start. 10.3. Quick start components A quick start consists of the following sections: Card : The catalog tile that provides the basic information of the quick start, including title, description, time commitment, and completion status Introduction : A brief overview of the goal and tasks of the quick start Task headings : Hyper-linked titles for each task in the quick start Check your work module : A module for a user to confirm that they completed a task successfully before advancing to the next task in the quick start Hints : An animation to help users identify specific areas of the product Next and back buttons : Buttons for navigating the steps and modules within each task of a quick start Final screen buttons : Buttons for closing the quick start, going back to tasks within the quick start, and viewing all quick starts The main content area of a quick start includes the following sections: Card copy Introduction Task steps Modals and in-app messaging Check your work module 10.4. Contributing quick starts OpenShift Container Platform introduces the quick start custom resource, which is defined by a ConsoleQuickStart object. Operators and administrators can use this resource to contribute quick starts to the cluster. Prerequisites You must have cluster administrator privileges. Procedure To create a new quick start, run: USD oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml Run: USD oc create -f my-quick-start.yaml Update the YAML file using the guidance outlined in this documentation. Save your edits. 10.4.1. Viewing the quick start API documentation Procedure To see the quick start API documentation, run: USD oc explain consolequickstarts Run oc explain -h for more information about oc explain usage. 10.4.2. Mapping the elements in the quick start to the quick start CR This section helps you visually map parts of the quick start custom resource (CR) with where they appear in the quick start within the web console. 10.4.2.1.
conclusion element Viewing the conclusion element in the YAML file ... summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1 1 conclusion text Viewing the conclusion element in the web console The conclusion appears in the last section of the quick start. 10.4.2.2. description element Viewing the description element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1 ... 1 description text Viewing the description element in the web console The description appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.3. displayName element Viewing the displayName element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10 1 displayName text. Viewing the displayName element in the web console The display name appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.4. durationMinutes element Viewing the durationMinutes element in the YAML file apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1 1 durationMinutes value, in minutes. This value defines how long the quick start should take to complete. Viewing the durationMinutes element in the web console The duration minutes element appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.5. icon element Viewing the icon element in the YAML file ... spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 
displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUuNjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xO
TEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg== ... 1 The icon defined as a base64 value. Viewing the icon element in the web console The icon appears on the introductory tile of the quick start on the Quick Starts page. 10.4.2.6. introduction element Viewing the introduction element in the YAML file ... introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural "Spring on OpenShift" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift ... 1 The introduction introduces the quick start and lists the tasks within it. Viewing the introduction element in the web console After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. 10.4.3. Adding a custom icon to a quick start A default icon is provided for all quick starts. You can provide your own custom icon. Procedure Find the .svg file that you want to use as your custom icon. Use an online tool to convert the text to base64 . In the YAML file, add icon: >- , then on the line include data:image/svg+xml;base64 followed by the output from the base64 conversion. For example: icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld. 10.4.4. Limiting access to a quick start Not all quick starts should be available for everyone. The accessReviewResources section of the YAML file provides the ability to limit access to the quick start. To only allow the user to access the quick start if they have the ability to create HelmChartRepository resources, use the following configuration: accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create To only allow the user to access the quick start if they have the ability to list Operator groups and package manifests, thus ability to install Operators, use the following configuration: accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list 10.4.5. 
Linking to other quick starts Procedure In the nextQuickStart section of the YAML file, provide the name , not the displayName , of the quick start to which you want to link. For example: nextQuickStart: - add-healthchecks 10.4.6. Supported tags for quick starts Write your quick start content in markdown using these tags. The markdown is converted to HTML. Tag Description 'b', Defines bold text. 'img', Embeds an image. 'i', Defines italic text. 'strike', Defines strike-through text. 's', Defines smaller text 'del', Defines smaller text. 'em', Defines emphasized text. 'strong', Defines important text. 'a', Defines an anchor tag. 'p', Defines paragraph text. 'h1', Defines a level 1 heading. 'h2', Defines a level 2 heading. 'h3', Defines a level 3 heading. 'h4', Defines a level 4 heading. 'ul', Defines an unordered list. 'ol', Defines an ordered list. 'li', Defines a list item. 'code', Defines a text as code. 'pre', Defines a block of preformatted text. 'button', Defines a button in text. 10.4.7. Quick start highlighting markdown reference The highlighting, or hint, feature enables Quick Starts to contain a link that can highlight and animate a component of the web console. The markdown syntax contains: Bracketed link text The highlight keyword, followed by the ID of the element that you want to animate 10.4.7.1. Perspective switcher [Perspective switcher]{{highlight qs-perspective-switcher}} 10.4.7.2. Administrator perspective navigation links [Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}} 10.4.7.3. Developer perspective navigation links [Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}} 10.4.7.4. Common navigation links [Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}} 10.4.7.5. Masthead links [CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}} 10.4.8. Code snippet markdown reference You can execute a CLI code snippet when it is included in a quick start from the web console. To use this feature, you must first install the Web Terminal Operator. The web terminal and code snippet actions that execute in the web terminal are not present if you do not install the Web Terminal Operator. Alternatively, you can copy a code snippet to the clipboard regardless of whether you have the Web Terminal Operator installed or not. 10.4.8.1. Syntax for inline code snippets Note If the execute syntax is used, the Copy to clipboard action is present whether you have the Web Terminal Operator installed or not. 10.4.8.2. Syntax for multi-line code snippets 10.5. Quick start content guidelines 10.5.1. Card copy You can customize the title and description on a quick start card, but you cannot customize the status. Keep your description to one to two sentences. 
Start with a verb and communicate the goal of the user. Correct example: 10.5.2. Introduction After clicking a quick start card, a side panel slides in that introduces the quick start and lists the tasks within it. Make your introduction content clear, concise, informative, and friendly. State the outcome of the quick start. A user should understand the purpose of the quick start before they begin. Give action to the user, not the quick start. Correct example : Incorrect example : The introduction should be a maximum of four to five sentences, depending on the complexity of the feature. A long introduction can overwhelm the user. List the quick start tasks after the introduction content, and start each task with a verb. Do not specify the number of tasks because the copy would need to be updated every time a task is added or removed. Correct example : Incorrect example : 10.5.3. Task steps After the user clicks Start , a series of steps appears that they must perform to complete the quick start. Follow these general guidelines when writing task steps: Use "Click" for buttons and labels. Use "Select" for checkboxes, radio buttons, and drop-down menus. Use "Click" instead of "Click on" Correct example : Incorrect example : Tell users how to navigate between Administrator and Developer perspectives. Even if you think a user might already be in the appropriate perspective, give them instructions on how to get there so that they are definitely where they need to be. Examples: Use the "Location, action" structure. Tell a user where to go before telling them what to do. Correct example : Incorrect example : Keep your product terminology capitalization consistent. If you must specify a menu type or list as a dropdown, write "dropdown" as one word without a hyphen. Clearly distinguish between a user action and additional information on product functionality. User action : Additional information : Avoid directional language, like "In the top-right corner, click the icon". Directional language becomes outdated every time UI layouts change. Also, a direction for desktop users might not be accurate for users with a different screen size. Instead, identify something using its name. Correct example : Incorrect example : Do not identify items by color alone, like "Click the gray circle". Color identifiers are not useful for sight-limited users, especially colorblind users. Instead, identify an item using its name or copy, like button copy. Correct example : Incorrect example : Use the second-person point of view, you, consistently: Correct example : Incorrect example : 10.5.4. Check your work module After a user completes a step, a Check your work module appears. This module prompts the user to answer a yes or no question about the step results, which gives them the opportunity to review their work. For this module, you only need to write a single yes or no question. If the user answers Yes , a check mark will appear. If the user answers No , an error message appears with a link to relevant documentation, if necessary. The user then has the opportunity to go back and try again. 10.5.5. Formatting UI elements Format UI elements using these guidelines: Copy for buttons, dropdowns, tabs, fields, and other UI controls: Write the copy as it appears in the UI and bold it. All other UI elements-including page, window, and panel names: Write the copy as it appears in the UI and bold it. Code or user-entered text: Use monospaced font. 
Hints: If a hint to a navigation or masthead element is included, style the text as you would a link. CLI commands: Use monospaced font. In running text, use a bold, monospaced font for a command. If a parameter or option is a variable value, use an italic monospaced font. Use a bold, monospaced font for the parameter and a monospaced font for the option. 10.6. Additional resources For voice and tone requirements, refer to PatternFly's brand voice and tone guidelines . For other UX content guidance, refer to all areas of PatternFly's UX writing style guide . | [
"oc get -o yaml consolequickstart spring-with-s2i > my-quick-start.yaml",
"oc create -f my-quick-start.yaml",
"oc explain consolequickstarts",
"summary: failed: Try the steps again. success: Your Spring application is running. title: Run the Spring application conclusion: >- Your Spring application is deployed and ready. 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' 1",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring 1 durationMinutes: 10",
"apiVersion: console.openshift.io/v1 kind: ConsoleQuickStart metadata: name: spring-with-s2i spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 1",
"spec: description: 'Import a Spring Application from git, build, and deploy it onto OpenShift.' displayName: Get started with Spring durationMinutes: 10 icon: >- 1 data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIGlkPSJMYXllcl8xIiBkYXRhLW5hbWU9IkxheWVyIDEiIHZpZXdCb3g9IjAgMCAxMDI0IDEwMjQiPjxkZWZzPjxzdHlsZT4uY2xzLTF7ZmlsbDojMTUzZDNjO30uY2xzLTJ7ZmlsbDojZDhkYTlkO30uY2xzLTN7ZmlsbDojNThjMGE4O30uY2xzLTR7ZmlsbDojZmZmO30uY2xzLTV7ZmlsbDojM2Q5MTkxO308L3N0eWxlPjwvZGVmcz48dGl0bGU+c25vd2Ryb3BfaWNvbl9yZ2JfZGVmYXVsdDwvdGl0bGU+PHBhdGggY2xhc3M9ImNscy0xIiBkPSJNMTAxMi42OSw1OTNjLTExLjEyLTM4LjA3LTMxLTczLTU5LjIxLTEwMy44LTkuNS0xMS4zLTIzLjIxLTI4LjI5LTM5LjA2LTQ3Ljk0QzgzMy41MywzNDEsNzQ1LjM3LDIzNC4xOCw2NzQsMTY4Ljk0Yy01LTUuMjYtMTAuMjYtMTAuMzEtMTUuNjUtMTUuMDdhMjQ2LjQ5LDI0Ni40OSwwLDAsMC0zNi41NS0yNi44LDE4Mi41LDE4Mi41LDAsMCwwLTIwLjMtMTEuNzcsMjAxLjUzLDIwMS41MywwLDAsMC00My4xOS0xNUExNTUuMjQsMTU1LjI0LDAsMCwwLDUyOCw5NS4yYy02Ljc2LS42OC0xMS43NC0uODEtMTQuMzktLjgxaDBsLTEuNjIsMC0xLjYyLDBhMTc3LjMsMTc3LjMsMCwwLDAtMzEuNzcsMy4zNSwyMDguMjMsMjA4LjIzLDAsMCwwLTU2LjEyLDE3LjU2LDE4MSwxODEsMCwwLDAtMjAuMjcsMTEuNzUsMjQ3LjQzLDI0Ny40MywwLDAsMC0zNi41NywyNi44MUMzNjAuMjUsMTU4LjYyLDM1NSwxNjMuNjgsMzUwLDE2OWMtNzEuMzUsNjUuMjUtMTU5LjUsMTcyLTI0MC4zOSwyNzIuMjhDOTMuNzMsNDYwLjg4LDgwLDQ3Ny44Nyw3MC41Miw0ODkuMTcsNDIuMzUsNTIwLDIyLjQzLDU1NC45LDExLjMxLDU5MywuNzIsNjI5LjIyLTEuNzMsNjY3LjY5LDQsNzA3LjMxLDE1LDc4Mi40OSw1NS43OCw4NTkuMTIsMTE4LjkzLDkyMy4wOWEyMiwyMiwwLDAsMCwxNS41OSw2LjUyaDEuODNsMS44Ny0uMzJjODEuMDYtMTMuOTEsMTEwLTc5LjU3LDE0My40OC0xNTUuNiwzLjkxLTguODgsNy45NS0xOC4wNSwxMi4yLTI3LjQzcTUuNDIsOC41NCwxMS4zOSwxNi4yM2MzMS44NSw0MC45MSw3NS4xMiw2NC42NywxMzIuMzIsNzIuNjNsMTguOCwyLjYyLDQuOTUtMTguMzNjMTMuMjYtNDkuMDcsMzUuMy05MC44NSw1MC42NC0xMTYuMTksMTUuMzQsMjUuMzQsMzcuMzgsNjcuMTIsNTAuNjQsMTE2LjE5bDUsMTguMzMsMTguOC0yLjYyYzU3LjItOCwxMDAuNDctMzEuNzIsMTMyLjMyLTcyLjYzcTYtNy42OCwxMS4zOS0xNi4yM2M0LjI1LDkuMzgsOC4yOSwxOC41NSwxMi4yLDI3LjQzLDMzLjQ5LDc2LDYyLjQyLDE0MS42OSwxNDMuNDgsMTU1LjZsMS44MS4zMWgxLjg5YTIyLDIyLDAsMCwwLDE1LjU5LTYuNTJjNjMuMTUtNjQsMTAzLjk1LTE0MC42LDExNC44OS0yMTUuNzhDMTAyNS43Myw2NjcuNjksMTAyMy4yOCw2MjkuMjIsMTAxMi42OSw1OTNaIi8+PHBhdGggY2xhc3M9ImNscy0yIiBkPSJNMzY0LjE1LDE4NS4yM2MxNy44OS0xNi40LDM0LjctMzAuMTUsNDkuNzctNDAuMTFhMjEyLDIxMiwwLDAsMSw2NS45My0yNS43M0ExOTgsMTk4LDAsMCwxLDUxMiwxMTYuMjdhMTk2LjExLDE5Ni4xMSwwLDAsMSwzMiwzLjFjNC41LjkxLDkuMzYsMi4wNiwxNC41MywzLjUyLDYwLjQxLDIwLjQ4LDg0LjkyLDkxLjA1LTQ3LjQ0LDI0OC4wNi0yOC43NSwzNC4xMi0xNDAuNywxOTQuODQtMTg0LjY2LDI2OC40MmE2MzAuODYsNjMwLjg2LDAsMCwwLTMzLjIyLDU4LjMyQzI3Niw2NTUuMzQsMjY1LjQsNTk4LDI2NS40LDUyMC4yOSwyNjUuNCwzNDAuNjEsMzExLjY5LDI0MC43NCwzNjQuMTUsMTg1LjIzWiIvPjxwYXRoIGNsYXNzPSJjbHMtMyIgZD0iTTUyNy41NCwzODQuODNjODQuMDYtOTkuNywxMTYuMDYtMTc3LjI4LDk1LjIyLTIzMC43NCwxMS42Miw4LjY5LDI0LDE5LjIsMzcuMDYsMzEuMTMsNTIuNDgsNTUuNSw5OC43OCwxNTUuMzgsOTguNzgsMzM1LjA3LDAsNzcuNzEtMTAuNiwxMzUuMDUtMjcuNzcsMTc3LjRhNjI4LjczLDYyOC43MywwLDAsMC0zMy4yMy01OC4zMmMtMzktNjUuMjYtMTMxLjQ1LTE5OS0xNzEuOTMtMjUyLjI3QzUyNi4zMywzODYuMjksNTI3LDM4NS41Miw1MjcuNTQsMzg0LjgzWiIvPjxwYXRoIGNsYXNzPSJjbHMtNCIgZD0iTTEzNC41OCw5MDguMDdoLS4wNmEuMzkuMzksMCwwLDEtLjI3LS4xMWMtMTE5LjUyLTEyMS4wNy0xNTUtMjg3LjQtNDcuNTQtNDA0LjU4LDM0LjYzLTQxLjE0LDEyMC0xNTEuNiwyMDIuNzUtMjQyLjE5LTMuMTMsNy02LjEyLDE0LjI1LTguOTIsMjEuNjktMjQuMzQsNjQuNDUtMzYuNjcsMTQ0LjMyLTM2LjY3LDIzNy40MSwwLDU2LjUzLDUuNTgsMTA2LDE2LjU5LDE0Ny4xNEEzMDcuNDksMzA3LjQ5LDAsMCwwLDI4MC45MSw3MjNDMjM3LDgxNi44OCwyMTYuOTMsODkzLjkzLDEzNC41OCw5MDguMDdaIi8+PHBhdGggY2xhc3M9ImNscy01IiBkPSJNNTgzLjQzLDgxMy43OUM1NjAuMTgsNzI3LjcyLDUxMiw2NjQuMTUsNTEyLDY2NC4xNXMtNDguMTcsNjMuNTctNzEuNDMsMTQ5LjY0Yy00OC40NS02Ljc0LTEwMC45MS0yNy41Mi0xMzUu
NjYtOTEuMThhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Ny03MS41NGwuMjEtLjMyLjE5LS4zM2MzOC02My42MywxMjYuNC0xOTEuMzcsMTY3LjEyLTI0NS42Niw0MC43MSw1NC4yOCwxMjkuMSwxODIsMTY3LjEyLDI0NS42NmwuMTkuMzMuMjEuMzJhNjQ1LjY4LDY0NS42OCwwLDAsMSwzOS41Nyw3MS41NEM2ODQuMzQsNzg2LjI3LDYzMS44OCw4MDcuMDUsNTgzLjQzLDgxMy43OVoiLz48cGF0aCBjbGFzcz0iY2xzLTQiIGQ9Ik04ODkuNzUsOTA4YS4zOS4zOSwwLDAsMS0uMjcuMTFoLS4wNkM4MDcuMDcsODkzLjkzLDc4Nyw4MTYuODgsNzQzLjA5LDcyM2EzMDcuNDksMzA3LjQ5LDAsMCwwLDIwLjQ1LTU1LjU0YzExLTQxLjExLDE2LjU5LTkwLjYxLDE2LjU5LTE0Ny4xNCwwLTkzLjA4LTEyLjMzLTE3My0zNi42Ni0yMzcuNHEtNC4yMi0xMS4xNi04LjkzLTIxLjdjODIuNzUsOTAuNTksMTY4LjEyLDIwMS4wNSwyMDIuNzUsMjQyLjE5QzEwNDQuNzksNjIwLjU2LDEwMDkuMjcsNzg2Ljg5LDg4OS43NSw5MDhaIi8+PC9zdmc+Cg==",
"introduction: >- 1 **Spring** is a Java framework for building applications based on a distributed microservices architecture. - Spring enables easy packaging and configuration of Spring applications into a self-contained executable application which can be easily deployed as a container to OpenShift. - Spring applications can integrate OpenShift capabilities to provide a natural \"Spring on OpenShift\" developer experience for both existing and net-new Spring applications. For example: - Externalized configuration using Kubernetes ConfigMaps and integration with Spring Cloud Kubernetes - Service discovery using Kubernetes Services - Load balancing with Replication Controllers - Kubernetes health probes and integration with Spring Actuator - Metrics: Prometheus, Grafana, and integration with Spring Cloud Sleuth - Distributed tracing with Istio & Jaeger tracing - Developer tooling through Red Hat OpenShift and Red Hat CodeReady developer tooling to quickly scaffold new Spring projects, gain access to familiar Spring APIs in your favorite IDE, and deploy to Red Hat OpenShift",
"icon: >- data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHJvbGU9ImltZyIgdmlld.",
"accessReviewResources: - group: helm.openshift.io resource: helmchartrepositories verb: create",
"accessReviewResources: - group: operators.coreos.com resource: operatorgroups verb: list - group: packages.operators.coreos.com resource: packagemanifests verb: list",
"nextQuickStart: - add-healthchecks",
"[Perspective switcher]{{highlight qs-perspective-switcher}}",
"[Home]{{highlight qs-nav-home}} [Operators]{{highlight qs-nav-operators}} [Workloads]{{highlight qs-nav-workloads}} [Serverless]{{highlight qs-nav-serverless}} [Networking]{{highlight qs-nav-networking}} [Storage]{{highlight qs-nav-storage}} [Service catalog]{{highlight qs-nav-servicecatalog}} [Compute]{{highlight qs-nav-compute}} [User management]{{highlight qs-nav-usermanagement}} [Administration]{{highlight qs-nav-administration}}",
"[Add]{{highlight qs-nav-add}} [Topology]{{highlight qs-nav-topology}} [Search]{{highlight qs-nav-search}} [Project]{{highlight qs-nav-project}} [Helm]{{highlight qs-nav-helm}}",
"[Builds]{{highlight qs-nav-builds}} [Pipelines]{{highlight qs-nav-pipelines}} [Monitoring]{{highlight qs-nav-monitoring}}",
"[CloudShell]{{highlight qs-masthead-cloudshell}} [Utility Menu]{{highlight qs-masthead-utilitymenu}} [User Menu]{{highlight qs-masthead-usermenu}} [Applications]{{highlight qs-masthead-applications}} [Import]{{highlight qs-masthead-import}} [Help]{{highlight qs-masthead-help}} [Notifications]{{highlight qs-masthead-notifications}}",
"`code block`{{copy}} `code block`{{execute}}",
"``` multi line code block ```{{copy}} ``` multi line code block ```{{execute}}",
"Create a serverless application.",
"In this quick start, you will deploy a sample application to {product-title}.",
"This quick start shows you how to deploy a sample application to {product-title}.",
"Tasks to complete: Create a serverless application; Connect an event source; Force a new revision",
"You will complete these 3 tasks: Creating a serverless application; Connecting an event source; Forcing a new revision",
"Click OK.",
"Click on the OK button.",
"Enter the Developer perspective: In the main navigation, click the dropdown menu and select Developer. Enter the Administrator perspective: In the main navigation, click the dropdown menu and select Admin.",
"In the node.js deployment, hover over the icon.",
"Hover over the icon in the node.js deployment.",
"Change the time range of the dashboard by clicking the dropdown menu and selecting time range.",
"To look at data in a specific time frame, you can change the time range of the dashboard.",
"In the navigation menu, click Settings.",
"In the left-hand menu, click Settings.",
"The success message indicates a connection.",
"The message with a green icon indicates a connection.",
"Set up your environment.",
"Let's set up our environment."
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/web_console/creating-quick-start-tutorials |
User guide | User guide Red Hat OpenShift Dev Spaces 3.15 Using Red Hat OpenShift Dev Spaces 3.15 Jana Vrbkova [email protected] Red Hat Developer Group Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.15/html/user_guide/index |
Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] | Chapter 3. CustomResourceDefinition [apiextensions.k8s.io/v1] Description CustomResourceDefinition represents a resource that should be exposed on the API server. Its name MUST be in the format <.spec.name>.<.spec.group>. Type object Required spec 3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CustomResourceDefinitionSpec describes how a user wants their resource to appear status object CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition 3.1.1. .spec Description CustomResourceDefinitionSpec describes how a user wants their resource to appear Type object Required group names scope versions Property Type Description conversion object CustomResourceConversion describes how to convert different versions of a CR. group string group is the API group of the defined custom resource. The custom resources are served under /apis/<group>/... . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). names object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition preserveUnknownFields boolean preserveUnknownFields indicates that object fields which are not specified in the OpenAPI schema should be preserved when persisting to storage. apiVersion, kind, metadata and known fields inside metadata are always preserved. This field is deprecated in favor of setting x-preserve-unknown-fields to true in spec.versions[*].schema.openAPIV3Schema . See https://kubernetes.io/docs/tasks/extend-kubernetes/custom-resources/custom-resource-definitions/#field-pruning for details. scope string scope indicates whether the defined custom resource is cluster- or namespace-scoped. Allowed values are Cluster and Namespaced . versions array versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. versions[] object CustomResourceDefinitionVersion describes a version for CRD. 3.1.2. .spec.conversion Description CustomResourceConversion describes how to convert different versions of a CR. 
Type object Required strategy Property Type Description strategy string strategy specifies how custom resources are converted between versions. Allowed values are: - "None" : The converter only change the apiVersion and would not touch any other field in the custom resource. - "Webhook" : API Server will call to an external webhook to do the conversion. Additional information is needed for this option. This requires spec.preserveUnknownFields to be false, and spec.conversion.webhook to be set. webhook object WebhookConversion describes how to call a conversion webhook 3.1.3. .spec.conversion.webhook Description WebhookConversion describes how to call a conversion webhook Type object Required conversionReviewVersions Property Type Description clientConfig object WebhookClientConfig contains the information to make a TLS connection with the webhook. conversionReviewVersions array (string) conversionReviewVersions is an ordered list of preferred ConversionReview versions the Webhook expects. The API server will use the first version in the list which it supports. If none of the versions specified in this list are supported by API server, conversion will fail for the custom resource. If a persisted Webhook configuration specifies allowed versions and does not include any versions known to the API Server, calls to the webhook will fail. 3.1.4. .spec.conversion.webhook.clientConfig Description WebhookClientConfig contains the information to make a TLS connection with the webhook. Type object Property Type Description caBundle string caBundle is a PEM encoded CA bundle which will be used to validate the webhook's server certificate. If unspecified, system trust roots on the apiserver are used. service object ServiceReference holds a reference to Service.legacy.k8s.io url string url gives the location of the webhook, in standard URL form ( scheme://host:port/path ). Exactly one of url or service must be specified. The host should not refer to a service running in the cluster; use the service field instead. The host might be resolved via external DNS in some apiservers (e.g., kube-apiserver cannot resolve in-cluster DNS as that would be a layering violation). host may also be an IP address. Please note that using localhost or 127.0.0.1 as a host is risky unless you take great care to run this webhook on all hosts which run an apiserver which might need to make calls to this webhook. Such installs are likely to be non-portable, i.e., not easy to turn up in a new cluster. The scheme must be "https"; the URL must begin with "https://". A path is optional, and if present may be any string permissible in a URL. You may use the path to pass an arbitrary string to the webhook, for example, a cluster identifier. Attempting to use a user or basic auth e.g. "user:password@" is not allowed. Fragments ("#... ") and query parameters ("?... ") are not allowed, either. 3.1.5. .spec.conversion.webhook.clientConfig.service Description ServiceReference holds a reference to Service.legacy.k8s.io Type object Required namespace name Property Type Description name string name is the name of the service. Required namespace string namespace is the namespace of the service. Required path string path is an optional URL path at which the webhook will be contacted. port integer port is an optional service port at which the webhook will be contacted. port should be a valid port number (1-65535, inclusive). Defaults to 443 for backward compatibility. 3.1.6. 
.spec.names Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.7. .spec.versions Description versions is the list of all API versions of the defined custom resource. Version names are used to compute the order in which served versions are listed in API discovery. If the version string is "kube-like", it will sort above non "kube-like" version strings, which are ordered lexicographically. "Kube-like" versions start with a "v", then are followed by a number (the major version), then optionally the string "alpha" or "beta" and another number (the minor version). These are sorted first by GA > beta > alpha (where GA is a version with no suffix such as beta or alpha), and then by comparing major version, then minor version. An example sorted list of versions: v10, v2, v1, v11beta2, v10beta3, v3beta1, v12alpha1, v11alpha2, foo1, foo10. Type array 3.1.8. .spec.versions[] Description CustomResourceDefinitionVersion describes a version for CRD. Type object Required name served storage Property Type Description additionalPrinterColumns array additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. additionalPrinterColumns[] object CustomResourceColumnDefinition specifies a column for server side printing. deprecated boolean deprecated indicates this version of the custom resource API is deprecated. When set to true, API requests to this version receive a warning header in the server response. Defaults to false. deprecationWarning string deprecationWarning overrides the default warning returned to API clients. May only be set when deprecated is true. The default warning indicates this version is deprecated and recommends use of the newest served version of equal or greater stability, if one exists. name string name is the version name, e.g. "v1", "v2beta1", etc. The custom resources are served under this version at /apis/<group>/<version>/... if served is true. schema object CustomResourceValidation is a list of validation methods for CustomResources. 
served boolean served is a flag enabling/disabling this version from being served via REST APIs storage boolean storage indicates this version should be used when persisting custom resources to storage. There must be exactly one version with storage=true. subresources object CustomResourceSubresources defines the status and scale subresources for CustomResources. 3.1.9. .spec.versions[].additionalPrinterColumns Description additionalPrinterColumns specifies additional columns returned in Table output. See https://kubernetes.io/docs/reference/using-api/api-concepts/#receiving-resources-as-tables for details. If no columns are specified, a single column displaying the age of the custom resource is used. Type array 3.1.10. .spec.versions[].additionalPrinterColumns[] Description CustomResourceColumnDefinition specifies a column for server side printing. Type object Required name type jsonPath Property Type Description description string description is a human readable description of this column. format string format is an optional OpenAPI type definition for this column. The 'name' format is applied to the primary identifier column to assist in clients identifying column is the resource name. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. jsonPath string jsonPath is a simple JSON path (i.e. with array notation) which is evaluated against each custom resource to produce the value for this column. name string name is a human readable name for the column. priority integer priority is an integer defining the relative importance of this column compared to others. Lower numbers are considered higher priority. Columns that may be omitted in limited space scenarios should be given a priority greater than 0. type string type is an OpenAPI type definition for this column. See https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#data-types for details. 3.1.11. .spec.versions[].schema Description CustomResourceValidation is a list of validation methods for CustomResources. Type object Property Type Description openAPIV3Schema `` openAPIV3Schema is the OpenAPI v3 schema to use for validation and pruning. 3.1.12. .spec.versions[].subresources Description CustomResourceSubresources defines the status and scale subresources for CustomResources. Type object Property Type Description scale object CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. status object CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza 3.1.13. .spec.versions[].subresources.scale Description CustomResourceSubresourceScale defines how to serve the scale subresource for CustomResources. Type object Required specReplicasPath statusReplicasPath Property Type Description labelSelectorPath string labelSelectorPath defines the JSON path inside of a custom resource that corresponds to Scale status.selector . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status or .spec . Must be set to work with HorizontalPodAutoscaler. 
The field pointed by this JSON path must be a string field (not a complex selector struct) which contains a serialized label selector in string form. More info: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions#scale-subresource If there is no value under the given path in the custom resource, the status.selector value in the /scale subresource will default to the empty string. specReplicasPath string specReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale spec.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .spec . If there is no value under the given path in the custom resource, the /scale subresource will return an error on GET. statusReplicasPath string statusReplicasPath defines the JSON path inside of a custom resource that corresponds to Scale status.replicas . Only JSON paths without the array notation are allowed. Must be a JSON Path under .status . If there is no value under the given path in the custom resource, the status.replicas value in the /scale subresource will default to 0. 3.1.14. .spec.versions[].subresources.status Description CustomResourceSubresourceStatus defines how to serve the status subresource for CustomResources. Status is represented by the .status JSON path inside of a CustomResource. When set, * exposes a /status subresource for the custom resource * PUT requests to the /status subresource take a custom resource object, and ignore changes to anything except the status stanza * PUT/POST/PATCH requests to the custom resource ignore changes to the status stanza Type object 3.1.15. .status Description CustomResourceDefinitionStatus indicates the state of the CustomResourceDefinition Type object Property Type Description acceptedNames object CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition conditions array conditions indicate state for particular aspects of a CustomResourceDefinition conditions[] object CustomResourceDefinitionCondition contains details for the current condition of this pod. storedVersions array (string) storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list. 3.1.16. .status.acceptedNames Description CustomResourceDefinitionNames indicates the names to serve this CustomResourceDefinition Type object Required plural kind Property Type Description categories array (string) categories is a list of grouped resources this custom resource belongs to (e.g. 'all'). This is published in API discovery documents, and used by clients to support invocations like kubectl get all . kind string kind is the serialized kind of the resource. It is normally CamelCase and singular. Custom resource instances will use this value as the kind attribute in API calls. listKind string listKind is the serialized kind of the list for this resource. Defaults to "`kind`List". plural string plural is the plural name of the resource to serve. The custom resources are served under /apis/<group>/<version>/... /<plural> . Must match the name of the CustomResourceDefinition (in the form <names.plural>.<group> ). Must be all lowercase. 
shortNames array (string) shortNames are short names for the resource, exposed in API discovery documents, and used by clients to support invocations like kubectl get <shortname> . It must be all lowercase. singular string singular is the singular name of the resource. It must be all lowercase. Defaults to lowercased kind . 3.1.17. .status.conditions Description conditions indicate state for particular aspects of a CustomResourceDefinition Type array 3.1.18. .status.conditions[] Description CustomResourceDefinitionCondition contains details for the current condition of this pod. Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime last time the condition transitioned from one status to another. message string message is a human-readable message indicating details about last transition. reason string reason is a unique, one-word, CamelCase reason for the condition's last transition. status string status is the status of the condition. Can be True, False, Unknown. type string type is the type of the condition. Types include Established, NamesAccepted and Terminating. 3.2. API endpoints The following API endpoints are available: /apis/apiextensions.k8s.io/v1/customresourcedefinitions DELETE : delete collection of CustomResourceDefinition GET : list or watch objects of kind CustomResourceDefinition POST : create a CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions GET : watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} DELETE : delete a CustomResourceDefinition GET : read the specified CustomResourceDefinition PATCH : partially update the specified CustomResourceDefinition PUT : replace the specified CustomResourceDefinition /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} GET : watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status GET : read status of the specified CustomResourceDefinition PATCH : partially update status of the specified CustomResourceDefinition PUT : replace status of the specified CustomResourceDefinition 3.2.1. /apis/apiextensions.k8s.io/v1/customresourcedefinitions Table 3.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CustomResourceDefinition Table 3.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 3.3. Body parameters Parameter Type Description body DeleteOptions schema Table 3.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CustomResourceDefinition Table 3.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 3.6. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinitionList schema 401 - Unauthorized Empty HTTP method POST Description create a CustomResourceDefinition Table 3.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.8. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.9. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 202 - Accepted CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.2. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions Table 3.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead. Table 3.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.3. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name} Table 3.12. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CustomResourceDefinition Table 3.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. 
Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 3.15. Body parameters Parameter Type Description body DeleteOptions schema Table 3.16. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CustomResourceDefinition Table 3.17. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CustomResourceDefinition Table 3.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.19. Body parameters Parameter Type Description body Patch schema Table 3.20. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CustomResourceDefinition Table 3.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes.
The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.22. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.23. HTTP responses HTTP code Reponse body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty 3.2.4. /apis/apiextensions.k8s.io/v1/watch/customresourcedefinitions/{name} Table 3.24. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean sendInitialEvents=true may be set together with watch=true . In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with "k8s.io/initial-events-end": "true" annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When sendInitialEvents option is set, we require resourceVersionMatch option to also be set. The semantic of the watch request is as following: - resourceVersionMatch = NotOlderThan is interpreted as "data at least as new as the provided resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion at least as fresh as the one provided by the ListOptions. If resourceVersion is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - resourceVersionMatch set to any other value or unset Invalid error is returned. Defaults to true if resourceVersion="" or resourceVersion="0" (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. 
Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CustomResourceDefinition. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 3.26. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 3.2.5. /apis/apiextensions.k8s.io/v1/customresourcedefinitions/{name}/status Table 3.27. Global path parameters Parameter Type Description name string name of the CustomResourceDefinition Table 3.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CustomResourceDefinition Table 3.29. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CustomResourceDefinition Table 3.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 3.31. Body parameters Parameter Type Description body Patch schema Table 3.32. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CustomResourceDefinition Table 3.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request.
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 3.34. Body parameters Parameter Type Description body CustomResourceDefinition schema Table 3.35. HTTP responses HTTP code Response body 200 - OK CustomResourceDefinition schema 201 - Created CustomResourceDefinition schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/extension_apis/customresourcedefinition-apiextensions-k8s-io-v1
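A minimal command-line sketch of exercising the endpoints documented above; it is an added illustration rather than part of the reference tables, and the CRD name crontabs.stable.example.com is an assumed placeholder. oc get --raw sends the request to whichever cluster the current kubeconfig context points at.

# Read the status subresource of a specific CustomResourceDefinition
oc get --raw /apis/apiextensions.k8s.io/v1/customresourcedefinitions/crontabs.stable.example.com/status

# Watch a single CRD through the list endpoint, as the deprecation note for the per-name watch path suggests:
# pass watch=true together with a fieldSelector restricted to one item (the stream runs until interrupted)
oc get --raw '/apis/apiextensions.k8s.io/v1/customresourcedefinitions?watch=true&fieldSelector=metadata.name=crontabs.stable.example.com'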
Installation configuration | Installation configuration OpenShift Container Platform 4.12 Cluster-wide configuration during installations Red Hat OpenShift Documentation Team | [
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane --output butane",
"curl https://mirror.openshift.com/pub/openshift-v4/clients/butane/latest/butane-aarch64 --output butane",
"chmod +x butane",
"echo USDPATH",
"butane <butane_file>",
"variant: openshift version: 4.12.0 metadata: name: 99-worker-custom labels: machineconfiguration.openshift.io/role: worker openshift: kernel_arguments: - loglevel=7 storage: files: - path: /etc/chrony.conf mode: 0644 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-custom.bu -o ./99-worker-custom.yaml",
"oc create -f 99-worker-custom.yaml",
"./openshift-install create manifests --dir <installation_directory>",
"cat << EOF > 99-openshift-machineconfig-master-kargs.yaml apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-openshift-machineconfig-master-kargs spec: kernelArguments: - loglevel=7 EOF",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"cd kmods-via-containers/",
"sudo make install",
"sudo systemctl daemon-reload",
"cd .. ; git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"cd kvc-simple-kmod",
"cat simple-kmod.conf",
"KMOD_CONTAINER_BUILD_CONTEXT=\"https://github.com/kmods-via-containers/kvc-simple-kmod.git\" KMOD_CONTAINER_BUILD_FILE=Dockerfile.rhel KMOD_SOFTWARE_VERSION=dd1a7d4 KMOD_NAMES=\"simple-kmod simple-procfs-kmod\"",
"sudo make install",
"sudo kmods-via-containers build simple-kmod USD(uname -r)",
"sudo systemctl enable [email protected] --now",
"sudo systemctl status [email protected]",
"● [email protected] - Kmods Via Containers - simple-kmod Loaded: loaded (/etc/systemd/system/[email protected]; enabled; vendor preset: disabled) Active: active (exited) since Sun 2020-01-12 23:49:49 EST; 5s ago",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"dmesg | grep 'Hello world'",
"[ 6420.761332] Hello world from simple_kmod.",
"sudo cat /proc/simple-procfs-kmod",
"simple-procfs-kmod number = 0",
"sudo spkut 44",
"KVC: wrapper simple-kmod for 4.18.0-147.3.1.el8_1.x86_64 Running userspace wrapper using the kernel module container + podman run -i --rm --privileged simple-kmod-dd1a7d4:4.18.0-147.3.1.el8_1.x86_64 spkut 44 simple-procfs-kmod number = 0 simple-procfs-kmod number = 44",
"subscription-manager register",
"subscription-manager attach --auto",
"yum install podman make git -y",
"mkdir kmods; cd kmods",
"git clone https://github.com/kmods-via-containers/kmods-via-containers",
"git clone https://github.com/kmods-via-containers/kvc-simple-kmod",
"FAKEROOT=USD(mktemp -d)",
"cd kmods-via-containers",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd ../kvc-simple-kmod",
"make install DESTDIR=USD{FAKEROOT}/usr/local CONFDIR=USD{FAKEROOT}/etc/",
"cd .. && rm -rf kmod-tree && cp -Lpr USD{FAKEROOT} kmod-tree",
"variant: openshift version: 4.12.0 metadata: name: 99-simple-kmod labels: machineconfiguration.openshift.io/role: worker 1 storage: trees: - local: kmod-tree systemd: units: - name: [email protected] enabled: true",
"butane 99-simple-kmod.bu --files-dir . -o 99-simple-kmod.yaml",
"oc create -f 99-simple-kmod.yaml",
"lsmod | grep simple_",
"simple_procfs_kmod 16384 0 simple_kmod 16384 0",
"variant: openshift version: 4.12.0 metadata: name: worker-storage labels: machineconfiguration.openshift.io/role: worker boot_device: layout: x86_64 1 luks: tpm2: true 2 tang: 3 - url: http://tang1.example.com:7500 thumbprint: jwGN5tRFK-kF6pIX89ssF3khxxX - url: http://tang2.example.com:7500 thumbprint: VCJsvZFjBSIHSldw78rOrq7h2ZF threshold: 2 4 openshift: fips: true",
"sudo yum install clevis",
"clevis-encrypt-tang '{\"url\":\"http://tang.example.com:7500\"}' < /dev/null > /dev/null 1",
"The advertisement contains the following signing keys: PLjNyRdGw03zlRoGjQYMahSZGu9 1",
"./openshift-install create manifests --dir <installation_directory> 1",
"variant: openshift version: 4.12.0 metadata: name: worker-storage 1 labels: machineconfiguration.openshift.io/role: worker 2 boot_device: layout: x86_64 3 luks: 4 tpm2: true 5 tang: 6 - url: http://tang.example.com:7500 7 thumbprint: PLjNyRdGw03zlRoGjQYMahSZGu9 8 threshold: 1 9 mirror: 10 devices: 11 - /dev/sda - /dev/sdb openshift: fips: true 12",
"butane USDHOME/clusterconfig/worker-storage.bu -o <installation_directory>/openshift/99-worker-storage.yaml",
"oc debug node/compute-1",
"chroot /host",
"cryptsetup status root",
"/dev/mapper/root is active and is in use. type: LUKS2 1 cipher: aes-xts-plain64 2 keysize: 512 bits key location: keyring device: /dev/sda4 3 sector size: 512 offset: 32768 sectors size: 15683456 sectors mode: read/write",
"clevis luks list -d /dev/sda4 1",
"1: sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\"http://tang.example.com:7500\"}]}}' 1",
"cat /proc/mdstat",
"Personalities : [raid1] md126 : active raid1 sdb3[1] sda3[0] 1 393152 blocks super 1.0 [2/2] [UU] md127 : active raid1 sda4[0] sdb4[1] 2 51869632 blocks super 1.2 [2/2] [UU] unused devices: <none>",
"mdadm --detail /dev/md126",
"/dev/md126: Version : 1.0 Creation Time : Wed Jul 7 11:07:36 2021 Raid Level : raid1 1 Array Size : 393152 (383.94 MiB 402.59 MB) Used Dev Size : 393152 (383.94 MiB 402.59 MB) Raid Devices : 2 Total Devices : 2 Persistence : Superblock is persistent Update Time : Wed Jul 7 11:18:24 2021 State : clean 2 Active Devices : 2 3 Working Devices : 2 4 Failed Devices : 0 5 Spare Devices : 0 Consistency Policy : resync Name : any:md-boot 6 UUID : ccfa3801:c520e0b5:2bee2755:69043055 Events : 19 Number Major Minor RaidDevice State 0 252 3 0 active sync /dev/sda3 7 1 252 19 1 active sync /dev/sdb3 8",
"mount | grep /dev/md",
"/dev/md127 on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /etc type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /usr type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /sysroot type xfs (ro,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/containers/storage/overlay type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/1 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/2 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/3 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/4 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md127 on /var/lib/kubelet/pods/e5054ed5-f882-4d14-b599-99c050d4e0c0/volume-subpaths/etc/tuned/5 type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,prjquota) /dev/md126 on /boot type ext4 (rw,relatime,seclabel)",
"variant: openshift version: 4.12.0 metadata: name: raid1-storage labels: machineconfiguration.openshift.io/role: worker boot_device: mirror: devices: - /dev/sda - /dev/sdb storage: disks: - device: /dev/sda partitions: - label: root-1 size_mib: 25000 1 - label: var-1 - device: /dev/sdb partitions: - label: root-2 size_mib: 25000 2 - label: var-2 raid: - name: md-var level: raid1 devices: - /dev/disk/by-partlabel/var-1 - /dev/disk/by-partlabel/var-2 filesystems: - device: /dev/md/md-var path: /var format: xfs wipe_filesystem: true with_mount_unit: true",
"variant: openshift version: 4.12.0 metadata: name: raid1-alt-storage labels: machineconfiguration.openshift.io/role: worker storage: disks: - device: /dev/sdc wipe_table: true partitions: - label: data-1 - device: /dev/sdd wipe_table: true partitions: - label: data-2 raid: - name: md-var-lib-containers level: raid1 devices: - /dev/disk/by-partlabel/data-1 - /dev/disk/by-partlabel/data-2 filesystems: - device: /dev/md/md-var-lib-containers path: /var/lib/containers format: xfs wipe_filesystem: true with_mount_unit: true",
"butane USDHOME/clusterconfig/<butane_config>.bu -o <installation_directory>/openshift/<manifest_name>.yaml 1",
"variant: openshift version: 4.12.0 metadata: name: 99-worker-chrony 1 labels: machineconfiguration.openshift.io/role: worker 2 storage: files: - path: /etc/chrony.conf mode: 0644 3 overwrite: true contents: inline: | pool 0.rhel.pool.ntp.org iburst 4 driftfile /var/lib/chrony/drift makestep 1.0 3 rtcsync logdir /var/log/chrony",
"butane 99-worker-chrony.bu -o 99-worker-chrony.yaml",
"oc apply -f ./99-worker-chrony.yaml",
"apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: cgroupMode: \"v1\"",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: \"TechPreviewNoUpgrade\""
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/installation_configuration/index |
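A short verification sketch for the Butane-to-MachineConfig workflow captured in the commands above; it is an added illustration rather than part of the quoted procedure. The object name 99-worker-custom comes from the example Butane file, and the expectation that the pool eventually reports UPDATED as True reflects general MachineConfigPool behaviour.

# Confirm that the MachineConfig object created from the Butane output exists
oc get machineconfig 99-worker-custom

# Watch the worker pool roll the change out; UPDATED turns True once all worker nodes have rebooted with the new configuration
oc get machineconfigpool worker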
22.2. remote-viewer | The remote-viewer is a simple remote desktop display client that supports SPICE and VNC. It shares most of the features and limitations with virt-viewer . However, unlike virt-viewer , remote-viewer does not require libvirt to connect to the remote guest display. This means that remote-viewer can, for example, connect to a virtual machine on a remote host that does not provide permissions to interact with libvirt, or connect over SSH. To install the remote-viewer utility, run: Syntax The basic remote-viewer command-line syntax is as follows: To see the full list of options available for use with remote-viewer, see the remote-viewer man page. Connecting to a guest virtual machine If used without any options, remote-viewer lists guests that it can connect to on the default URI of the local system. To connect to a specific guest using remote-viewer, use the VNC/SPICE URI. For information about obtaining the URI, see Section 20.14, "Displaying a URI for Connection to a Graphical Display" . Example 22.3. Connecting to a guest display using SPICE Use the following to connect to a SPICE server on a machine called "testguest" that uses port 5900 for SPICE communication: Example 22.4. Connecting to a guest display using VNC Use the following to connect to a VNC server on a machine called testguest2 that uses port 5900 for VNC communication: Interface By default, the remote-viewer interface provides only the basic tools for interacting with the guest: Figure 22.2. Sample remote-viewer interface | [
"yum install virt-viewer",
"remote-viewer [OPTIONS] {guest-name|id|uuid}",
"remote-viewer spice:// testguest : 5900",
"remote-viewer vnc:// testguest2 : 5900"
] | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-graphic_user_interface_tools_for_guest_virtual_machine_management-remote_viewer |
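A brief sketch of one way to obtain the display URI referenced above before handing it to remote-viewer; this is an added illustration that assumes shell access to a host where libvirt manages the guest (something remote-viewer itself does not require), and the guest name testguest mirrors the examples in this section.

# Print the SPICE or VNC URI of a running guest, then connect to it with remote-viewer
virsh domdisplay testguest
remote-viewer spice://testguest:5900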
Backup and restore | Backup and restore OpenShift Container Platform 4.15 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team | [
"oc -n openshift-kube-apiserver-operator get secret kube-apiserver-to-kubelet-signer -o jsonpath='{.metadata.annotations.auth\\.openshift\\.io/certificate-not-after}'",
"2022-08-05T14:37:50Zuser@user:~ USD 1",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm cordon USD{node} ; done",
"ci-ln-mgdnf4b-72292-n547t-master-0 node/ci-ln-mgdnf4b-72292-n547t-master-0 cordoned ci-ln-mgdnf4b-72292-n547t-master-1 node/ci-ln-mgdnf4b-72292-n547t-master-1 cordoned ci-ln-mgdnf4b-72292-n547t-master-2 node/ci-ln-mgdnf4b-72292-n547t-master-2 cordoned ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl node/ci-ln-mgdnf4b-72292-n547t-worker-a-s7ntl cordoned ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k node/ci-ln-mgdnf4b-72292-n547t-worker-b-cmc9k cordoned ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn node/ci-ln-mgdnf4b-72292-n547t-worker-c-vcmtn cordoned",
"for node in USD(oc get nodes -l node-role.kubernetes.io/worker -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm drain USD{node} --delete-emptydir-data --ignore-daemonsets=true --timeout=15s --force ; done",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do oc debug node/USD{node} -- chroot /host shutdown -h 1; done 1",
"Starting pod/ip-10-0-130-169us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:17 UTC, use 'shutdown -c' to cancel. Removing debug pod Starting pod/ip-10-0-150-116us-east-2computeinternal-debug To use host binaries, run `chroot /host` Shutdown scheduled for Mon 2021-09-13 09:36:29 UTC, use 'shutdown -c' to cancel.",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 75m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 75m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 75m v1.28.5",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get nodes -l node-role.kubernetes.io/worker",
"NAME STATUS ROLES AGE VERSION ip-10-0-179-95.ec2.internal Ready worker 64m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 64m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 64m v1.28.5",
"oc get csr",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.15.0 True False False 59m cloud-credential 4.15.0 True False False 85m cluster-autoscaler 4.15.0 True False False 73m config-operator 4.15.0 True False False 73m console 4.15.0 True False False 62m csi-snapshot-controller 4.15.0 True False False 66m dns 4.15.0 True False False 76m etcd 4.15.0 True False False 76m",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION ip-10-0-168-251.ec2.internal Ready master 82m v1.28.5 ip-10-0-170-223.ec2.internal Ready master 82m v1.28.5 ip-10-0-179-95.ec2.internal Ready worker 70m v1.28.5 ip-10-0-182-134.ec2.internal Ready worker 70m v1.28.5 ip-10-0-211-16.ec2.internal Ready master 82m v1.28.5 ip-10-0-250-100.ec2.internal Ready worker 69m v1.28.5",
"for node in USD(oc get nodes -o jsonpath='{.items[*].metadata.name}'); do echo USD{node} ; oc adm uncordon USD{node} ; done",
"Requests specifying Server Side Encryption with Customer provided keys must provide the client calculated MD5 of the secret key.",
"found a podvolumebackup with status \"InProgress\" during the server starting, mark it as \"Failed\".",
"data path restore failed: Failed to run kopia restore: Unable to load snapshot : snapshot not found",
"The generated label name is too long.",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/restic-9cq4q 1/1 Running 0 94s pod/restic-m4lts 1/1 Running 0 94s pod/restic-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/restic 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"spec: configuration: nodeAgent: enable: true uploaderType: restic",
"oc get dpa -n openshift-adp -o yaml > dpa.orig.backup",
"spec: configuration: features: dataMover: enable: true credentialName: dm-credentials velero: defaultPlugins: - vsm - csi - openshift",
"spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - csi - openshift",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"velero backup create example-backup --include-namespaces mysql-persistent --snapshot-move-data=true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: example-backup namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - mysql-persistent storageLocation: dpa-sample-1 ttl: 720h0m0s",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - aws - azure - gcp",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: velero: defaultPlugins: - openshift - azure - gcp customPlugins: - name: custom-plugin-example image: quay.io/example-repo/custom-velero-plugin",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name> 1",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"oc get route s3 -n openshift-storage",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true 1 backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc 2 s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 3 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: velero.io/v1 kind: Restore metadata: name: test-restore 1 namespace: openshift-adp spec: backupName: <backup_name> 2 restorePVs: true namespaceMapping: <application_namespace>: test-restore-application 3",
"oc apply -f <restore_cr_filename>",
"oc describe restores.velero.io <restore_name> -n openshift-adp",
"oc project test-restore-application",
"oc get pvc,svc,deployment,secret,configmap",
"NAME STATUS VOLUME persistentvolumeclaim/mysql Bound pvc-9b3583db-...-14b86 NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/mysql ClusterIP 172....157 <none> 3306/TCP 2m56s service/todolist ClusterIP 172.....15 <none> 8000/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/mysql 0/1 1 0 2m55s NAME TYPE DATA AGE secret/builder-dockercfg-6bfmd kubernetes.io/dockercfg 1 2m57s secret/default-dockercfg-hz9kz kubernetes.io/dockercfg 1 2m57s secret/deployer-dockercfg-86cvd kubernetes.io/dockercfg 1 2m57s secret/mysql-persistent-sa-dockercfg-rgp9b kubernetes.io/dockercfg 1 2m57s NAME DATA AGE configmap/kube-root-ca.crt 1 2m57s configmap/openshift-service-ca.crt 1 2m57s",
"apiVersion: objectbucket.io/v1alpha1 kind: ObjectBucketClaim metadata: name: test-obc 1 namespace: openshift-adp spec: storageClassName: openshift-storage.noobaa.io generateBucketName: test-backup-bucket 2",
"oc create -f <obc_file_name>",
"oc extract --to=- cm/test-obc 1",
"BUCKET_NAME backup-c20...41fd BUCKET_PORT 443 BUCKET_REGION BUCKET_SUBREGION BUCKET_HOST s3.openshift-storage.svc",
"oc extract --to=- secret/test-obc",
"AWS_ACCESS_KEY_ID ebYR....xLNMc AWS_SECRET_ACCESS_KEY YXf...+NaCkdyC3QPym",
"[default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=cloud-credentials",
"oc get cm/openshift-service-ca.crt -o jsonpath='{.data.service-ca\\.crt}' | base64 -w0; echo",
"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0 ....gpwOHMwaG9CRmk5a3....FLS0tLS0K",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - aws - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"false\" 1 provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp caCert: <ca_cert> 3",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backup test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: oadp-backup namespace: openshift-adp spec: configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - legacy-aws 1 - openshift - csi defaultSnapshotMoveData: true backupLocations: - velero: config: profile: \"default\" region: noobaa s3Url: https://s3.openshift-storage.svc s3ForcePathStyle: \"true\" insecureSkipTLSVerify: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials objectStorage: bucket: <bucket_name> 2 prefix: oadp",
"oc apply -f <dpa_filename>",
"oc get dpa -o yaml",
"apiVersion: v1 items: - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp #...# spec: backupLocations: - velero: config: #...# status: conditions: - lastTransitionTime: \"20....9:54:02Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled kind: List metadata: resourceVersion: \"\"",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 3s 15s true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1",
"oc apply -f <backup_cr_filename>",
"oc describe backups.velero.io test-backup -n openshift-adp",
"Name: test-backup Namespace: openshift-adp ....# Status: Backup Item Operations Attempted: 1 Backup Item Operations Completed: 1 Completion Timestamp: 2024-09-25T10:17:01Z Expiration: 2024-10-25T10:16:31Z Format Version: 1.1.0 Hook Status: Phase: Completed Progress: Items Backed Up: 34 Total Items: 34 Start Timestamp: 2024-09-25T10:16:31Z Version: 1 Events: <none>",
"resources: mds: limits: cpu: \"3\" memory: 128Gi requests: cpu: \"3\" memory: 8Gi",
"BUCKET=<your_bucket>",
"REGION=<your_region>",
"aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1",
"aws iam create-user --user-name velero 1",
"cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF",
"aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json",
"aws iam create-access-key --user-name velero",
"{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"[backupStorage] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> [volumeSnapshot] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> prefix: <prefix> config: region: us-east-1 profile: \"backupStorage\" credential: key: cloud name: cloud-credentials snapshotLocations: - velero: provider: aws config: region: us-west-2 profile: \"volumeSnapshot\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: BackupStorageLocation metadata: name: default namespace: openshift-adp spec: provider: aws 1 objectStorage: bucket: <bucket_name> 2 prefix: <bucket_prefix> 3 credential: 4 key: cloud 5 name: cloud-credentials 6 config: region: <bucket_region> 7 s3ForcePathStyle: \"true\" 8 s3Url: <s3_url> 9 publicUrl: <public_s3_url> 10 serverSideEncryption: AES256 11 kmsKeyId: \"50..c-4da1-419f-a16e-ei...49f\" 12 customerKeyEncryptionFile: \"/credentials/customer-key\" 13 signatureVersion: \"1\" 14 profile: \"default\" 15 insecureSkipTLSVerify: \"true\" 16 enableSharedConfig: \"true\" 17 tagging: \"\" 18 checksumAlgorithm: \"CRC32\" 19",
"snapshotLocations: - velero: config: profile: default region: <region> provider: aws",
"dd if=/dev/urandom bs=1 count=32 > sse.key",
"cat sse.key | base64 > sse_encoded.key",
"ln -s sse_encoded.key customer-key",
"oc create secret generic cloud-credentials --namespace openshift-adp --from-file cloud=<path>/openshift_aws_credentials,customer-key=<path>/sse_encoded.key",
"apiVersion: v1 data: cloud: W2Rfa2V5X2lkPSJBS0lBVkJRWUIyRkQ0TlFHRFFPQiIKYXdzX3NlY3JldF9hY2Nlc3Nfa2V5P<snip>rUE1mNWVSbTN5K2FpeWhUTUQyQk1WZHBOIgo= customer-key: v+<snip>TFIiq6aaXPbj8dhos= kind: Secret",
"spec: backupLocations: - velero: config: customerKeyEncryptionFile: /credentials/customer-key profile: default",
"echo \"encrypt me please\" > test.txt",
"aws s3api put-object --bucket <bucket> --key test.txt --body test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256",
"s3cmd get s3://<bucket>/test.txt test.txt",
"aws s3api get-object --bucket <bucket> --key test.txt --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 downloaded.txt",
"cat downloaded.txt",
"encrypt me please",
"aws s3api get-object --bucket <bucket> --key velero/backups/mysql-persistent-customerkeyencryptionfile4/mysql-persistent-customerkeyencryptionfile4.tar.gz --sse-customer-key fileb://sse.key --sse-customer-algorithm AES256 --debug velero_download.tar.gz",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - openshift 2 - aws resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 8 prefix: <prefix> 9 config: region: <region> profile: \"default\" s3ForcePathStyle: \"true\" 10 s3Url: <s3_url> 11 credential: key: cloud name: cloud-credentials 12 snapshotLocations: 13 - name: default velero: provider: aws config: region: <region> 14 profile: \"default\" credential: key: cloud name: cloud-credentials 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: checksumAlgorithm: \"\" 1 insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: velero: defaultPlugins: - openshift - aws - csi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"ibmcloud plugin install cos -f",
"BUCKET=<bucket_name>",
"REGION=<bucket_region> 1",
"ibmcloud resource group-create <resource_group_name>",
"ibmcloud target -g <resource_group_name>",
"ibmcloud target",
"API endpoint: https://cloud.ibm.com Region: User: test-user Account: Test Account (fb6......e95) <-> 2...122 Resource group: Default",
"RESOURCE_GROUP=<resource_group> 1",
"ibmcloud resource service-instance-create <service_instance_name> \\ 1 <service_name> \\ 2 <service_plan> \\ 3 <region_name> 4",
"ibmcloud resource service-instance-create test-service-instance cloud-object-storage \\ 1 standard global -d premium-global-deployment 2",
"SERVICE_INSTANCE_ID=USD(ibmcloud resource service-instance test-service-instance --output json | jq -r '.[0].id')",
"ibmcloud cos bucket-create \\// --bucket USDBUCKET \\// --ibm-service-instance-id USDSERVICE_INSTANCE_ID \\// --region USDREGION",
"ibmcloud resource service-key-create test-key Writer --instance-name test-service-instance --parameters {\\\"HMAC\\\":true}",
"cat > credentials-velero << __EOF__ [default] aws_access_key_id=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.access_key_id') aws_secret_access_key=USD(ibmcloud resource service-key test-key -o json | jq -r '.[0].credentials.cos_hmac_keys.secret_access_key') __EOF__",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: namespace: openshift-adp name: <dpa_name> spec: configuration: velero: defaultPlugins: - openshift - aws - csi backupLocations: - velero: provider: aws 1 default: true objectStorage: bucket: <bucket_name> 2 prefix: velero config: insecureSkipTLSVerify: 'true' profile: default region: <region_name> 3 s3ForcePathStyle: 'true' s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials 5",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-azure -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: resourceGroup: <azure_resource_group> storageAccount: <azure_storage_account_id> subscriptionId: <azure_subscription_id> storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: <custom_secret> 1 provider: azure default: true objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" provider: azure",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - azure - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: config: resourceGroup: <azure_resource_group> 8 storageAccount: <azure_storage_account_id> 9 subscriptionId: <azure_subscription_id> 10 storageAccountKeyEnvVar: AZURE_STORAGE_ACCOUNT_ACCESS_KEY credential: key: cloud name: cloud-credentials-azure 11 provider: azure default: true objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13 snapshotLocations: 14 - velero: config: resourceGroup: <azure_resource_group> subscriptionId: <azure_subscription_id> incremental: \"true\" name: default provider: azure credential: key: cloud name: cloud-credentials-azure 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"gcloud auth login",
"BUCKET=<bucket> 1",
"gsutil mb gs://USDBUCKET/",
"PROJECT_ID=USD(gcloud config get-value project)",
"gcloud iam service-accounts create velero --display-name \"Velero service account\"",
"gcloud iam service-accounts list",
"SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')",
"ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get storage.objects.create storage.objects.delete storage.objects.get storage.objects.list iam.serviceAccounts.signBlob )",
"gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"",
"gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server",
"gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}",
"gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials-gcp -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: gcp default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix> snapshotLocations: - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"mkdir -p oadp-credrequest",
"echo 'apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: oadp-operator-credentials namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec permissions: - compute.disks.get - compute.disks.create - compute.disks.createSnapshot - compute.snapshots.get - compute.snapshots.create - compute.snapshots.useReadOnly - compute.snapshots.delete - compute.zones.get - storage.objects.create - storage.objects.delete - storage.objects.get - storage.objects.list - iam.serviceAccounts.signBlob skipServiceCheck: true secretRef: name: cloud-credentials-gcp namespace: <OPERATOR_INSTALL_NS> serviceAccountNames: - velero ' > oadp-credrequest/credrequest.yaml",
"ccoctl gcp create-service-accounts --name=<name> --project=<gcp_project_id> --credentials-requests-dir=oadp-credrequest --workload-identity-pool=<pool_id> --workload-identity-provider=<provider_id>",
"oc create namespace <OPERATOR_INSTALL_NS>",
"oc apply -f manifests/openshift-adp-cloud-credentials-gcp-credentials.yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: <OPERATOR_INSTALL_NS> 1 spec: configuration: velero: defaultPlugins: - gcp - openshift 2 resourceTimeout: 10m 3 nodeAgent: 4 enable: true 5 uploaderType: kopia 6 podConfig: nodeSelector: <node_selector> 7 backupLocations: - velero: provider: gcp default: true credential: key: cloud 8 name: cloud-credentials-gcp 9 objectStorage: bucket: <bucket_name> 10 prefix: <prefix> 11 snapshotLocations: 12 - velero: provider: gcp default: true config: project: <project> snapshotLocation: us-west1 13 credential: key: cloud name: cloud-credentials-gcp 14 backupImages: true 15",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"cat << EOF > ./credentials-velero [default] aws_access_key_id=<AWS_ACCESS_KEY_ID> aws_secret_access_key=<AWS_SECRET_ACCESS_KEY> EOF",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: config: profile: \"default\" region: <region_name> 1 s3Url: <url> insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: <custom_secret> 2 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - openshift 3 resourceTimeout: 10m 4 nodeAgent: 5 enable: true 6 uploaderType: kopia 7 podConfig: nodeSelector: <node_selector> 8 backupLocations: - velero: config: profile: \"default\" region: <region_name> 9 s3Url: <url> 10 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" provider: aws default: true credential: key: cloud name: cloud-credentials 11 objectStorage: bucket: <bucket_name> 12 prefix: <prefix> 13",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero",
"oc create secret generic <custom_secret> -n openshift-adp --from-file cloud=credentials-velero",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp spec: backupLocations: - velero: provider: <provider> default: true credential: key: cloud name: <custom_secret> 1 objectStorage: bucket: <bucket_name> prefix: <prefix>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket> prefix: <prefix> caCert: <base64_encoded_cert_string> 1 config: insecureSkipTLSVerify: \"false\" 2",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"velero version Client: Version: v1.12.1-OADP Git commit: - Server: Version: v1.12.1-OADP",
"CA_CERT=USD(oc -n openshift-adp get dataprotectionapplications.oadp.openshift.io <dpa-name> -o jsonpath='{.spec.backupLocations[0].velero.objectStorage.caCert}') [[ -n USDCA_CERT ]] && echo \"USDCA_CERT\" | base64 -d | oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"cat > /tmp/your-cacert.txt\" || echo \"DPA BSL has no caCert\"",
"velero describe backup <backup_name> --details --cacert /tmp/<your_cacert>.txt",
"velero backup logs <backup_name> --cacert /tmp/<your_cacert.txt>",
"oc exec -n openshift-adp -i deploy/velero -c velero -- bash -c \"ls /tmp/your-cacert.txt\" /tmp/your-cacert.txt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - aws 2 - kubevirt 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: velero: defaultPlugins: - openshift - csi 1",
"configuration: nodeAgent: enable: false 1 uploaderType: kopia",
"configuration: nodeAgent: enable: true 1 uploaderType: kopia",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> namespace: openshift-adp 1 spec: configuration: velero: defaultPlugins: - kubevirt 2 - gcp 3 - csi 4 - openshift 5 resourceTimeout: 10m 6 nodeAgent: 7 enable: true 8 uploaderType: kopia 9 podConfig: nodeSelector: <node_selector> 10 backupLocations: - velero: provider: gcp 11 default: true credential: key: cloud name: <default_secret> 12 objectStorage: bucket: <bucket_name> 13 prefix: <prefix> 14",
"oc get all -n openshift-adp",
"NAME READY STATUS RESTARTS AGE pod/oadp-operator-controller-manager-67d9494d47-6l8z8 2/2 Running 0 2m8s pod/node-agent-9cq4q 1/1 Running 0 94s pod/node-agent-m4lts 1/1 Running 0 94s pod/node-agent-pv4kr 1/1 Running 0 95s pod/velero-588db7f655-n842v 1/1 Running 0 95s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/oadp-operator-controller-manager-metrics-service ClusterIP 172.30.70.140 <none> 8443/TCP 2m8s service/openshift-adp-velero-metrics-svc ClusterIP 172.30.10.0 <none> 8085/TCP 8h NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE daemonset.apps/node-agent 3 3 3 3 3 <none> 96s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/oadp-operator-controller-manager 1/1 1 1 2m9s deployment.apps/velero 1/1 1 1 96s NAME DESIRED CURRENT READY AGE replicaset.apps/oadp-operator-controller-manager-67d9494d47 1 1 1 2m9s replicaset.apps/velero-588db7f655 1 1 1 96s",
"oc get dpa dpa-sample -n openshift-adp -o jsonpath='{.status}'",
"{\"conditions\":[{\"lastTransitionTime\":\"2023-10-27T01:23:57Z\",\"message\":\"Reconcile complete\",\"reason\":\"Complete\",\"status\":\"True\",\"type\":\"Reconciled\"}]}",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT dpa-sample-1 Available 1s 3d16h true",
"apiVersion: velero.io/v1 kind: Backup metadata: name: vmbackupsingle namespace: openshift-adp spec: snapshotMoveData: true includedNamespaces: - <vm_namespace> 1 labelSelector: matchLabels: app: <vm_app_name> 2 storageLocation: <backup_storage_location_name> 3",
"oc apply -f <backup_cr_file_name> 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: vmrestoresingle namespace: openshift-adp spec: backupName: vmbackupsingle 1 restorePVs: true",
"oc apply -f <restore_cr_file_name> 1",
"oc label vm <vm_name> app=<vm_name> -n openshift-adp",
"apiVersion: velero.io/v1 kind: Restore metadata: name: singlevmrestore namespace: openshift-adp spec: backupName: multiplevmbackup restorePVs: true LabelSelectors: - matchLabels: kubevirt.io/created-by: <datavolume_uid> 1 - matchLabels: app: <vm_name> 2",
"oc apply -f <restore_cr_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: restic velero: client-burst: 500 1 client-qps: 300 2 defaultPlugins: - openshift - aws - kubevirt",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-dpa namespace: openshift-adp spec: backupLocations: - name: default velero: config: insecureSkipTLSVerify: \"true\" profile: \"default\" region: <bucket_region> s3ForcePathStyle: \"true\" s3Url: <bucket_url> credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - kubevirt - csi imagePullPolicy: Never 1",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # backupLocations: - name: aws 1 velero: provider: aws default: true 2 objectStorage: bucket: <bucket_name> 3 prefix: <prefix> 4 config: region: <region_name> 5 profile: \"default\" credential: key: cloud name: cloud-credentials 6 - name: odf 7 velero: provider: aws default: false objectStorage: bucket: <bucket_name> prefix: <prefix> config: profile: \"default\" region: <region_name> s3Url: <url> 8 insecureSkipTLSVerify: \"true\" s3ForcePathStyle: \"true\" credential: key: cloud name: <custom_secret_name_odf> 9 #",
"apiVersion: velero.io/v1 kind: Backup spec: includedNamespaces: - <namespace> 1 storageLocation: <backup_storage_location> 2 defaultVolumesToFsBackup: true",
"oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=<aws_credentials_file_name> 1",
"oc create secret generic mcg-secret -n openshift-adp --from-file cloud=<MCG_credentials_file_name> 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: two-bsl-dpa namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 1 credential: key: cloud name: cloud-credentials default: true objectStorage: bucket: <bucket_name> 2 prefix: velero provider: aws - name: mcg velero: config: insecureSkipTLSVerify: \"true\" profile: noobaa region: <region_name> 3 s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: mcg-secret 5 objectStorage: bucket: <bucket_name_mcg> 6 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"oc get bsl",
"NAME PHASE LAST VALIDATED AGE DEFAULT aws Available 5s 3m28s true mcg Available 5s 3m28s",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup1 namespace: openshift-adp spec: includedNamespaces: - <mysql_namespace> 1 storageLocation: mcg 2 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # snapshotLocations: - velero: config: profile: default region: <region> 1 credential: key: cloud name: cloud-credentials provider: aws - velero: config: profile: default region: <region> credential: key: cloud name: <custom_credential> 2 provider: aws #",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: hooks: {} includedNamespaces: - <namespace> 1 includedResources: [] 2 excludedResources: [] 3 storageLocation: <velero-sample-1> 4 ttl: 720h0m0s 5 labelSelector: 6 matchLabels: app: <label_1> app: <label_2> app: <label_3> orLabelSelectors: 7 - matchLabels: app: <label_1> app: <label_2> app: <label_3>",
"oc get backups.velero.io -n openshift-adp <backup> -o jsonpath='{.status.phase}'",
"apiVersion: snapshot.storage.k8s.io/v1 kind: VolumeSnapshotClass metadata: name: <volume_snapshot_class_name> labels: velero.io/csi-volumesnapshot-class: \"true\" 1 annotations: snapshot.storage.kubernetes.io/is-default-class: true 2 driver: <csi_driver> deletionPolicy: <deletion_policy_type> 3",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> labels: velero.io/storage-location: default namespace: openshift-adp spec: defaultVolumesToFsBackup: true 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: 2 - <namespace> includedResources: [] - pods 3 excludedResources: [] 4 labelSelector: 5 matchLabels: app: velero component: server pre: 6 - exec: container: <container> 7 command: - /bin/uname 8 - -a onError: Fail 9 timeout: 30s 10 post: 11",
"oc get backupStorageLocations -n openshift-adp",
"NAMESPACE NAME PHASE LAST VALIDATED AGE DEFAULT openshift-adp velero-sample-1 Available 11s 31m",
"cat << EOF | oc apply -f - apiVersion: velero.io/v1 kind: Schedule metadata: name: <schedule> namespace: openshift-adp spec: schedule: 0 7 * * * 1 template: hooks: {} includedNamespaces: - <namespace> 2 storageLocation: <velero-sample-1> 3 defaultVolumesToFsBackup: true 4 ttl: 720h0m0s 5 EOF",
"schedule: \"*/10 * * * *\"",
"oc get schedule -n openshift-adp <schedule> -o jsonpath='{.status.phase}'",
"apiVersion: velero.io/v1 kind: DeleteBackupRequest metadata: name: deletebackuprequest namespace: openshift-adp spec: backupName: <backup_name> 1",
"oc apply -f <deletebackuprequest_cr_filename>",
"velero backup delete <backup_name> -n openshift-adp 1",
"pod/repo-maintain-job-173...2527-2nbls 0/1 Completed 0 168m pod/repo-maintain-job-173....536-fl9tm 0/1 Completed 0 108m pod/repo-maintain-job-173...2545-55ggx 0/1 Completed 0 48m",
"not due for full maintenance cycle until 2024-00-00 18:29:4",
"oc get backuprepositories.velero.io -n openshift-adp",
"oc delete backuprepository <backup_repository_name> -n openshift-adp 1",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true uploaderType: kopia",
"velero backup create <backup-name> --snapshot-volumes false 1",
"velero describe backup <backup_name> --details 1",
"velero restore create --from-backup <backup-name> 1",
"velero describe restore <restore_name> --details 1",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: backupName: <backup> 1 includedResources: [] 2 excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io restorePVs: true 3",
"oc get restores.velero.io -n openshift-adp <restore> -o jsonpath='{.status.phase}'",
"oc get all -n <namespace> 1",
"bash dc-restic-post-restore.sh -> dc-post-restore.sh",
"#!/bin/bash set -e if sha256sum exists, use it to check the integrity of the file if command -v sha256sum >/dev/null 2>&1; then CHECKSUM_CMD=\"sha256sum\" else CHECKSUM_CMD=\"shasum -a 256\" fi label_name () { if [ \"USD{#1}\" -le \"63\" ]; then echo USD1 return fi sha=USD(echo -n USD1|USDCHECKSUM_CMD) echo \"USD{1:0:57}USD{sha:0:6}\" } if [[ USD# -ne 1 ]]; then echo \"usage: USD{BASH_SOURCE} restore-name\" exit 1 fi echo \"restore: USD1\" label=USD(label_name USD1) echo \"label: USDlabel\" echo Deleting disconnected restore pods delete pods --all-namespaces -l oadp.openshift.io/disconnected-from-dc=USDlabel for dc in USD(oc get dc --all-namespaces -l oadp.openshift.io/replicas-modified=USDlabel -o jsonpath='{range .items[*]}{.metadata.namespace}{\",\"}{.metadata.name}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-replicas}{\",\"}{.metadata.annotations.oadp\\.openshift\\.io/original-paused}{\"\\n\"}') do IFS=',' read -ra dc_arr <<< \"USDdc\" if [ USD{#dc_arr[0]} -gt 0 ]; then echo Found deployment USD{dc_arr[0]}/USD{dc_arr[1]}, setting replicas: USD{dc_arr[2]}, paused: USD{dc_arr[3]} cat <<EOF | oc patch dc -n USD{dc_arr[0]} USD{dc_arr[1]} --patch-file /dev/stdin spec: replicas: USD{dc_arr[2]} paused: USD{dc_arr[3]} EOF fi done",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore> namespace: openshift-adp spec: hooks: resources: - name: <hook_name> includedNamespaces: - <namespace> 1 excludedNamespaces: - <namespace> includedResources: - pods 2 excludedResources: [] labelSelector: 3 matchLabels: app: velero component: server postHooks: - init: initContainers: - name: restore-hook-init image: alpine:latest volumeMounts: - mountPath: /restores/pvc1-vm name: pvc1-vm command: - /bin/ash - -c timeout: 4 - exec: container: <container> 5 command: - /bin/bash 6 - -c - \"psql < /backup/backup.sql\" waitTimeout: 5m 7 execTimeout: 1m 8 onError: Continue 9",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --exclude-resources=deployment.apps",
"velero restore create <RESTORE_NAME> --from-backup <BACKUP_NAME> --include-resources=deployment.apps",
"export CLUSTER_NAME=my-cluster 1 export ROSA_CLUSTER_ID=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .id) export REGION=USD(rosa describe cluster -c USD{CLUSTER_NAME} --output json | jq -r .region.id) export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export CLUSTER_VERSION=USD(rosa describe cluster -c USD{CLUSTER_NAME} -o json | jq -r .version.raw_id | cut -f -2 -d '.') export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\" export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH} echo \"Cluster ID: USD{ROSA_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='RosaOadpVer1'].{ARN:Arn}\" --output text) 1",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json 1 { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUploads\", \"s3:ListMultipartUploadParts\", \"s3:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name \"RosaOadpVer1\" --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-oadp Key=operator_name,Value=openshift-oadp --output text) fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\":2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=rosa_cluster_id,Value=USD{ROSA_CLUSTER_ID} Key=rosa_openshift_version,Value=USD{CLUSTER_VERSION} Key=rosa_role_prefix,Value=ManagedOpenShift Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=openshift-oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token region = <aws_region> 1 EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi nodeAgent: 2 enable: false uploaderType: kopia 3 EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc get sub -o yaml redhat-oadp-operator",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: annotations: creationTimestamp: \"2025-01-15T07:18:31Z\" generation: 1 labels: operators.coreos.com/redhat-oadp-operator.openshift-adp: \"\" name: redhat-oadp-operator namespace: openshift-adp resourceVersion: \"77363\" uid: 5ba00906-5ad2-4476-ae7b-ffa90986283d spec: channel: stable-1.4 config: env: - name: ROLEARN value: arn:aws:iam::11111111:role/wrong-role-arn 1 installPlanApproval: Manual name: redhat-oadp-operator source: prestage-operators sourceNamespace: openshift-marketplace startingCSV: oadp-operator.v1.4.2",
"oc patch subscription redhat-oadp-operator -p '{\"spec\": {\"config\": {\"env\": [{\"name\": \"ROLEARN\", \"value\": \"<role_arn>\"}]}}}' --type='merge'",
"oc get secret cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d",
"[default] sts_regional_endpoints = regional role_arn = arn:aws:iam::160.....6956:role/oadprosa.....8wlf web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: test-rosa-dpa namespace: openshift-adp spec: backupLocations: - bucket: config: region: us-east-1 cloudStorageRef: name: <cloud_storage> 1 credential: name: cloud-credentials key: credentials prefix: velero default: true configuration: velero: defaultPlugins: - aws - openshift",
"oc create -f <dpa_manifest_file>",
"oc get dpa -n openshift-adp -o yaml",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication status: conditions: - lastTransitionTime: \"2023-07-31T04:48:12Z\" message: Reconcile complete reason: Complete status: \"True\" type: Reconciled",
"oc get backupstoragelocations.velero.io -n openshift-adp",
"NAME PHASE LAST VALIDATED AGE DEFAULT ts-dpa-1 Available 3s 6s true",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"export CLUSTER_NAME= <AWS_cluster_name> 1",
"export CLUSTER_VERSION=USD(oc get clusterversion version -o jsonpath='{.status.desired.version}{\"\\n\"}') export AWS_CLUSTER_ID=USD(oc get clusterversion version -o jsonpath='{.spec.clusterID}{\"\\n\"}') export OIDC_ENDPOINT=USD(oc get authentication.config.openshift.io cluster -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||') export REGION=USD(oc get infrastructures cluster -o jsonpath='{.status.platformStatus.aws.region}' --allow-missing-template-keys=false || echo us-east-2) export AWS_ACCOUNT_ID=USD(aws sts get-caller-identity --query Account --output text) export ROLE_NAME=\"USD{CLUSTER_NAME}-openshift-oadp-aws-cloud-credentials\"",
"export SCRATCH=\"/tmp/USD{CLUSTER_NAME}/oadp\" mkdir -p USD{SCRATCH}",
"echo \"Cluster ID: USD{AWS_CLUSTER_ID}, Region: USD{REGION}, OIDC Endpoint: USD{OIDC_ENDPOINT}, AWS Account ID: USD{AWS_ACCOUNT_ID}\"",
"export POLICY_NAME=\"OadpVer1\" 1",
"POLICY_ARN=USD(aws iam list-policies --query \"Policies[?PolicyName=='USDPOLICY_NAME'].{ARN:Arn}\" --output text)",
"if [[ -z \"USD{POLICY_ARN}\" ]]; then cat << EOF > USD{SCRATCH}/policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"s3:CreateBucket\", \"s3:DeleteBucket\", \"s3:PutBucketTagging\", \"s3:GetBucketTagging\", \"s3:PutEncryptionConfiguration\", \"s3:GetEncryptionConfiguration\", \"s3:PutLifecycleConfiguration\", \"s3:GetLifecycleConfiguration\", \"s3:GetBucketLocation\", \"s3:ListBucket\", \"s3:GetObject\", \"s3:PutObject\", \"s3:DeleteObject\", \"s3:ListBucketMultipartUploads\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\", \"ec2:DescribeSnapshots\", \"ec2:DescribeVolumes\", \"ec2:DescribeVolumeAttribute\", \"ec2:DescribeVolumesModifications\", \"ec2:DescribeVolumeStatus\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" } ]} EOF POLICY_ARN=USD(aws iam create-policy --policy-name USDPOLICY_NAME --policy-document file:///USD{SCRATCH}/policy.json --query Policy.Arn --tags Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --output text) 1 fi",
"echo USD{POLICY_ARN}",
"cat <<EOF > USD{SCRATCH}/trust-policy.json { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{AWS_ACCOUNT_ID}:oidc-provider/USD{OIDC_ENDPOINT}\" }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_ENDPOINT}:sub\": [ \"system:serviceaccount:openshift-adp:openshift-adp-controller-manager\", \"system:serviceaccount:openshift-adp:velero\"] } } }] } EOF",
"ROLE_ARN=USD(aws iam create-role --role-name \"USD{ROLE_NAME}\" --assume-role-policy-document file://USD{SCRATCH}/trust-policy.json --tags Key=cluster_id,Value=USD{AWS_CLUSTER_ID} Key=openshift_version,Value=USD{CLUSTER_VERSION} Key=operator_namespace,Value=openshift-adp Key=operator_name,Value=oadp --query Role.Arn --output text)",
"echo USD{ROLE_ARN}",
"aws iam attach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn USD{POLICY_ARN}",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_sample> spec: configuration: velero: podConfig: nodeSelector: <node_selector> 1 resourceAllocations: 2 limits: cpu: \"1\" memory: 1024Mi requests: cpu: 200m memory: 256Mi",
"cat <<EOF > USD{SCRATCH}/credentials [default] role_arn = USD{ROLE_ARN} web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token EOF",
"oc create namespace openshift-adp",
"oc -n openshift-adp create secret generic cloud-credentials --from-file=USD{SCRATCH}/credentials",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: CloudStorage metadata: name: USD{CLUSTER_NAME}-oadp namespace: openshift-adp spec: creationSecret: key: credentials name: cloud-credentials enableSharedConfig: true name: USD{CLUSTER_NAME}-oadp provider: aws region: USDREGION EOF",
"oc get pvc -n <namespace>",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE applog Bound pvc-351791ae-b6ab-4e8b-88a4-30f73caf5ef8 1Gi RWO gp3-csi 4d19h mysql Bound pvc-16b8e009-a20a-4379-accc-bc81fedd0621 1Gi RWO gp3-csi 4d19h",
"oc get storageclass",
"NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE gp2 kubernetes.io/aws-ebs Delete WaitForFirstConsumer true 4d21h gp2-csi ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3 ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h gp3-csi (default) ebs.csi.aws.com Delete WaitForFirstConsumer true 4d21h",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws - csi restic: enable: false EOF",
"cat << EOF | oc create -f - apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: USD{CLUSTER_NAME}-dpa namespace: openshift-adp spec: backupImages: true 1 features: dataMover: enable: false backupLocations: - bucket: cloudStorageRef: name: USD{CLUSTER_NAME}-oadp credential: key: credentials name: cloud-credentials prefix: velero default: true config: region: USD{REGION} configuration: velero: defaultPlugins: - openshift - aws nodeAgent: 2 enable: false uploaderType: restic snapshotLocations: - velero: config: credentialsFile: /tmp/credentials/openshift-adp/cloud-credentials-credentials 3 enableSharedConfig: \"true\" 4 profile: default 5 region: USD{REGION} 6 provider: aws EOF",
"nodeAgent: enable: false uploaderType: restic",
"restic: enable: false",
"oc create namespace hello-world",
"oc new-app -n hello-world --image=docker.io/openshift/hello-openshift",
"oc expose service/hello-openshift -n hello-world",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Backup metadata: name: hello-world namespace: openshift-adp spec: includedNamespaces: - hello-world storageLocation: USD{CLUSTER_NAME}-dpa-1 ttl: 720h0m0s EOF",
"watch \"oc -n openshift-adp get backup hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:20:44Z\", \"expiration\": \"2022-10-07T22:20:22Z\", \"formatVersion\": \"1.1.0\", \"phase\": \"Completed\", \"progress\": { \"itemsBackedUp\": 58, \"totalItems\": 58 }, \"startTimestamp\": \"2022-09-07T22:20:22Z\", \"version\": 1 }",
"oc delete ns hello-world",
"cat << EOF | oc create -f - apiVersion: velero.io/v1 kind: Restore metadata: name: hello-world namespace: openshift-adp spec: backupName: hello-world EOF",
"watch \"oc -n openshift-adp get restore hello-world -o json | jq .status\"",
"{ \"completionTimestamp\": \"2022-09-07T22:25:47Z\", \"phase\": \"Completed\", \"progress\": { \"itemsRestored\": 38, \"totalItems\": 38 }, \"startTimestamp\": \"2022-09-07T22:25:28Z\", \"warnings\": 9 }",
"oc -n hello-world get pods",
"NAME READY STATUS RESTARTS AGE hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s",
"curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`",
"Hello OpenShift!",
"oc delete ns hello-world",
"oc -n openshift-adp delete dpa USD{CLUSTER_NAME}-dpa",
"oc -n openshift-adp delete cloudstorage USD{CLUSTER_NAME}-oadp",
"oc -n openshift-adp patch cloudstorage USD{CLUSTER_NAME}-oadp -p '{\"metadata\":{\"finalizers\":null}}' --type=merge",
"oc -n openshift-adp delete subscription oadp-operator",
"oc delete ns openshift-adp",
"oc delete backups.velero.io hello-world",
"velero backup delete hello-world",
"for CRD in `oc get crds | grep velero | awk '{print USD1}'`; do oc delete crd USDCRD; done",
"aws s3 rm s3://USD{CLUSTER_NAME}-oadp --recursive",
"aws s3api delete-bucket --bucket USD{CLUSTER_NAME}-oadp",
"aws iam detach-role-policy --role-name \"USD{ROLE_NAME}\" --policy-arn \"USD{POLICY_ARN}\"",
"aws iam delete-role --role-name \"USD{ROLE_NAME}\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa_sample namespace: openshift-adp spec: configuration: velero: defaultPlugins: - openshift - aws - csi resourceTimeout: 10m nodeAgent: enable: true uploaderType: kopia backupLocations: - name: default velero: provider: aws default: true objectStorage: bucket: <bucket_name> 1 prefix: <prefix> 2 config: region: <region> 3 profile: \"default\" s3ForcePathStyle: \"true\" s3Url: <s3_url> 4 credential: key: cloud name: cloud-credentials",
"oc create -f dpa.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-install-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale 1 includedResources: - operatorgroups - subscriptions - namespaces itemOperationTimeout: 1h0m0s snapshotMoveData: false ttl: 720h0m0s",
"oc create -f backup.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-secrets namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - secrets itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc create -f backup-secret.yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: operator-resources-apim namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: false includedNamespaces: - threescale includedResources: - apimanagers itemOperationTimeout: 1h0m0s snapshotMoveData: false snapshotVolumes: false storageLocation: ts-dpa-1 ttl: 720h0m0s volumeSnapshotLocations: - ts-dpa-1",
"oc create -f backup-apimanager.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: example-claim namespace: threescale spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: gp3-csi volumeMode: Filesystem",
"oc create -f ts_pvc.yml",
"oc edit deployment system-mysql -n threescale",
"volumeMounts: - name: example-claim mountPath: /var/lib/mysqldump/data - name: mysql-storage mountPath: /var/lib/mysql/data - name: mysql-extra-conf mountPath: /etc/my-extra.d - name: mysql-main-conf mountPath: /etc/my-extra serviceAccount: amp volumes: - name: example-claim persistentVolumeClaim: claimName: example-claim 1",
"apiVersion: velero.io/v1 kind: Backup metadata: name: mysql-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true hooks: resources: - name: dumpdb pre: - exec: command: - /bin/sh - -c - mysqldump -u USDMYSQL_USER --password=USDMYSQL_PASSWORD system --no-tablespaces > /var/lib/mysqldump/data/dump.sql 1 container: system-mysql onError: Fail timeout: 5m includedNamespaces: 2 - threescale includedResources: - deployment - pods - replicationControllers - persistentvolumeclaims - persistentvolumes itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component_element: mysql snapshotMoveData: false ttl: 720h0m0s",
"oc create -f mysql.yaml",
"oc get backups.velero.io mysql-backup",
"NAME STATUS CREATED NAMESPACE POD VOLUME UPLOADER TYPE STORAGE LOCATION AGE mysql-backup-4g7qn Completed 30s threescale system-mysql-2-9pr44 example-claim kopia ts-dpa-1 30s mysql-backup-smh85 Completed 23s threescale system-mysql-2-9pr44 mysql-storage kopia ts-dpa-1 30s",
"oc edit deployment backend-redis -n threescale",
"annotations: post.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 100\"] pre.hook.backup.velero.io/command: >- [\"/bin/bash\", \"-c\", \"redis-cli CONFIG SET auto-aof-rewrite-percentage 0\"]",
"apiVersion: velero.io/v1 kind: Backup metadata: name: redis-backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: true includedNamespaces: - threescale includedResources: - deployment - pods - replicationcontrollers - persistentvolumes - persistentvolumeclaims itemOperationTimeout: 1h0m0s labelSelector: matchLabels: app: 3scale-api-management threescale_component: backend threescale_component_element: redis snapshotMoveData: false snapshotVolumes: false ttl: 720h0m0s",
"oc get backups.velero.io redis-backup -o yaml",
"oc get backups.velero.io",
"oc delete project threescale",
"\"threescale\" project deleted successfully",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-installation-restore namespace: openshift-adp spec: backupName: operator-install-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore.yaml",
"oc apply -f - <<EOF --- apiVersion: v1 kind: Secret metadata: name: s3-credentials namespace: threescale stringData: AWS_ACCESS_KEY_ID: <ID_123456> 1 AWS_SECRET_ACCESS_KEY: <ID_98765544> 2 AWS_BUCKET: <mybucket.example.com> 3 AWS_REGION: <us-east-1> 4 type: Opaque EOF",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-secrets namespace: openshift-adp spec: backupName: operator-resources-secrets excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-secrets.yaml",
"apiVersion: velero.io/v1 kind: Restore metadata: name: operator-resources-apim namespace: openshift-adp spec: backupName: operator-resources-apim excludedResources: 1 - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 4h0m0s",
"oc create -f restore-apimanager.yaml",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=0 -n threescale",
"deployment.apps/threescale-operator-controller-manager-v2 scaled",
"vi ./scaledowndeployment.sh",
"for deployment in apicast-production apicast-staging backend-cron backend-listener backend-redis backend-worker system-app system-memcache system-mysql system-redis system-searchd system-sidekiq zync zync-database zync-que; do oc scale deployment/USDdeployment --replicas=0 -n threescale done",
"./scaledowndeployment.sh",
"deployment.apps.openshift.io/apicast-production scaled deployment.apps.openshift.io/apicast-staging scaled deployment.apps.openshift.io/backend-cron scaled deployment.apps.openshift.io/backend-listener scaled deployment.apps.openshift.io/backend-redis scaled deployment.apps.openshift.io/backend-worker scaled deployment.apps.openshift.io/system-app scaled deployment.apps.openshift.io/system-memcache scaled deployment.apps.openshift.io/system-mysql scaled deployment.apps.openshift.io/system-redis scaled deployment.apps.openshift.io/system-searchd scaled deployment.apps.openshift.io/system-sidekiq scaled deployment.apps.openshift.io/zync scaled deployment.apps.openshift.io/zync-database scaled deployment.apps.openshift.io/zync-que scaled",
"oc delete deployment system-mysql -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"system-mysql\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-mysql namespace: openshift-adp spec: backupName: mysql-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io - resticrepositories.velero.io hooks: resources: - name: restoreDB postHooks: - exec: command: - /bin/sh - '-c' - > sleep 30 mysql -h 127.0.0.1 -D system -u root --password=USDMYSQL_ROOT_PASSWORD < /var/lib/mysqldump/data/dump.sql 1 container: system-mysql execTimeout: 80s onError: Fail waitTimeout: 5m itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-mysql.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-mysql-rbzvm threescale system-mysql-2-kjkhl kopia mysql-storage Completed 771879108 771879108 40m restore-mysql-z7x7l threescale system-mysql-2-kjkhl kopia example-claim Completed 380415 380415 40m",
"oc get pvc -n threescale",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE backend-redis-storage Bound pvc-3dca410d-3b9f-49d4-aebf-75f47152e09d 1Gi RWO gp3-csi <unset> 68m example-claim Bound pvc-cbaa49b0-06cd-4b1a-9e90-0ef755c67a54 1Gi RWO gp3-csi <unset> 57m mysql-storage Bound pvc-4549649f-b9ad-44f7-8f67-dd6b9dbb3896 1Gi RWO gp3-csi <unset> 68m system-redis-storage Bound pvc-04dadafd-8a3e-4d00-8381-6041800a24fc 1Gi RWO gp3-csi <unset> 68m system-searchd Bound pvc-afbf606c-d4a8-4041-8ec6-54c5baf1a3b9 1Gi RWO gp3-csi <unset> 68m",
"oc delete deployment backend-redis -n threescale",
"Warning: apps.openshift.io/v1 deployment is deprecated in v4.14+, unavailable in v4.10000+ deployment.apps.openshift.io \"backend-redis\" deleted",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore-backend namespace: openshift-adp spec: backupName: redis-backup excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io - csinodes.storage.k8s.io - volumeattachments.storage.k8s.io - backuprepositories.velero.io itemOperationTimeout: 1h0m0s restorePVs: true",
"oc create -f restore-backend.yaml",
"oc get podvolumerestores.velero.io -n openshift-adp",
"NAME NAMESPACE POD UPLOADER TYPE VOLUME STATUS TOTALBYTES BYTESDONE AGE restore-backend-jmrwx threescale backend-redis-1-bsfmv kopia backend-redis-storage Completed 76123 76123 21m",
"oc scale deployment threescale-operator-controller-manager-v2 --replicas=1 -n threescale",
"oc get deployment -n threescale",
"./scaledeployment.sh",
"oc get routes -n threescale",
"NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD backend backend-3scale.apps.custom-cluster-name.openshift.com backend-listener http edge/Allow None zync-3scale-api-b4l4d api-3scale-apicast-production.apps.custom-cluster-name.openshift.com apicast-production gateway edge/Redirect None zync-3scale-api-b6sns api-3scale-apicast-staging.apps.custom-cluster-name.openshift.com apicast-staging gateway edge/Redirect None zync-3scale-master-7sc4j master.apps.custom-cluster-name.openshift.com system-master http edge/Redirect None zync-3scale-provider-7r2nm 3scale-admin.apps.custom-cluster-name.openshift.com system-provider http edge/Redirect None zync-3scale-provider-mjxlb 3scale.apps.custom-cluster-name.openshift.com system-developer http edge/Redirect None",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: dpa-sample spec: configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true defaultVolumesToFSBackup: 4 featureFlags: - EnableCSI",
"kind: Backup apiVersion: velero.io/v1 metadata: name: backup namespace: openshift-adp spec: csiSnapshotTimeout: 10m0s defaultVolumesToFsBackup: 1 includedNamespaces: - mysql-persistent itemOperationTimeout: 4h0m0s snapshotMoveData: true 2 storageLocation: default ttl: 720h0m0s 3 volumeSnapshotLocations: - dpa-sample-1",
"Error: relabel failed /var/lib/kubelet/pods/3ac..34/volumes/ kubernetes.io~csi/pvc-684..12c/mount: lsetxattr /var/lib/kubelet/ pods/3ac..34/volumes/kubernetes.io~csi/pvc-68..2c/mount/data-xfs-103: no space left on device",
"oc create -f backup.yaml",
"oc get datauploads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp backup-test-1-sw76b Completed 9m47s 108104082 108104082 dpa-sample-1 9m47s ip-10-0-150-57.us-west-2.compute.internal openshift-adp mongo-block-7dtpf Completed 14m 1073741824 1073741824 dpa-sample-1 14m ip-10-0-150-57.us-west-2.compute.internal",
"oc get datauploads <dataupload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataUpload metadata: name: backup-test-1-sw76b namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 csiSnapshot: snapshotClass: \"\" storageClass: gp3-csi volumeSnapshot: velero-mysql-fq8sl operationTimeout: 10m0s snapshotType: CSI sourceNamespace: mysql-persistent sourcePVC: mysql status: completionTimestamp: \"2023-11-02T16:57:02Z\" node: ip-10-0-150-57.us-west-2.compute.internal path: /host_pods/15116bac-cc01-4d9b-8ee7-609c3bef6bde/volumes/kubernetes.io~csi/pvc-eead8167-556b-461a-b3ec-441749e291c4/mount phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 snapshotID: 8da1c5febf25225f4577ada2aeb9f899 startTimestamp: \"2023-11-02T16:56:22Z\"",
"apiVersion: velero.io/v1 kind: Restore metadata: name: restore namespace: openshift-adp spec: backupName: <backup>",
"oc create -f restore.yaml",
"oc get datadownloads -A",
"NAMESPACE NAME STATUS STARTED BYTES DONE TOTAL BYTES STORAGE LOCATION AGE NODE openshift-adp restore-test-1-sk7lg Completed 7m11s 108104082 108104082 dpa-sample-1 7m11s ip-10-0-150-57.us-west-2.compute.internal",
"oc get datadownloads <datadownload_name> -o yaml",
"apiVersion: velero.io/v2alpha1 kind: DataDownload metadata: name: restore-test-1-sk7lg namespace: openshift-adp spec: backupStorageLocation: dpa-sample-1 operationTimeout: 10m0s snapshotID: 8da1c5febf25225f4577ada2aeb9f899 sourceNamespace: mysql-persistent targetVolume: namespace: mysql-persistent pv: \"\" pvc: mysql status: completionTimestamp: \"2023-11-02T17:01:24Z\" node: ip-10-0-150-57.us-west-2.compute.internal phase: Completed 1 progress: bytesDone: 108104082 totalBytes: 108104082 startTimestamp: \"2023-11-02T17:00:52Z\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication # configuration: nodeAgent: enable: true 1 uploaderType: kopia 2 velero: defaultPlugins: - openshift - aws - csi 3 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: <hashing_algorithm_name> 4 - name: KOPIA_ENCRYPTION_ALGORITHM value: <encryption_algorithm_name> 5 - name: KOPIA_SPLITTER_ALGORITHM value: <splitter_algorithm_name> 6",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> 1 namespace: openshift-adp spec: backupLocations: - name: aws velero: config: profile: default region: <region_name> 2 credential: key: cloud name: cloud-credentials 3 default: true objectStorage: bucket: <bucket_name> 4 prefix: velero provider: aws configuration: nodeAgent: enable: true uploaderType: kopia velero: defaultPlugins: - openshift - aws - csi 5 defaultSnapshotMoveData: true podConfig: env: - name: KOPIA_HASHING_ALGORITHM value: BLAKE3-256 6 - name: KOPIA_ENCRYPTION_ALGORITHM value: CHACHA20-POLY1305-HMAC-SHA256 7 - name: KOPIA_SPLITTER_ALGORITHM value: DYNAMIC-8M-RABINKARP 8",
"oc create -f <dpa_file_name> 1",
"oc get dpa -o yaml",
"apiVersion: velero.io/v1 kind: Backup metadata: name: test-backup namespace: openshift-adp spec: includedNamespaces: - <application_namespace> 1 defaultVolumesToFsBackup: true",
"oc apply -f <backup_file_name> 1",
"oc get backups.velero.io <backup_name> -o yaml 1",
"kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<aws_s3_access_key>\" \\ 4 --secret-access-key=\"<aws_s3_secret_access_key>\" \\ 5",
"kopia repository status",
"Config file: /../.config/kopia/repository.config Description: Repository in S3: s3.amazonaws.com <bucket_name> Storage type: s3 Storage capacity: unbounded Storage config: { \"bucket\": <bucket_name>, \"prefix\": \"velero/kopia/<application_namespace>/\", \"endpoint\": \"s3.amazonaws.com\", \"accessKeyID\": <access_key>, \"secretAccessKey\": \"****************************************\", \"sessionToken\": \"\" } Unique ID: 58....aeb0 Hash: BLAKE3-256 Encryption: CHACHA20-POLY1305-HMAC-SHA256 Splitter: DYNAMIC-8M-RABINKARP Format version: 3",
"apiVersion: v1 kind: Pod metadata: name: oadp-mustgather-pod labels: purpose: user-interaction spec: containers: - name: oadp-mustgather-container image: registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 command: [\"sleep\"] args: [\"infinity\"]",
"oc apply -f <pod_config_file_name> 1",
"oc describe pod/oadp-mustgather-pod | grep scc",
"openshift.io/scc: anyuid",
"oc -n openshift-adp rsh pod/oadp-mustgather-pod",
"sh-5.1# kopia repository connect s3 --bucket=<bucket_name> \\ 1 --prefix=velero/kopia/<application_namespace> \\ 2 --password=static-passw0rd \\ 3 --access-key=\"<access_key>\" \\ 4 --secret-access-key=\"<secret_access_key>\" \\ 5 --endpoint=<bucket_endpoint> \\ 6",
"sh-5.1# kopia benchmark hashing",
"Benchmarking hash 'BLAKE2B-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2B-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE2S-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'BLAKE3-256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA256-128' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-224' (100 x 1048576 bytes, parallelism 1) Benchmarking hash 'HMAC-SHA3-256' (100 x 1048576 bytes, parallelism 1) Hash Throughput ----------------------------------------------------------------- 0. BLAKE3-256 15.3 GB / second 1. BLAKE3-256-128 15.2 GB / second 2. HMAC-SHA256-128 6.4 GB / second 3. HMAC-SHA256 6.4 GB / second 4. HMAC-SHA224 6.4 GB / second 5. BLAKE2B-256-128 4.2 GB / second 6. BLAKE2B-256 4.1 GB / second 7. BLAKE2S-256 2.9 GB / second 8. BLAKE2S-128 2.9 GB / second 9. HMAC-SHA3-224 1.6 GB / second 10. HMAC-SHA3-256 1.5 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --block-hash=BLAKE3-256",
"sh-5.1# kopia benchmark encryption",
"Benchmarking encryption 'AES256-GCM-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Benchmarking encryption 'CHACHA20-POLY1305-HMAC-SHA256'... (1000 x 1048576 bytes, parallelism 1) Encryption Throughput ----------------------------------------------------------------- 0. AES256-GCM-HMAC-SHA256 2.2 GB / second 1. CHACHA20-POLY1305-HMAC-SHA256 1.8 GB / second ----------------------------------------------------------------- Fastest option for this machine is: --encryption=AES256-GCM-HMAC-SHA256",
"sh-5.1# kopia benchmark splitter",
"splitting 16 blocks of 32MiB each, parallelism 1 DYNAMIC 747.6 MB/s count:107 min:9467 10th:2277562 25th:2971794 50th:4747177 75th:7603998 90th:8388608 max:8388608 DYNAMIC-128K-BUZHASH 718.5 MB/s count:3183 min:3076 10th:80896 25th:104312 50th:157621 75th:249115 90th:262144 max:262144 DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144 FIXED-512K 102.9 TB/s count:1024 min:524288 10th:524288 25th:524288 50th:524288 75th:524288 90th:524288 max:524288 FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 ----------------------------------------------------------------- 0. FIXED-8M 566.3 TB/s count:64 min:8388608 10th:8388608 25th:8388608 50th:8388608 75th:8388608 90th:8388608 max:8388608 1. FIXED-4M 425.8 TB/s count:128 min:4194304 10th:4194304 25th:4194304 50th:4194304 75th:4194304 90th:4194304 max:4194304 # 22. DYNAMIC-128K-RABINKARP 164.4 MB/s count:3160 min:9667 10th:80098 25th:106626 50th:162269 75th:250655 90th:262144 max:262144",
"alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'",
"oc describe <velero_cr> <cr_name>",
"oc logs pod/<velero>",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: velero-sample spec: configuration: velero: logLevel: warning",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero --help",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>",
"oc -n openshift-adp exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: velero: podConfig: resourceAllocations: 1 requests: cpu: 200m memory: 256Mi",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication configuration: restic: podConfig: resourceAllocations: 1 requests: cpu: 1000m memory: 16Gi",
"requests: cpu: 500m memory: 128Mi",
"Velero: pod volume restore failed: data path restore failed: Failed to run kopia restore: Failed to copy snapshot data to the target: restore error: copy file: error creating file: open /host_pods/b4d...6/volumes/kubernetes.io~nfs/pvc-53...4e5/userdata/base/13493/2681: no such file or directory",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: nfs-client provisioner: k8s-sigs.io/nfs-subdir-external-provisioner parameters: pathPattern: \"USD{.PVC.namespace}/USD{.PVC.annotations.nfs.io/storage-path}\" 1 onDelete: delete",
"velero restore <restore_name> --from-backup=<backup_name> --include-resources service.serving.knavtive.dev",
"oc get mutatingwebhookconfigurations",
"024-02-27T10:46:50.028951744Z time=\"2024-02-27T10:46:50Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/<backup name> error=\"error executing custom action (groupResource=imagestreams.image.openshift.io, namespace=<BSL Name>, name=postgres): rpc error: code = Aborted desc = plugin panicked: runtime error: index out of range with length 1, stack trace: goroutine 94...",
"oc label backupstoragelocations.velero.io <bsl_name> app.kubernetes.io/component=bsl",
"oc -n openshift-adp get secret/oadp-<bsl_name>-<bsl_provider>-registry-secret -o json | jq -r '.data'",
"[default] 1 aws_access_key_id=AKIAIOSFODNN7EXAMPLE 2 aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
"oc get backupstoragelocations.velero.io -A",
"velero backup-location get -n <OADP_Operator_namespace>",
"oc get backupstoragelocations.velero.io -n <namespace> -o yaml",
"apiVersion: v1 items: - apiVersion: velero.io/v1 kind: BackupStorageLocation metadata: creationTimestamp: \"2023-11-03T19:49:04Z\" generation: 9703 name: example-dpa-1 namespace: openshift-adp-operator ownerReferences: - apiVersion: oadp.openshift.io/v1alpha1 blockOwnerDeletion: true controller: true kind: DataProtectionApplication name: example-dpa uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82 resourceVersion: \"24273698\" uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83 spec: config: enableSharedConfig: \"true\" region: us-west-2 credential: key: credentials name: cloud-credentials default: true objectStorage: bucket: example-oadp-operator prefix: example provider: aws status: lastValidationTime: \"2023-11-10T22:06:46Z\" message: \"BackupStorageLocation \\\"example-dpa-1\\\" is unavailable: rpc error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\\n\\tstatus code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54\" phase: Unavailable kind: List metadata: resourceVersion: \"\"",
"level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\"",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: nodeAgent: enable: true uploaderType: restic timeout: 1h",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: resourceTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: features: dataMover: timeout: 10m",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: csiSnapshotTimeout: 10m",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication metadata: name: <dpa_name> spec: configuration: velero: defaultItemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Restore metadata: name: <restore_name> spec: itemOperationTimeout: 1h",
"apiVersion: velero.io/v1 kind: Backup metadata: name: <backup_name> spec: itemOperationTimeout: 1h",
"oc -n {namespace} exec deployment/velero -c velero -- ./velero backup describe <backup>",
"oc delete backups.velero.io <backup> -n openshift-adp",
"velero backup describe <backup-name> --details",
"time=\"2023-02-17T16:33:13Z\" level=error msg=\"Error backing up item\" backup=openshift-adp/user1-backup-check5 error=\"error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label\" logSource=\"/remote-source/velero/app/pkg/backup/backup.go:417\" name=busybox-79799557b5-vprq",
"oc delete backups.velero.io <backup> -n openshift-adp",
"oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true",
"apiVersion: oadp.openshift.io/v1alpha1 kind: DataProtectionApplication spec: configuration: nodeAgent: enable: true uploaderType: restic supplementalGroups: - <group_id> 1",
"oc delete resticrepository openshift-adp <name_of_the_restic_repository>",
"time=\"2021-12-29T18:29:14Z\" level=info msg=\"1 errors encountered backup up item\" backup=velero/backup65 logSource=\"pkg/backup/backup.go:431\" name=mysql-7d99fc949-qbkds time=\"2021-12-29T18:29:14Z\" level=error msg=\"Error backing up item\" backup=velero/backup65 error=\"pod volume backup failed: error running restic backup, stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\\nIs there a repository at the following location?\\ns3:http://minio-minio.apps.mayap-oadp- veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/ restic/ mysql-persistent \\n: exit status 1\" error.file=\"/remote-source/ src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184\" error.function=\"github.com/vmware-tanzu/velero/ pkg/restic.(*backupper).BackupPodVolumes\" logSource=\"pkg/backup/backup.go:435\" name=mysql-7d99fc949-qbkds",
"\\\"level=error\\\" in line#2273: time=\\\"2023-06-12T06:50:04Z\\\" level=error msg=\\\"error restoring mysql-869f9f44f6-tp5lv: pods\\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity\\\\ \"restricted:v1.24\\\\\\\": privil eged (container \\\\\\\"mysql\\\\ \" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/restore/restore.go:1388\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n velero container contains \\\"level=error\\\" in line#2447: time=\\\"2023-06-12T06:50:05Z\\\" level=error msg=\\\"Namespace todolist-mariadb, resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\\ \"mysql-869f9f44f6-tp5lv\\\\\\\" is forbidden: violates PodSecurity \\\\\\\"restricted:v1.24\\\\\\\": privileged (container \\\\ \"mysql\\\\\\\" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers \\\\ \"restic-wait\\\\\\\",\\\\\\\"mysql\\\\\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.capabilities.drop=[\\\\\\\"ALL\\\\\\\"]), seccompProfile (pod or containers \\\\ \"restic-wait\\\\\\\", \\\\\\\"mysql\\\\\\\" must set securityContext.seccompProfile.type to \\\\ \"RuntimeDefault\\\\\\\" or \\\\\\\"Localhost\\\\\\\")\\\" logSource=\\\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\\\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\\n]\",",
"oc get dpa -o yaml",
"configuration: restic: enable: true velero: args: restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' 1 defaultPlugins: - gcp - openshift",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_<time>_essential 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_with_timeout <timeout> 1",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_metrics_dump",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls <true/false>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.3 -- /usr/bin/gather_without_tls true",
"oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel9:v1.4 -- /usr/bin/gather_without_tls true",
"oc edit configmap cluster-monitoring-config -n openshift-monitoring",
"apiVersion: v1 data: config.yaml: | enableUserWorkload: true 1 kind: ConfigMap metadata:",
"oc get pods -n openshift-user-workload-monitoring",
"NAME READY STATUS RESTARTS AGE prometheus-operator-6844b4b99c-b57j9 2/2 Running 0 43s prometheus-user-workload-0 5/5 Running 0 32s prometheus-user-workload-1 5/5 Running 0 32s thanos-ruler-user-workload-0 3/3 Running 0 32s thanos-ruler-user-workload-1 3/3 Running 0 32s",
"oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring",
"Error from server (NotFound): configmaps \"user-workload-monitoring-config\" not found",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: |",
"oc apply -f 2_configure_user_workload_monitoring.yaml configmap/user-workload-monitoring-config created",
"oc get svc -n openshift-adp -l app.kubernetes.io/name=velero",
"NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE openshift-adp-velero-metrics-svc ClusterIP 172.30.38.244 <none> 8085/TCP 1h",
"apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: labels: app: oadp-service-monitor name: oadp-service-monitor namespace: openshift-adp spec: endpoints: - interval: 30s path: /metrics targetPort: 8085 scheme: http selector: matchLabels: app.kubernetes.io/name: \"velero\"",
"oc apply -f 3_create_oadp_service_monitor.yaml",
"servicemonitor.monitoring.coreos.com/oadp-service-monitor created",
"apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: sample-oadp-alert namespace: openshift-adp spec: groups: - name: sample-oadp-backup-alert rules: - alert: OADPBackupFailing annotations: description: 'OADP had {{USDvalue | humanize}} backup failures over the last 2 hours.' summary: OADP has issues creating backups expr: | increase(velero_backup_failure_total{job=\"openshift-adp-velero-metrics-svc\"}[2h]) > 0 for: 5m labels: severity: warning",
"oc apply -f 4_create_oadp_alert_rule.yaml",
"prometheusrule.monitoring.coreos.com/sample-oadp-alert created",
"oc label node/<node_name> node-role.kubernetes.io/nodeAgent=\"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/nodeAgent: \"\"",
"configuration: nodeAgent: enable: true podConfig: nodeSelector: node-role.kubernetes.io/infra: \"\" node-role.kubernetes.io/worker: \"\"",
"oc api-resources",
"apiVersion: oadp.openshift.io/vialpha1 kind: DataProtectionApplication spec: configuration: velero: featureFlags: - EnableAPIGroupVersions",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"oc -n <your_pod_namespace> annotate pod/<your_pod_name> backup.velero.io/backup-volumes-excludes=<your_volume_name_1>, \\ <your_volume_name_2>>,...,<your_volume_name_n>",
"velero backup create <backup_name> --default-volumes-to-fs-backup <any_other_options>",
"cat change-storageclass.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: change-storage-class-config namespace: openshift-adp labels: velero.io/plugin-config: \"\" velero.io/change-storage-class: RestoreItemAction data: standard-csi: ssd-csi",
"oc create -f change-storage-class-config",
"oc debug --as-root node/<node_name>",
"sh-4.4# chroot /host",
"export HTTP_PROXY=http://<your_proxy.example.com>:8080",
"export HTTPS_PROXY=https://<your_proxy.example.com>:8080",
"export NO_PROXY=<example.com>",
"sh-4.4# /usr/local/bin/cluster-backup.sh /home/core/assets/backup",
"found latest kube-apiserver: /etc/kubernetes/static-pod-resources/kube-apiserver-pod-6 found latest kube-controller-manager: /etc/kubernetes/static-pod-resources/kube-controller-manager-pod-7 found latest kube-scheduler: /etc/kubernetes/static-pod-resources/kube-scheduler-pod-6 found latest etcd: /etc/kubernetes/static-pod-resources/etcd-pod-3 ede95fe6b88b87ba86a03c15e669fb4aa5bf0991c180d3c6895ce72eaade54a1 etcdctl version: 3.4.14 API version: 3.4 {\"level\":\"info\",\"ts\":1624647639.0188997,\"caller\":\"snapshot/v3_snapshot.go:119\",\"msg\":\"created temporary db file\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db.part\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:39.030Z\",\"caller\":\"clientv3/maintenance.go:200\",\"msg\":\"opened snapshot stream; downloading\"} {\"level\":\"info\",\"ts\":1624647639.0301006,\"caller\":\"snapshot/v3_snapshot.go:127\",\"msg\":\"fetching snapshot\",\"endpoint\":\"https://10.0.0.5:2379\"} {\"level\":\"info\",\"ts\":\"2021-06-25T19:00:40.215Z\",\"caller\":\"clientv3/maintenance.go:208\",\"msg\":\"completed snapshot read; closing\"} {\"level\":\"info\",\"ts\":1624647640.6032252,\"caller\":\"snapshot/v3_snapshot.go:142\",\"msg\":\"fetched snapshot\",\"endpoint\":\"https://10.0.0.5:2379\",\"size\":\"114 MB\",\"took\":1.584090459} {\"level\":\"info\",\"ts\":1624647640.6047094,\"caller\":\"snapshot/v3_snapshot.go:152\",\"msg\":\"saved\",\"path\":\"/home/core/assets/backup/snapshot_2021-06-25_190035.db\"} Snapshot saved at /home/core/assets/backup/snapshot_2021-06-25_190035.db {\"hash\":3866667823,\"revision\":31407,\"totalKey\":12828,\"totalSize\":114446336} snapshot db and kube resources are successfully saved to /home/core/assets/backup",
"apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade",
"oc apply -f enable-tech-preview-no-upgrade.yaml",
"oc get crd | grep backup",
"backups.config.openshift.io 2023-10-25T13:32:43Z etcdbackups.operator.openshift.io 2023-10-25T13:32:04Z",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteOnce persistentVolumeReclaimPolicy: Retain storageClassName: etcd-backup-local-storage local: path: /mnt nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWO Retain Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 10Gi 1",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: operator.openshift.io/v1alpha1 kind: EtcdBackup metadata: name: etcd-single-backup namespace: openshift-etcd spec: pvcName: etcd-backup-pvc 1",
"oc apply -f etcd-single-backup.yaml",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc namespace: openshift-etcd spec: accessModes: - ReadWriteOnce resources: requests: storage: 200Gi 1 volumeMode: Filesystem storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"oc get pvc",
"NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE etcd-backup-pvc Bound 51s",
"apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: etcd-backup-local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: Immediate",
"oc apply -f etcd-backup-local-storage.yaml",
"apiVersion: v1 kind: PersistentVolume metadata: name: etcd-backup-pv-fs spec: capacity: storage: 100Gi 1 volumeMode: Filesystem accessModes: - ReadWriteMany persistentVolumeReclaimPolicy: Delete storageClassName: etcd-backup-local-storage local: path: /mnt/ nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - <example_master_node> 2",
"oc get nodes",
"oc get pv",
"NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE etcd-backup-pv-fs 100Gi RWX Delete Available etcd-backup-local-storage 10s",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: etcd-backup-pvc spec: accessModes: - ReadWriteMany volumeMode: Filesystem resources: requests: storage: 10Gi 1 storageClassName: etcd-backup-local-storage",
"oc apply -f etcd-backup-pvc.yaml",
"apiVersion: config.openshift.io/v1alpha1 kind: Backup metadata: name: etcd-recurring-backup spec: etcd: schedule: \"20 4 * * *\" 1 timeZone: \"UTC\" pvcName: etcd-backup-pvc",
"spec: etcd: retentionPolicy: retentionType: RetentionNumber 1 retentionNumber: maxNumberOfBackups: 5 2",
"spec: etcd: retentionPolicy: retentionType: RetentionSize retentionSize: maxSizeOfBackupsGb: 20 1",
"oc create -f etcd-recurring-backup.yaml",
"oc get cronjob -n openshift-etcd",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"EtcdMembersAvailable\")]}{.message}{\"\\n\"}'",
"2 of 3 members are available, ip-10-0-131-183.ec2.internal is unhealthy",
"oc get machines -A -ojsonpath='{range .items[*]}{@.status.nodeRef.name}{\"\\t\"}{@.status.providerStatus.instanceState}{\"\\n\"}' | grep -v running",
"ip-10-0-131-183.ec2.internal stopped 1",
"oc get nodes -o jsonpath='{range .items[*]}{\"\\n\"}{.metadata.name}{\"\\t\"}{range .spec.taints[*]}{.key}{\" \"}' | grep unreachable",
"ip-10-0-131-183.ec2.internal node-role.kubernetes.io/master node.kubernetes.io/unreachable node.kubernetes.io/unreachable 1",
"oc get nodes -l node-role.kubernetes.io/master | grep \"NotReady\"",
"ip-10-0-131-183.ec2.internal NotReady master 122m v1.28.5 1",
"oc get nodes -l node-role.kubernetes.io/master",
"NAME STATUS ROLES AGE VERSION ip-10-0-131-183.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-164-97.ec2.internal Ready master 6h13m v1.28.5 ip-10-0-154-204.ec2.internal Ready master 6h13m v1.28.5",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m 1 etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 6fc1e7c9db35841d | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 6fc1e7c9db35841d",
"Member 6fc1e7c9db35841d removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc delete node <node_name>",
"oc delete node ip-10-0-131-183.ec2.internal",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-154-204.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-164-97.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-133-53.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-133-53.ec2.internal 3/3 Running 0 7m49s etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 123m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 124m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 5eb0d6b8ca24730c | started | ip-10-0-133-53.ec2.internal | https://10.0.133.53:2380 | https://10.0.133.53:2379 | | 757b6793e2408b6c | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | ca8c2990a0aa29d1 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc debug node/ip-10-0-131-183.ec2.internal 1",
"sh-4.2# chroot /host",
"sh-4.2# mkdir /var/lib/etcd-backup",
"sh-4.2# mv /etc/kubernetes/manifests/etcd-pod.yaml /var/lib/etcd-backup/",
"sh-4.2# mv /var/lib/etcd/ /tmp",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-131-183.ec2.internal 2/3 Error 7 6h9m etcd-ip-10-0-164-97.ec2.internal 3/3 Running 0 6h6m etcd-ip-10-0-154-204.ec2.internal 3/3 Running 0 6h6m",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | 62bcf33650a7170a | started | ip-10-0-131-183.ec2.internal | https://10.0.131.183:2380 | https://10.0.131.183:2379 | | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"sh-4.2# etcdctl member remove 62bcf33650a7170a",
"Member 62bcf33650a7170a removed from cluster ead669ce1fbfb346",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+------------------------------+---------------------------+---------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | +------------------+---------+------------------------------+---------------------------+---------------------------+ | b78e2856655bc2eb | started | ip-10-0-164-97.ec2.internal | https://10.0.164.97:2380 | https://10.0.164.97:2379 | | d022e10b498760d5 | started | ip-10-0-154-204.ec2.internal | https://10.0.154.204:2380 | https://10.0.154.204:2379 | +------------------+---------+------------------------------+---------------------------+---------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep ip-10-0-131-183.ec2.internal 1",
"etcd-peer-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m etcd-serving-metrics-ip-10-0-131-183.ec2.internal kubernetes.io/tls 2 47m",
"oc delete secret -n openshift-etcd etcd-peer-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-ip-10-0-131-183.ec2.internal",
"oc delete secret -n openshift-etcd etcd-serving-metrics-ip-10-0-131-183.ec2.internal",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"single-master-recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc rsh -n openshift-etcd etcd-ip-10-0-154-204.ec2.internal",
"sh-4.2# etcdctl endpoint health",
"https://10.0.131.183:2379 is healthy: successfully committed proposal: took = 16.671434ms https://10.0.154.204:2379 is healthy: successfully committed proposal: took = 16.698331ms https://10.0.164.97:2379 is healthy: successfully committed proposal: took = 16.621645ms",
"oc -n openshift-etcd get pods -l k8s-app=etcd -o wide",
"etcd-openshift-control-plane-0 5/5 Running 11 3h56m 192.168.10.9 openshift-control-plane-0 <none> <none> etcd-openshift-control-plane-1 5/5 Running 0 3h54m 192.168.10.10 openshift-control-plane-1 <none> <none> etcd-openshift-control-plane-2 5/5 Running 0 3h58m 192.168.10.11 openshift-control-plane-2 <none> <none>",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380/ | https://192.168.10.9:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+---------------------+",
"sh-4.2# etcdctl member remove 7a8197040a5126c8",
"Member 7a8197040a5126c8 removed from cluster b23536c33f2cdd1b",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+ | cc3830a72fc357f9 | started | openshift-control-plane-2 | https://192.168.10.11:2380/ | https://192.168.10.11:2379/ | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380/ | https://192.168.10.10:2379/ | false | +------------------+---------+--------------------+---------------------------+---------------------------+-------------------------+",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"oc get secrets -n openshift-etcd | grep openshift-control-plane-2",
"etcd-peer-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-metrics-openshift-control-plane-2 kubernetes.io/tls 2 134m etcd-serving-openshift-control-plane-2 kubernetes.io/tls 2 134m",
"oc delete secret etcd-peer-openshift-control-plane-2 -n openshift-etcd secret \"etcd-peer-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-metrics-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-metrics-openshift-control-plane-2\" deleted",
"oc delete secret etcd-serving-openshift-control-plane-2 -n openshift-etcd secret \"etcd-serving-openshift-control-plane-2\" deleted",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get clusteroperator baremetal",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE baremetal 4.15.0 True False False 3d15h",
"oc delete bmh openshift-control-plane-2 -n openshift-machine-api",
"baremetalhost.metal3.io \"openshift-control-plane-2\" deleted",
"oc delete machine -n openshift-machine-api examplecluster-control-plane-2",
"oc edit machine -n openshift-machine-api examplecluster-control-plane-2",
"finalizers: - machine.machine.openshift.io",
"machine.machine.openshift.io/examplecluster-control-plane-2 edited",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 3h24m v1.28.5 openshift-control-plane-1 Ready master 3h24m v1.28.5 openshift-compute-0 Ready worker 176m v1.28.5 openshift-compute-1 Ready worker 176m v1.28.5",
"cat <<EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: openshift-control-plane-2-bmc-secret namespace: openshift-machine-api data: password: <password> username: <username> type: Opaque --- apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: name: openshift-control-plane-2 namespace: openshift-machine-api spec: automatedCleaningMode: disabled bmc: address: redfish://10.46.61.18:443/redfish/v1/Systems/1 credentialsName: openshift-control-plane-2-bmc-secret disableCertificateVerification: true bootMACAddress: 48:df:37:b0:8a:a0 bootMode: UEFI externallyProvisioned: false online: true rootDeviceHints: deviceName: /dev/disk/by-id/scsi-<serial_number> userData: name: master-user-data-managed namespace: openshift-machine-api EOF",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 available examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE examplecluster-control-plane-0 Running 3h11m openshift-control-plane-0 baremetalhost:///openshift-machine-api/openshift-control-plane-0/da1ebe11-3ff2-41c5-b099-0aa41222964e externally provisioned 1 examplecluster-control-plane-1 Running 3h11m openshift-control-plane-1 baremetalhost:///openshift-machine-api/openshift-control-plane-1/d9f9acbc-329c-475e-8d81-03b20280a3e1 externally provisioned examplecluster-control-plane-2 Running 3h11m openshift-control-plane-2 baremetalhost:///openshift-machine-api/openshift-control-plane-2/3354bdac-61d8-410f-be5b-6a395b056135 externally provisioned examplecluster-compute-0 Running 165m openshift-compute-0 baremetalhost:///openshift-machine-api/openshift-compute-0/3d685b81-7410-4bb3-80ec-13a31858241f provisioned examplecluster-compute-1 Running 165m openshift-compute-1 baremetalhost:///openshift-machine-api/openshift-compute-1/0fdae6eb-2066-4241-91dc-e7ea72ab13b9 provisioned",
"oc get bmh -n openshift-machine-api",
"oc get bmh -n openshift-machine-api NAME STATE CONSUMER ONLINE ERROR AGE openshift-control-plane-0 externally provisioned examplecluster-control-plane-0 true 4h48m openshift-control-plane-1 externally provisioned examplecluster-control-plane-1 true 4h48m openshift-control-plane-2 provisioned examplecluster-control-plane-3 true 47m openshift-compute-0 provisioned examplecluster-compute-0 true 4h48m openshift-compute-1 provisioned examplecluster-compute-1 true 4h48m",
"oc get nodes",
"oc get nodes NAME STATUS ROLES AGE VERSION openshift-control-plane-0 Ready master 4h26m v1.28.5 openshift-control-plane-1 Ready master 4h26m v1.28.5 openshift-control-plane-2 Ready master 12m v1.28.5 openshift-compute-0 Ready worker 3h58m v1.28.5 openshift-compute-1 Ready worker 3h58m v1.28.5",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"EtcdCertSignerControllerDegraded: [Operation cannot be fulfilled on secrets \"etcd-peer-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-sno-0\": the object has been modified; please apply your changes to the latest version and try again, Operation cannot be fulfilled on secrets \"etcd-serving-metrics-sno-0\": the object has been modified; please apply your changes to the latest version and try again]",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-openshift-control-plane-0 5/5 Running 0 105m etcd-openshift-control-plane-1 5/5 Running 0 107m etcd-openshift-control-plane-2 5/5 Running 0 103m",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc rsh -n openshift-etcd etcd-openshift-control-plane-0",
"sh-4.2# etcdctl member list -w table",
"+------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+ | 7a8197040a5126c8 | started | openshift-control-plane-2 | https://192.168.10.11:2380 | https://192.168.10.11:2379 | false | | 8d5abe9669a39192 | started | openshift-control-plane-1 | https://192.168.10.10:2380 | https://192.168.10.10:2379 | false | | cc3830a72fc357f9 | started | openshift-control-plane-0 | https://192.168.10.9:2380 | https://192.168.10.9:2379 | false | +------------------+---------+--------------------+---------------------------+---------------------------+-----------------+",
"etcdctl endpoint health --cluster",
"https://192.168.10.10:2379 is healthy: successfully committed proposal: took = 8.973065ms https://192.168.10.9:2379 is healthy: successfully committed proposal: took = 11.559829ms https://192.168.10.11:2379 is healthy: successfully committed proposal: took = 11.665203ms",
"oc get etcd -o=jsonpath='{range.items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision",
"sudo mv -v /etc/kubernetes/manifests/etcd-pod.yaml /tmp",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-apiserver-pod.yaml /tmp",
"sudo crictl ps | grep kube-apiserver | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-controller-manager-pod.yaml /tmp",
"sudo crictl ps | grep kube-controller-manager | egrep -v \"operator|guard\"",
"sudo mv -v /etc/kubernetes/manifests/kube-scheduler-pod.yaml /tmp",
"sudo crictl ps | grep kube-scheduler | egrep -v \"operator|guard\"",
"sudo mv -v /var/lib/etcd/ /tmp",
"sudo mv -v /etc/kubernetes/manifests/keepalived.yaml /tmp",
"sudo crictl ps --name keepalived",
"ip -o address | egrep '<api_vip>|<ingress_vip>'",
"sudo ip address del <reported_vip> dev <reported_vip_device>",
"ip -o address | grep <api_vip>",
"sudo -E /usr/local/bin/cluster-restore.sh /home/core/assets/backup",
"...stopping kube-scheduler-pod.yaml ...stopping kube-controller-manager-pod.yaml ...stopping etcd-pod.yaml ...stopping kube-apiserver-pod.yaml Waiting for container etcd to stop .complete Waiting for container etcdctl to stop .............................complete Waiting for container etcd-metrics to stop complete Waiting for container kube-controller-manager to stop complete Waiting for container kube-apiserver to stop ..........................................................................................complete Waiting for container kube-scheduler to stop complete Moving etcd data-dir /var/lib/etcd/member to /var/lib/etcd-backup starting restore-etcd static pod starting kube-apiserver-pod.yaml static-pod-resources/kube-apiserver-pod-7/kube-apiserver-pod.yaml starting kube-controller-manager-pod.yaml static-pod-resources/kube-controller-manager-pod-7/kube-controller-manager-pod.yaml starting kube-scheduler-pod.yaml static-pod-resources/kube-scheduler-pod-8/kube-scheduler-pod.yaml",
"oc get nodes -w",
"NAME STATUS ROLES AGE VERSION host-172-25-75-28 Ready master 3d20h v1.28.5 host-172-25-75-38 Ready infra,worker 3d20h v1.28.5 host-172-25-75-40 Ready master 3d20h v1.28.5 host-172-25-75-65 Ready master 3d20h v1.28.5 host-172-25-75-74 Ready infra,worker 3d20h v1.28.5 host-172-25-75-79 Ready worker 3d20h v1.28.5 host-172-25-75-86 Ready worker 3d20h v1.28.5 host-172-25-75-98 Ready infra,worker 3d20h v1.28.5",
"ssh -i <ssh-key-path> core@<master-hostname>",
"sh-4.4# pwd /var/lib/kubelet/pki sh-4.4# ls kubelet-client-2022-04-28-11-24-09.pem kubelet-server-2022-04-28-11-24-15.pem kubelet-client-current.pem kubelet-server-current.pem",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 2 csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 3 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 4",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>",
"sudo crictl ps | grep etcd | egrep -v \"operator|etcd-guard\"",
"3ad41b7908e32 36f86e2eeaaffe662df0d21041eb22b8198e0e58abeeae8c743c3e6e977e8009 About a minute ago Running etcd 0 7c05f8af362f0",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"NAME READY STATUS RESTARTS AGE etcd-ip-10-0-143-125.ec2.internal 1/1 Running 1 2m47s",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-control-plane",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-control-plane",
"sudo rm -f /var/lib/ovn-ic/etc/*.db",
"sudo systemctl restart ovs-vswitchd ovsdb-server",
"oc -n openshift-ovn-kubernetes delete pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get po -n openshift-ovn-kubernetes",
"oc delete node <node>",
"ssh -i <ssh-key-path> core@<node>",
"sudo mv /var/lib/kubelet/pki/* /tmp",
"sudo systemctl restart kubelet.service",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-<uuid> 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending",
"adm certificate approve csr-<uuid>",
"oc get nodes",
"oc -n openshift-ovn-kubernetes get pod -l app=ovnkube-node --field-selector=spec.nodeName==<node>",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-0 Running m4.xlarge us-east-1 us-east-1a 3h37m ip-10-0-131-183.ec2.internal aws:///us-east-1a/i-0ec2782f8287dfb7e stopped 1 clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc delete machine -n openshift-machine-api clustername-8qw5l-master-0 1",
"oc get machines -n openshift-machine-api -o wide",
"NAME PHASE TYPE REGION ZONE AGE NODE PROVIDERID STATE clustername-8qw5l-master-1 Running m4.xlarge us-east-1 us-east-1b 3h37m ip-10-0-143-125.ec2.internal aws:///us-east-1b/i-096c349b700a19631 running clustername-8qw5l-master-2 Running m4.xlarge us-east-1 us-east-1c 3h37m ip-10-0-154-194.ec2.internal aws:///us-east-1c/i-02626f1dba9ed5bba running clustername-8qw5l-master-3 Provisioning m4.xlarge us-east-1 us-east-1a 85s ip-10-0-173-171.ec2.internal aws:///us-east-1a/i-015b0888fe17bc2c8 running 1 clustername-8qw5l-worker-us-east-1a-wbtgd Running m4.large us-east-1 us-east-1a 3h28m ip-10-0-129-226.ec2.internal aws:///us-east-1a/i-010ef6279b4662ced running clustername-8qw5l-worker-us-east-1b-lrdxb Running m4.large us-east-1 us-east-1b 3h28m ip-10-0-144-248.ec2.internal aws:///us-east-1b/i-0cb45ac45a166173b running clustername-8qw5l-worker-us-east-1c-pkg26 Running m4.large us-east-1 us-east-1c 3h28m ip-10-0-170-181.ec2.internal aws:///us-east-1c/i-06861c00007751b0a running",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": {\"useUnsupportedUnsafeNonHANonProductionUnstableEtcd\": true}}}'",
"export KUBECONFIG=/etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost-recovery.kubeconfig",
"oc patch etcd cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge 1",
"oc patch etcd/cluster --type=merge -p '{\"spec\": {\"unsupportedConfigOverrides\": null}}'",
"oc get etcd/cluster -oyaml",
"oc get etcd -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubeapiserver cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubeapiserver -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubecontrollermanager cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubecontrollermanager -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc patch kubescheduler cluster -p='{\"spec\": {\"forceRedeploymentReason\": \"recovery-'\"USD( date --rfc-3339=ns )\"'\"}}' --type=merge",
"oc get kubescheduler -o=jsonpath='{range .items[0].status.conditions[?(@.type==\"NodeInstallerProgressing\")]}{.reason}{\"\\n\"}{.message}{\"\\n\"}'",
"AllNodesAtLatestRevision 3 nodes are at revision 7 1",
"oc -n openshift-etcd get pods -l k8s-app=etcd",
"etcd-ip-10-0-143-125.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-154-194.ec2.internal 2/2 Running 0 9h etcd-ip-10-0-173-171.ec2.internal 2/2 Running 0 9h",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig",
"oc whoami",
"oc get csr",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2s94x 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending 1 csr-4bd6t 8m3s kubernetes.io/kubelet-serving system:node:<node_name> Pending csr-4hl85 13m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending 2 csr-zhhhp 3m8s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr <csr_name> 1",
"oc adm certificate approve <csr_name>",
"oc adm certificate approve <csr_name>"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html-single/backup_and_restore/index |
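A note on the certificate signing request (CSR) approvals shown in the commands above: when several kubelet CSRs are pending at once, they can be approved in a single pass. The following one-liner is a convenience sketch rather than part of the documented procedure; it assumes an oc client that is already logged in with cluster-admin privileges.
# Approve every CSR that does not yet have a status
oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve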
Chapter 2. Object storage | Chapter 2. Object storage The Ceph Object Gateway client is a leading storage backend for cloud platforms that provides a RESTful S3-compliant and Swift-compliant object storage for objects like audio, bitmap, video, and other data. Ceph Object Gateway, also known as RADOS Gateway (RGW), is an object storage interface built on top of the librados library to provide applications with a RESTful gateway to Ceph storage clusters. Common use cases The following are some of the most common use cases for the Ceph Object Gateway. Storage as a Service (SaaS) Provides scalability and performance for both small and large object stores. AI, Analytics and Big Data including Data Lake and Data Lake House Cloud native data lake, with massive scalability and high availability to support demanding workloads. Backup and archive of large amounts of unstructured data Backing up to and recovering from an object store helps improve recovery time objectives (RTO) and recovery point objectives (RPO). Data intensive workloads like Cloud Native (S3) object data A unique new way to architect the dataflow in applications through event driven architectures. Internet of Things (IoT) Object gateways serve as intermediaries in IoT systems, aggregating data from various devices, translating communication protocols, and enabling edge processing. They enhance security, facilitate device management, and ensure interoperability, streamlining the overall IoT ecosystem. 2.1. Object storage common workloads Understand the most common workloads for object storage. Data efficiency Use for erasure coding, thin provisioning, lifecycle management, and compression. Data security Use for object lock, key management, at rest and inflight encryption, and WORM. Data resilience Use for backup, snapshots, cloning, and business continuity. 2.2. Object storage interfaces Learn about the three object storage interfaces. Administrative API Provides an administrative interface for managing the Ceph Object Gateways. For more information, see Ceph Object Gateway administrative API . S3 Provides object storage functionality with an interface that is compatible with a large subset of the Amazon S3 RESTful API. For more information, see Ceph Object Gateway and the S3 API . Swift Provides object storage functionality with an interface that is compatible with a large subset of the OpenStack Swift API. The Ceph Object Gateway is a service interacting with a Ceph storage cluster. For more information, see Ceph Object Gateway and the Swift API . 2.3. Getting started with object storage This section lists the relevant tasks required for working with object storage. Before you begin See Compatibility Matrix for Red Hat Ceph Storage 7.0 for a list of backup vendors that are certified with Red Hat Ceph Storage as an S3 target. Prerequisites There are specific network and hardware requirements to work with Ceph object storage. For more information, see Red Hat Ceph Storage considerations and recommendations . Setting up S3 server-side security For detailed information, see Server-Side Encryption (SSE) . Creating S3 users and testing S3 access For detailed information on creating S3 users, see Create an S3 user . For detailed information on testing S3 access, see Test S3 access . Managing Object Gateway through the dashboard For detailed information, see Management of Ceph Object Gateway using the dashboard . Multi-site replication to enable Disaster Recovery of backup For detailed information, see Failover and disaster recovery .
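As a quick illustration of the "Creating S3 users and testing S3 access" task listed above, the general shape of the workflow is sketched here. The user ID, display name, bucket name, and endpoint URL are placeholder assumptions; follow the linked Create an S3 user and Test S3 access procedures for the authoritative steps.
# Create an S3 user on the Ceph Object Gateway (run where the Ceph admin keyring is available)
radosgw-admin user create --uid=example-user --display-name="Example User"
# Copy the access_key and secret_key from the JSON output, configure them for the AWS CLI, then test access
aws --endpoint-url http://rgw-host:8080 s3 mb s3://example-bucket
aws --endpoint-url http://rgw-host:8080 s3 ls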
Deploying Ceph Object Gateway with Ceph Orchestrator Ceph Object Gateway is deployed by either using the Ceph Orchestrator with the command line interface or by using the service specification. You can also configure multi-site Ceph Object Gateways, and remove the Ceph Object Gateway using the Ceph Orchestrator. The cephadm command deploys the Ceph Object Gateway as a collection of daemons that manage a single-cluster deployment or a particular realm and zone in a multi-site deployment. For full Ceph Object Gateway deployment information and instructions, see Deployment . Alternatively, you can deploy Ceph Object Gateway using the command-line interface. For more information, see Deploying the Ceph Object Gateway using the command line interface . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/getting_started_guide/object-storage_getting-started |
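For orientation, a minimal Ceph Orchestrator sketch of the deployment described in the preceding section; the service ID, placement hosts, and port are assumptions to adapt to your cluster, and the full option set is covered in the linked Deployment guide.
# Deploy two Ceph Object Gateway daemons with the orchestrator, then confirm the service and daemons
ceph orch apply rgw example-rgw --placement="2 host01 host02" --port=8080
ceph orch ls rgw
ceph orch ps | grep rgw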
Chapter 2. Deploy OpenShift Data Foundation using local storage devices | Chapter 2. Deploy OpenShift Data Foundation using local storage devices You can deploy OpenShift Data Foundation on any platform including virtualized and cloud environments where OpenShift Container Platform is already installed. Also, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation. For more information, see Deploy standalone Multicloud Object Gateway . Perform the following steps to deploy OpenShift Data Foundation: Install the Local Storage Operator . Install the Red Hat OpenShift Data Foundation Operator . Create an OpenShift Data Foundation cluster on any platform . 2.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.17 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. 
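As an aside on the dedicated infra-node note above, the label and taint are usually applied from the command line; a hedged sketch follows (the node name is a placeholder, and the exact keys should be confirmed against the linked Managing and Allocating Storage Resources guide).
# Label a node for OpenShift Data Foundation and taint it so that only ODF workloads are scheduled there
oc label node <node_name> cluster.ocs.openshift.io/openshift-storage=""
oc adm taint node <node_name> node.ocs.openshift.io/storage="true":NoSchedule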
Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify that the Data Foundation dashboard is available. 2.3. Creating OpenShift Data Foundation cluster on any platform Prerequisites Ensure that all the requirements in the Requirements for installing OpenShift Data Foundation using local storage devices section are met. Ensure that the disk type is SSD, which is the only supported disk type. If you want to use Multus networking, you must create network attachment definitions (NADs) before deployment, which are later attached to the cluster. For more information, see Multi network plug-in (Multus) support and Creating network attachment definitions . Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, perform the following: Select Full Deployment for the Deployment type option. Select the Create a new StorageClass using the local storage devices option. Optional: Select the Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides a high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select the Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click Next . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . The local volume set name appears as the default value for the storage class name. You can change the name. Select one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you create with three or more nodes is spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see the knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . The flexible scaling feature is enabled at the time of deployment and cannot be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed if at least 24 CPUs and 72 GiB of RAM is available. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVMe .
Expand the Advanced section and set the following options: Volume Mode Block is selected as the default value. Device Type Select one or more device types from the dropdown list. Disk Size Set a minimum size of 100 GB and the maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of Persistent Volumes (PVs) that you can create on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click Next . A pop-up to confirm the creation of the LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource-constrained environment with minimum resources that are lower than recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click Next . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Select one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volumes (block only) using an encryption-enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace .
Upload the respective PEM-encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM-encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier, generated above, to be used for encryption and decryption. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Select one of the following: Default (SDN) If you are using a single network. Custom (Multus) If you are using multiple network interfaces. Select a Public Network Interface from the dropdown. Select a Cluster Network Interface from the dropdown. Note If you are using only one additional network interface, select the single NetworkAttachmentDefinition , that is, ocs-public-cluster for the Public Network Interface and leave the Cluster Network Interface blank. Click Next . In the Review and create page, review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains present in the deployment, you can configure the Ceph monitor count based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the Ceph monitor count. You can use the Configure option in the alert to configure the Ceph monitor count. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources . Verify that the Status of the StorageCluster is Ready and has a green tick mark next to it. To verify whether flexible scaling is enabled on your storage cluster, perform the following steps (for arbiter mode, flexible scaling is disabled): In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System Click ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section.
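The same two keys can also be read from the command line instead of the YAML tab; a small sketch, assuming the default resource name ocs-storagecluster:
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}{"\n"}{.status.failureDomain}{"\n"}'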
If flexible scaling is true and failureDomain is set to host, flexible scaling feature is enabled: To verify that all the components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation installation . To verify the multi networking (Multus), see Verifying the Multus networking . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide and follow the instructions in the "Scaling storage of bare metal OpenShift Data Foundation cluster" section. 2.4. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the Multicloud Object Gateway is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . Verify the Multus networking . 2.4.1. Verifying the state of the pods Procedure Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) MON rook-ceph-mon-* (3 pods distributed across storage nodes) MGR rook-ceph-mgr-* (1 pod on any storage node) MDS rook-ceph-mds-ocs-storagecluster-cephfilesystem-* (2 pods distributed across storage nodes) RGW rook-ceph-rgw-ocs-storagecluster-cephobjectstore-* (1 pod on any storage node) CSI cephfs csi-cephfsplugin-* (1 pod on each storage node) csi-cephfsplugin-provisioner-* (2 pods distributed across storage nodes) rbd csi-rbdplugin-* (1 pod on each storage node) csi-rbdplugin-provisioner-* (2 pods distributed across storage nodes) rook-ceph-crashcollector rook-ceph-crashcollector-* (1 pod on each storage node) OSD rook-ceph-osd-* (1 pod for each device) 2.4.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.4.3. Verifying the Multicloud Object Gateway is healthy Procedure In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. 
In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the object service dashboard, see Monitoring OpenShift Data Foundation . Important The Multicloud Object Gateway only has a single copy of the database (NooBaa DB). This means that if the NooBaa DB PVC gets corrupted and cannot be recovered, it can result in the total loss of applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking a backup of the NooBaa DB PVC regularly. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 2.4.4. Verifying that the specific storage classes exist Procedure Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-storagecluster-ceph-rbd ocs-storagecluster-cephfs openshift-storage.noobaa.io ocs-storagecluster-ceph-rgw 2.4.5. Verifying the Multus networking To determine if Multus is working in your cluster, verify the Multus networking. Procedure Based on your Network configuration choices, the OpenShift Data Foundation operator will do one of the following: If only a single NetworkAttachmentDefinition (for example, ocs-public-cluster ) was selected for the Public Network Interface, then the traffic between the application pods and the OpenShift Data Foundation cluster will happen on this network. Additionally, the cluster will be self-configured to also use this network for the replication and rebalancing traffic between OSDs. If both NetworkAttachmentDefinitions (for example, ocs-public and ocs-cluster ) were selected for the Public Network Interface and the Cluster Network Interface respectively during the Storage Cluster installation, then client storage traffic will be on the public network, and the cluster network will carry the replication and rebalancing traffic between OSDs. To verify the network configuration is correct, complete the following: In the OpenShift console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for network in the spec section and ensure the configuration is correct for your network interface choices. This example is for separating the client storage traffic from the storage replication traffic. Sample output: To verify the network configuration is correct using the command line interface, run the following commands: Sample output: Confirm that the OSD pods are using the correct network In the openshift-storage namespace, use one of the OSD pods to verify that the pod has connectivity to the correct networks. This example is for separating the client storage traffic from the storage replication traffic. Note Only the OSD pods will connect to both Multus public and cluster networks if both are created. All other OCS pods will connect to the Multus public network. Sample output: To confirm from the command line interface that the OSD pods are using the correct network, run the following command (requires the jq utility): Sample output: | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"spec: flexibleScaling: true [...] status: failureDomain: host",
"[..] spec: [..] network: ipFamily: IPv4 provider: multus selectors: cluster: openshift-storage/ocs-cluster public: openshift-storage/ocs-public [..]",
"oc get storagecluster ocs-storagecluster -n openshift-storage -o=jsonpath='{.spec.network}{\"\\n\"}'",
"{\"ipFamily\":\"IPv4\",\"provider\":\"multus\",\"selectors\":{\"cluster\":\"openshift-storage/ocs-cluster\",\"public\":\"openshift-storage/ocs-public\"}}",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}'",
"[{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.129.2.30\" ], \"default\": true, \"dns\": {} },{ \"name\": \"openshift-storage/ocs-cluster\", \"interface\": \"net1\", \"ips\": [ \"192.168.2.1\" ], \"mac\": \"e2:04:c6:81:52:f1\", \"dns\": {} },{ \"name\": \"openshift-storage/ocs-public\", \"interface\": \"net2\", \"ips\": [ \"192.168.1.1\" ], \"mac\": \"ee:a0:b6:a4:07:94\", \"dns\": {} }]",
"oc get -n openshift-storage USD(oc get pods -n openshift-storage -o name -l app=rook-ceph-osd | grep 'osd-0') -o=jsonpath='{.metadata.annotations.k8s\\.v1\\.cni\\.cncf\\.io/network-status}{\"\\n\"}' | jq -r '.[].name'",
"openshift-sdn openshift-storage/ocs-cluster openshift-storage/ocs-public"
] | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_on_any_platform/deploy-using-local-storage-devices-bm |
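Referring back to the verification steps in section 2.4, equivalent command-line spot checks can be run instead of the web console. This is a hedged sketch and the exact resource names may differ on your cluster.
# Check pod state, Ceph cluster health, NooBaa status, and the ODF storage classes
oc get pods -n openshift-storage
oc get cephcluster -n openshift-storage
oc get noobaa -n openshift-storage
oc get storageclass | grep -E 'ocs-storagecluster|noobaa'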
37.3.2. Useful Websites | 37.3.2. Useful Websites http://www.redhat.com/mirrors/LDP/HOWTO/Module-HOWTO/index.html - Linux Loadable Kernel Module HOWTO from the Linux Documentation Project. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kernel_modules-additional_resources-useful_websites |
Chapter 15. IPPool [whereabouts.cni.cncf.io/v1alpha1] | Chapter 15. IPPool [whereabouts.cni.cncf.io/v1alpha1] Description IPPool is the Schema for the ippools API Type object 15.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object IPPoolSpec defines the desired state of IPPool 15.1.1. .spec Description IPPoolSpec defines the desired state of IPPool Type object Required allocations range Property Type Description allocations object Allocations is the set of allocated IPs for the given range. Its` indices are a direct mapping to the IP with the same index/offset for the pool's range. allocations{} object IPAllocation represents metadata about the pod/container owner of a specific IP range string Range is a RFC 4632/4291-style string that represents an IP address and prefix length in CIDR notation 15.1.2. .spec.allocations Description Allocations is the set of allocated IPs for the given range. Its` indices are a direct mapping to the IP with the same index/offset for the pool's range. Type object 15.1.3. .spec.allocations{} Description IPAllocation represents metadata about the pod/container owner of a specific IP Type object Required id podref Property Type Description id string ifname string podref string 15.2. API endpoints The following API endpoints are available: /apis/whereabouts.cni.cncf.io/v1alpha1/ippools GET : list objects of kind IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools DELETE : delete collection of IPPool GET : list objects of kind IPPool POST : create an IPPool /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} DELETE : delete an IPPool GET : read the specified IPPool PATCH : partially update the specified IPPool PUT : replace the specified IPPool 15.2.1. /apis/whereabouts.cni.cncf.io/v1alpha1/ippools HTTP method GET Description list objects of kind IPPool Table 15.1. HTTP responses HTTP code Reponse body 200 - OK IPPoolList schema 401 - Unauthorized Empty 15.2.2. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools HTTP method DELETE Description delete collection of IPPool Table 15.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind IPPool Table 15.3. HTTP responses HTTP code Reponse body 200 - OK IPPoolList schema 401 - Unauthorized Empty HTTP method POST Description create an IPPool Table 15.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.5. Body parameters Parameter Type Description body IPPool schema Table 15.6. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 201 - Created IPPool schema 202 - Accepted IPPool schema 401 - Unauthorized Empty 15.2.3. /apis/whereabouts.cni.cncf.io/v1alpha1/namespaces/{namespace}/ippools/{name} Table 15.7. Global path parameters Parameter Type Description name string name of the IPPool HTTP method DELETE Description delete an IPPool Table 15.8. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 15.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified IPPool Table 15.10. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified IPPool Table 15.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.12. 
HTTP responses HTTP code Reponse body 200 - OK IPPool schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified IPPool Table 15.13. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 15.14. Body parameters Parameter Type Description body IPPool schema Table 15.15. HTTP responses HTTP code Reponse body 200 - OK IPPool schema 201 - Created IPPool schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_apis/ippool-whereabouts-cni-cncf-io-v1alpha1 |
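A brief usage sketch for the endpoints listed above; the jq filter is an illustrative assumption.
# List IPPool objects across all namespaces, then query the raw API endpoint directly
oc get ippools.whereabouts.cni.cncf.io -A
oc get --raw /apis/whereabouts.cni.cncf.io/v1alpha1/ippools | jq -r '.items[].spec.range'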
Appendix B. Configuring a Host for PCI Passthrough | Appendix B. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. # vi /etc/default/grub ... GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on ... Note If intel_iommu=on or an AMD IOMMU is detected, you can try adding iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option doesn't work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: # grub2-mkconfig -o /boot/grub2/grub.cfg # reboot | [
"vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on",
"vi /etc/default/grub ... GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on ...",
"vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"reboot"
] | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/Configuring_a_Host_for_PCI_Passthrough_SM_remoteDB_deploy |
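After the reboot, it is worth confirming that the kernel actually enabled the IOMMU before assigning devices; a minimal check sketch (the exact messages differ between Intel and AMD hosts):
# Look for IOMMU/DMAR initialization messages and confirm that IOMMU groups were created
dmesg | grep -i -e DMAR -e IOMMU
ls /sys/kernel/iommu_groups/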
1.2. Logical Volumes | 1.2. Logical Volumes Volume management creates a layer of abstraction over physical storage, allowing you to create logical storage volumes. This provides much greater flexibility in a number of ways than using physical storage directly. With a logical volume, you are not restricted to physical disk sizes. In addition, the hardware storage configuration is hidden from the software so it can be resized and moved without stopping applications or unmounting file systems. This can reduce operational costs. Logical volumes provide the following advantages over using physical storage directly: Flexible capacity When using logical volumes, file systems can extend across multiple disks, since you can aggregate disks and partitions into a single logical volume. Resizeable storage pools You can extend logical volumes or reduce logical volumes in size with simple software commands, without reformatting and repartitioning the underlying disk devices. Online data relocation To deploy newer, faster, or more resilient storage subsystems, you can move data while your system is active. Data can be rearranged on disks while the disks are in use. For example, you can empty a hot-swappable disk before removing it. Convenient device naming Logical storage volumes can be managed in user-defined and custom named groups. Disk striping You can create a logical volume that stripes data across two or more disks. This can dramatically increase throughput. Mirroring volumes Logical volumes provide a convenient way to configure a mirror for your data. Volume Snapshots Using logical volumes, you can take device snapshots for consistent backups or to test the effect of changes without affecting the real data. The implementation of these features in LVM is described in the remainder of this document. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/logical_volume_manager_administration/logical_volumes |
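To make the advantages above concrete, a minimal sketch of the usual LVM workflow follows; device names, sizes, and volume names are placeholders.
# Initialize two disks, pool them into a volume group, and carve out a striped logical volume
pvcreate /dev/sdb /dev/sdc
vgcreate example_vg /dev/sdb /dev/sdc
lvcreate --size 10G --stripes 2 --name striped_lv example_vg
# Grow the volume later without repartitioning, then resize the file system on it with the appropriate tool
lvextend --size +5G /dev/example_vg/striped_lv
# Take a snapshot for a consistent backup or a throwaway test copy
lvcreate --size 1G --snapshot --name striped_lv_snap /dev/example_vg/striped_lv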
Chapter 4. Upgrading OpenShift Virtualization | Chapter 4. Upgrading OpenShift Virtualization You can manually upgrade to the next minor version of OpenShift Virtualization and monitor the status of an update by using the web console. 4.1. About upgrading OpenShift Virtualization 4.1.1. How OpenShift Virtualization upgrades work You can upgrade to the next minor version of OpenShift Virtualization by using the OpenShift Container Platform web console to change the channel of your Operator subscription. You can enable automatic z-stream updates during OpenShift Virtualization installation. Updates are delivered via the Marketplace Operator , which is deployed during OpenShift Container Platform installation. The Marketplace Operator makes external Operators available to your cluster. The amount of time an update takes to complete depends on your network connection. Most automatic updates complete within fifteen minutes. 4.1.2. How OpenShift Virtualization upgrades affect your cluster Upgrading does not interrupt virtual machine workloads. Virtual machine pods are not restarted or migrated during an upgrade. If you need to update the virt-launcher pod, you must restart or live migrate the virtual machine. Note Each virtual machine has a virt-launcher pod that runs the virtual machine instance. The virt-launcher pod runs an instance of libvirt , which is used to manage the virtual machine process. Upgrading does not interrupt network connections. Data volumes and their associated persistent volume claims are preserved during upgrade. Important If you have virtual machines running that cannot be live migrated, they might block an OpenShift Container Platform cluster upgrade. This includes virtual machines that use hostpath provisioner storage or SR-IOV network interfaces. As a workaround, you can reconfigure the virtual machines so that they can be powered off automatically during a cluster upgrade. Remove the evictionStrategy: LiveMigrate field and set the runStrategy field to Always . 4.2. Upgrading OpenShift Virtualization to the next minor version You can manually upgrade OpenShift Virtualization to the next minor version by using the OpenShift Container Platform web console to change the channel of your Operator subscription. Prerequisites Log in to the cluster as a user with the cluster-admin role. Procedure Access the OpenShift Container Platform web console and navigate to Operators Installed Operators . Click OpenShift Virtualization to open the Operator Details page. Click the Subscription tab to open the Subscription Overview page. In the Channel pane, click the pencil icon on the right side of the version number to open the Change Subscription Update Channel window. Select stable . This ensures that you install the version of OpenShift Virtualization that is compatible with your OpenShift Container Platform version. Click Save . Check the status of the upgrade by navigating to Operators Installed Operators . You can also check the status by running the following oc command: $ oc get csv -n openshift-cnv 4.3. Monitoring upgrade status The best way to monitor OpenShift Virtualization upgrade status is to watch the cluster service version (CSV) PHASE . You can also monitor the CSV conditions in the web console or by running the command provided here. Note The PHASE and conditions values are approximations that are based on available information. Prerequisites Log in to the cluster as a user with the cluster-admin role. Install the OpenShift CLI ( oc ).
Procedure Run the following command: $ oc get csv Review the output, checking the PHASE field. For example: Example output VERSION REPLACES PHASE 2.5.0 kubevirt-hyperconverged-operator.v2.4.3 Installing 2.4.3 Replacing Optional: Monitor the aggregated status of all OpenShift Virtualization component conditions by running the following command: $ oc get hco -n openshift-cnv kubevirt-hyperconverged \ -o=jsonpath='{range .status.conditions[*]}{.type}{"\t"}{.status}{"\t"}{.message}{"\n"}{end}' A successful upgrade results in the following output: Example output ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully 4.4. Additional resources Cluster service versions (CSVs) Configuring virtual machine eviction strategy | [
"oc get csv -n openshift-cnv",
"oc get csv",
"VERSION REPLACES PHASE 2.5.0 kubevirt-hyperconverged-operator.v2.4.3 Installing 2.4.3 Replacing",
"oc get hco -n openshift-cnv kubevirt-hyperconverged -o=jsonpath='{range .status.conditions[*]}{.type}{\"\\t\"}{.status}{\"\\t\"}{.message}{\"\\n\"}{end}'",
"ReconcileComplete True Reconcile completed successfully Available True Reconcile completed successfully Progressing False Reconcile completed successfully Degraded False Reconcile completed successfully Upgradeable True Reconcile completed successfully"
] | https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/openshift_virtualization/upgrading-openshift-virtualization |
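For the live-migration workaround described in section 4.1.2, a hedged sketch of the equivalent oc patches; the virtual machine name is a placeholder and the field paths assume the standard VirtualMachine spec layout, so verify them against your VM definition first.
# Switch the VM to runStrategy: Always (clearing spec.running, which is mutually exclusive with runStrategy)
oc patch vm <vm_name> --type merge -p '{"spec":{"running":null,"runStrategy":"Always"}}'
# Remove the LiveMigrate eviction strategy so the VM can be powered off during the cluster upgrade
oc patch vm <vm_name> --type json -p '[{"op":"remove","path":"/spec/template/spec/evictionStrategy"}]'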
16.3. SSM Resources | 16.3. SSM Resources For more information on SSM, see the following resources: The ssm man page provides good descriptions and examples, as well as details on all of the commands and options that are too specific to be documented here. Local documentation for SSM is stored in the doc/ directory. The SSM wiki can be accessed at http://storagemanager.sourceforge.net/index.html . You can subscribe to the mailing list at https://lists.sourceforge.net/lists/listinfo/storagemanager-devel and access the mailing list archives at http://sourceforge.net/mailarchive/forum.php?forum_name=storagemanager-devel . The mailing list is where developers communicate. There is currently no user mailing list, so feel free to post questions there as well. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/ssm-resources |