title | content | commands | url
---|---|---|---|
Chapter 2. Preparing your Red Hat OpenShift Container Platform environment for Service Telemetry Framework | Chapter 2. Preparing your Red Hat OpenShift Container Platform environment for Service Telemetry Framework To prepare your Red Hat OpenShift Container Platform environment for Service Telemetry Framework (STF), you must plan for persistent storage, adequate resources, event storage, and network considerations: Ensure that you have persistent storage available in your Red Hat OpenShift Container Platform cluster for a production-grade deployment. For more information, see Section 2.2, "Persistent volumes" . Ensure that enough resources are available to run the Operators and the application containers. For more information, see Section 2.3, "Resource allocation" . Ensure that you have a fully connected network environment. For more information, see Section 2.4, "Network considerations for Service Telemetry Framework" . 2.1. Observability Strategy in Service Telemetry Framework Service Telemetry Framework (STF) does not include storage backends and alerting tools. STF uses community operators to deploy Prometheus, Alertmanager, Grafana, and Elasticsearch. STF makes requests to these community operators to create instances of each application configured to work with STF. Instead of having Service Telemetry Operator create custom resource requests, you can use your own deployments of these applications or other compatible applications, and scrape the metrics from the Smart Gateways for delivery to your own Prometheus-compatible system for telemetry storage. If you set the observabilityStrategy to none , storage backends are not deployed, so STF does not require persistent storage. 2.2. Persistent volumes Service Telemetry Framework (STF) uses persistent storage in Red Hat OpenShift Container Platform to request persistent volumes so that Prometheus and Elasticsearch can store metrics and events. When you enable persistent storage through the Service Telemetry Operator, the Persistent Volume Claims (PVCs) requested in an STF deployment result in an access mode of RWO (ReadWriteOnce). If your environment contains pre-provisioned persistent volumes, ensure that volumes with the RWO access mode are available in the Red Hat OpenShift Container Platform default configured storageClass . Additional resources For more information about configuring persistent storage for Red Hat OpenShift Container Platform, see Understanding persistent storage. For more information about recommended configurable storage technology in Red Hat OpenShift Container Platform, see Recommended configurable storage technology . For more information about configuring persistent storage for Prometheus in STF, see the section called "Configuring persistent storage for Prometheus" . For more information about configuring persistent storage for Elasticsearch in STF, see the section called "Configuring persistent storage for Elasticsearch" . 2.3. Resource allocation To enable the scheduling of pods within the Red Hat OpenShift Container Platform infrastructure, you need resources for the components that are running. If you do not allocate enough resources, pods remain in a Pending state because they cannot be scheduled. The amount of resources that you require to run Service Telemetry Framework (STF) depends on your environment and the number of nodes and clouds that you want to monitor. Additional resources For recommendations about sizing for metrics collection, see Service Telemetry Framework Performance and Scaling . 
For information about sizing requirements for Elasticsearch, see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-managing-compute-resources.html . 2.4. Network considerations for Service Telemetry Framework You can deploy Service Telemetry Framework (STF) only in a fully connected network environment. You cannot deploy STF in disconnected Red Hat OpenShift Container Platform environments or network proxy environments. | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/service_telemetry_framework_1.5/assembly-preparing-your-ocp-environment-for-stf_assembly |
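The observabilityStrategy setting described above is applied on the ServiceTelemetry custom resource. A minimal sketch of disabling the storage backends follows; the CR name default, the service-telemetry namespace, and the infra.watch/v1beta1 API group are assumptions based on a typical STF deployment, so verify them against your cluster before applying.

```bash
# Sketch only: confirm the CR name, namespace, and API group first, for example
# with "oc get servicetelemetry -A".
oc apply -f - <<EOF
apiVersion: infra.watch/v1beta1
kind: ServiceTelemetry
metadata:
  name: default
  namespace: service-telemetry
spec:
  observabilityStrategy: none
EOF
```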
Chapter 18. Managing Out of Memory states | Chapter 18. Managing Out of Memory states Out-of-memory (OOM) is a computing state where all available memory, including swap space, has been allocated. Normally this causes the system to panic and stop functioning as expected. The following instructions help you avoid OOM states on your system. Prerequisites You have root permissions on the system. 18.1. Changing the Out of Memory value The /proc/sys/vm/panic_on_oom file contains the value that controls Out of Memory (OOM) behavior. When the file contains 1 , the kernel panics on OOM and stops functioning as expected. The default value is 0 , which instructs the kernel to call the oom_killer() function when the system is in an OOM state. Usually, oom_killer() terminates unnecessary processes, which allows the system to survive. You can change the value of /proc/sys/vm/panic_on_oom . Procedure Display the current value of /proc/sys/vm/panic_on_oom . To change the value in /proc/sys/vm/panic_on_oom : Echo the new value to /proc/sys/vm/panic_on_oom . Note It is recommended that you make the Real-Time kernel panic on OOM ( 1 ). Otherwise, when the system encounters an OOM state, it is no longer deterministic. Verification Display the value of /proc/sys/vm/panic_on_oom . Verify that the displayed value matches the value specified. 18.2. Prioritizing processes to kill when in an Out of Memory state You can prioritize the processes that get terminated by the oom_killer() function. This can ensure that high-priority processes keep running during an OOM state. Each process has a directory, /proc/ PID . Each directory includes the following files: oom_adj - Valid scores for oom_adj are in the range -16 to +15. This value is used to calculate the performance footprint of the process, using an algorithm that also takes into account how long the process has been running, among other factors. oom_score - Contains the result of the algorithm calculated using the value in oom_adj . In an Out of Memory state, the oom_killer() function terminates processes with the highest oom_score . You can prioritize the processes to terminate by editing the oom_adj file for the process. Prerequisites Know the process ID (PID) of the process you want to prioritize. Procedure Display the current oom_score for a process. Display the contents of oom_adj for the process. Edit the value in oom_adj . Verification Display the current oom_score for the process. Verify that the displayed value is lower than the original value. 18.3. Disabling the Out of Memory killer for a process You can disable the oom_killer() function for a process by setting oom_adj to the reserved value of -17 . This keeps the process alive, even in an OOM state. Procedure Set the value in oom_adj to -17 . Verification Display the current oom_score for the process. Verify that the displayed value is 0 . | [
"cat /proc/sys/vm/panic_on_oom 0",
"echo 1 > /proc/sys/vm/panic_on_oom",
"cat /proc/sys/vm/panic_on_oom 1",
"cat /proc/12465/oom_score 79872",
"cat /proc/12465/oom_adj 13",
"echo -5 > /proc/12465/oom_adj",
"cat /proc/12465/oom_score 78",
"echo -17 > /proc/12465/oom_adj",
"cat /proc/12465/oom_score 0"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_real_time/9/html/optimizing_rhel_9_for_real_time_for_low_latency_operation/assembly_managing-out-of-memory-states_optimizing-rhel9-for-real-time-for-low-latency-operation |
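The echo commands shown above change /proc/sys/vm/panic_on_oom only until the next reboot. A small sketch of making the setting persistent with sysctl; the drop-in file name is an arbitrary example.

```bash
# Persist vm.panic_on_oom across reboots (run as root; the file name is an example).
echo 'vm.panic_on_oom = 1' > /etc/sysctl.d/99-panic-on-oom.conf
sysctl -p /etc/sysctl.d/99-panic-on-oom.conf   # apply immediately and print the new value
```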
probe::ipmib.FragFails | probe::ipmib.FragFails Name probe::ipmib.FragFails - Count datagrams fragmented unsuccessfully Synopsis ipmib.FragFails Values op Value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global FragFails counter (equivalent to SNMP's MIB IPSTATS_MIB_FRAGFAILS). | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-fragfails |
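A minimal sketch of using this probe from the command line; the 10-second window and the output format are arbitrary choices, not part of the tapset.

```bash
# Count IPSTATS_MIB_FRAGFAILS increments for 10 seconds, then print the total.
stap -e 'global fails
probe ipmib.FragFails { fails += op }
probe timer.s(10) { printf("FragFails increments: %d\n", fails); exit() }'
```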
1.2.3. Administrative Controls | 1.2.3. Administrative Controls Administrative controls define the human factors of security. It involves all levels of personnel within an organization and determines which users have access to what resources and information by such means as: Training and awareness Disaster preparedness and recovery plans Personnel recruitment and separation strategies Personnel registration and accounting | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-sgs-ov-ctrl-admin |
Chapter 14. Project networking with IPv6 | Chapter 14. Project networking with IPv6 14.1. IPv6 subnet options When you create IPv6 subnets in a Red Hat OpenStack Platform (RHOSP) project network, you can specify address mode and Router Advertisement mode to obtain a particular result as described in the following table. Note RHOSP does not support IPv6 prefix delegation from an external entity in ML2/OVN deployments. You must obtain the Global Unicast Address prefix from your external prefix delegation router and set it by using the subnet-range argument during creation of an IPv6 subnet. For example: RA Mode Address Mode Result ipv6_ra_mode=not set ipv6-address-mode=slaac The instance receives an IPv6 address from the external router (not managed by OpenStack Networking) using Stateless Address Autoconfiguration (SLAAC). Note OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for simplified IPv6 networking, as hosts self-assign addresses based on the base 64-bits plus the MAC address. You cannot create subnets with a different netmask and address_assign_type of SLAAC. ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful . ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from the external router using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=not-set The instance uses SLAAC to receive an IPv6 address from OpenStack Networking ( radvd ). ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=not-set The instance receives an IPv6 address and optional information from an external DHCPv6 server using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=not-set The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC, and optional information from an external DHCPv6 server using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=slaac The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC . ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateless . 14.2. Create an IPv6 subnet using Stateful DHCPv6 You can create an IPv6 subnet in a Red Hat OpenStack Platform (RHOSP) project network. For example, you can create an IPv6 subnet using Stateful DHCPv6 in a network named database-servers in a project named QA. Procedure Retrieve the project ID of the project where you want to create the IPv6 subnet. These values are unique between OpenStack deployments, so your values differ from the values in this example. Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of the network where you want to host the IPv6 subnet: Include the project ID, network name, and ipv6 address mode in the openstack subnet create command: Validation steps Validate this configuration by reviewing the network list. 
Note that the entry for database-servers now reflects the newly created IPv6 subnet: Result As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6 address when added to the database-servers subnet: Additional resources To find the Router Advertisement mode and address mode combinations to achieve a particular result in an IPv6 subnet, see IPv6 subnet options in the Networking Guide . | [
"openstack subnet create --subnet-range 2002:c000:200::64 --no-dhcp --gateway 2002:c000:2fe:: --dns-nameserver 2002:c000:2fe:: --network provider provider-subnet-2002:c000:200::",
"openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 25837c567ed5458fbb441d39862e1399 | QA | | f59f631a77264a8eb0defc898cb836af | admin | | 4e2e1951e70643b5af7ed52f3ff36539 | demo | | 8561dff8310e4cd8be4b6fd03dc8acf5 | services | +----------------------------------+----------+",
"openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | | +--------------------------------------+------------------+-------------------------------------------------------------+",
"openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network database-servers --subnet-range fdf8:f53b:82e4::53/125 subnet_name Created a new subnet: +-------------------+--------------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------------+ | allocation_pools | {\"start\": \"fdf8:f53b:82e4::52\", \"end\": \"fdf8:f53b:82e4::56\"} | | cidr | fdf8:f53b:82e4::53/125 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | fdf8:f53b:82e4::51 | | host_routes | | | id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | ip_version | 6 | | ipv6_address_mode | dhcpv6-stateful | | ipv6_ra_mode | | | name | | | network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | | tenant_id | 25837c567ed5458fbb441d39862e1399 | +-------------------+--------------------------------------------------------------+",
"openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | cdfc3398-997b-46eb-9db1-ebbd88f7de05 fdf8:f53b:82e4::50/125 | | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | +--------------------------------------+------------------+-------------------------------------------------------------+",
"openstack server list +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | fad04b7a-75b5-4f96-aed9-b40654b56e03 | corp-vm-01 | ACTIVE | - | Running | database-servers=fdf8:f53b:82e4::52 | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/networking_guide/proj-network-ipv6_rhosp-network |
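The commands above cover the stateful DHCPv6 case. A sketch of the SLAAC row from the table (ipv6_ra_mode=slaac, ipv6-address-mode=slaac) is shown here; the network name, prefix, and subnet name are examples only.

```bash
# Example values: adjust the network, prefix, and subnet name to your environment.
openstack subnet create --ip-version 6 \
  --ipv6-ra-mode slaac --ipv6-address-mode slaac \
  --network database-servers \
  --subnet-range fd00:db8:1::/64 \
  database-servers-slaac-subnet
```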
2.2.7.2. NFS and Postfix | 2.2.7.2. NFS and Postfix Never put the mail spool directory, /var/spool/postfix/ , on an NFS shared volume. Because NFSv2 and NFSv3 do not maintain control over user and group IDs, two or more users can have the same UID, and receive and read each other's mail. Note With NFSv4 using Kerberos, this is not the case, since the SECRPC_GSS kernel module does not utilize UID-based authentication. However, it is still considered good practice not to put the mail spool directory on NFS shared volumes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-securing_postfix-nfs_and_postfix |
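A quick way to confirm where the mail spool actually lives is sketched below; it assumes the default Postfix spool location.

```bash
# Show the mount point and filesystem type backing the Postfix spool directory.
# An "nfs" or "nfs4" FSTYPE here indicates the configuration warned about above.
findmnt --target /var/spool/postfix --output TARGET,SOURCE,FSTYPE
```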
Chapter 13. Valgrind | Chapter 13. Valgrind Valgrind is an instrumentation framework that ships with a number of tools for profiling applications. It can be used to detect various memory errors and memory-management problems, such as the use of uninitialized memory or an improper allocation and freeing of memory, or to identify the use of improper arguments in system calls. For a complete list of profiling tools that are distributed with the Red Hat Developer Toolset version of Valgrind , see Table 13.1, "Tools Distributed with Valgrind for Red Hat Developer Toolset" . Valgrind profiles an application by rewriting it and instrumenting the rewritten binary. This allows you to profile your application without the need to recompile it, but it also makes Valgrind significantly slower than other profilers, especially when performing extremely detailed runs. It is therefore not suited to debugging time-specific issues or kernel-space debugging. Red Hat Developer Toolset is distributed with Valgrind 3.19.0 . This version is more recent than the version included in the previous release of Red Hat Developer Toolset and provides numerous bug fixes and enhancements. Table 13.1. Tools Distributed with Valgrind for Red Hat Developer Toolset Name Description Memcheck Detects memory management problems by intercepting system calls and checking all read and write operations. Cachegrind Identifies the sources of cache misses by simulating the level 1 instruction cache ( I1 ), level 1 data cache ( D1 ), and unified level 2 cache ( L2 ). Callgrind Generates a call graph representing the function call history. Helgrind Detects synchronization errors in multithreaded C, C++, and Fortran programs that use POSIX threading primitives. DRD Detects errors in multithreaded C and C++ programs that use POSIX threading primitives or any other threading concepts that are built on top of these POSIX threading primitives. Massif Monitors heap and stack usage. 13.1. Installing Valgrind In Red Hat Developer Toolset, Valgrind is provided by the devtoolset-12-valgrind package and is automatically installed with devtoolset-12-perftools . For detailed instructions on how to install Red Hat Developer Toolset and related packages to your system, see Section 1.5, "Installing Red Hat Developer Toolset" . Note Note that if you use Valgrind in combination with the GNU Debugger , it is recommended that you use the version of GDB that is included in Red Hat Developer Toolset to ensure that all features are fully supported. 13.2. Using Valgrind To run any of the Valgrind tools on a program you want to profile: See Table 13.1, "Tools Distributed with Valgrind for Red Hat Developer Toolset" for a list of tools that are distributed with Valgrind . The argument of the --tool command line option must be specified in lower case, and if this option is omitted, Valgrind uses Memcheck by default. For example, to run Cachegrind on a program to identify the sources of cache misses: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries in preference to the Red Hat Enterprise Linux system equivalents. This allows you to run a shell session with Red Hat Developer Toolset Valgrind as default: Note To verify the version of Valgrind you are using at any point: Red Hat Developer Toolset's valgrind executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset Valgrind : 13.3. 
Additional Resources For more information about Valgrind and its features, see the resources listed below. Installed Documentation valgrind (1) - The manual page for the valgrind utility provides detailed information on how to use Valgrind. To display the manual page for the version included in Red Hat Developer Toolset: Valgrind Documentation - HTML documentation for Valgrind is located at /opt/rh/devtoolset-12/root/usr/share/doc/devtoolset-12-valgrind-3.19.0/html/index.html . Online Documentation Red Hat Enterprise Linux 7 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 7 provides more information about Valgrind and its Eclipse plug-in. Red Hat Enterprise Linux 7 Performance Tuning Guide - The Performance Tuning Guide for Red Hat Enterprise Linux 7 provides more detailed information about using Valgrind to profile applications. See Also Chapter 1, Red Hat Developer Toolset - An overview of Red Hat Developer Toolset and more information on how to install it on your system. Chapter 11, memstomp - Instructions on using the memstomp utility to identify calls to library functions with overlapping memory regions that are not allowed by various standards. Chapter 12, SystemTap - An introduction to the SystemTap tool and instructions on how to use it to monitor the activities of a running system. Chapter 14, OProfile - Instructions on using the OProfile tool to determine which sections of code consume the greatest amount of CPU time and why. Chapter 15, Dyninst - Instructions on using the Dyninst library to instrument a user-space executable. | [
"scl enable devtoolset-12 'valgrind --tool= tool program argument ...'",
"scl enable devtoolset-12 'valgrind --tool=cachegrind program argument ...'",
"scl enable devtoolset-12 'bash'",
"which valgrind",
"valgrind --version",
"scl enable devtoolset-12 'man valgrind'"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-Valgrind |
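As a usage sketch building on the commands above, a common Memcheck invocation adds full leak checking; the program name and arguments are placeholders.

```bash
# Memcheck is the default tool; --leak-check=full reports each leak with a stack trace.
scl enable devtoolset-12 'valgrind --tool=memcheck --leak-check=full ./my_program arg1 arg2'
```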
Chapter 59. Installation and Booting | Chapter 59. Installation and Booting FIPS mode unsupported when installing from an HTTPS kickstart source Installation images do not support FIPS mode during installation with an HTTPS kickstart source. As a consequence, it is currently impossible to install a system with the fips=1 and inst.ks=https://<location>/ks.cfg options added to the command line. (BZ# 1341280 ) PXE boot with UEFI and IPv6 displays the GRUB2 shell instead of the operating system selection menu When the Pre-Boot Execution Environment (PXE) starts on a client configured with UEFI and IPv6, the boot menu configured in the /boot/grub/grub.cfg file is not displayed. After a timeout, the GRUB2 shell is displayed instead of the configured operating system selection menu. (BZ#1154226) Specifying a driverdisk partition with non-alphanumeric characters generates an invalid output Kickstart file When installing Red Hat Enterprise Linux using the Anaconda installer, you can add a driver disk by including a path to the partition containing the driver disk in the Kickstart file. At present, if you specify the partition by a LABEL or CDLABEL that has non-alphanumeric characters in it, for example: the output Kickstart file created during the Anaconda installation will contain incorrect information. To work around this problem, use only alphanumeric characters when specifying the partition by LABEL or CDLABEL. (BZ# 1452770 ) The Scientific Computing variant is missing packages required for certain security profiles When installing the Red Hat Enterprise Linux for Scientific Computing variant, also known as Compute Node, you can select a security profile similarly to any other variant's installation process. However, since this variant is meant to be minimal, it is missing packages which are required by certain profiles, such as United States Government Configuration Baseline . If you select this profile, the installer displays a warning that some packages are missing. The warning allows you to continue the installation despite missing packages, which can be used to work around the problem. The installation will complete normally. However, note that if you install the system despite the warning, and then attempt to run a security scan after the installation, the scan will report failing rules due to these missing packages. This behavior is expected. (BZ#1462647) | [
"driverdisk \"CDLABEL=Fedora 23 x86_64:/path/to/rpm\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/known_issues_installation_and_booting |
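A sketch of the documented workaround: keep the LABEL or CDLABEL purely alphanumeric before referencing it in the Kickstart file. The label and path below are examples, not values taken from the release note.

```bash
# Kickstart entry using an alphanumeric-only label (example label and path).
driverdisk "CDLABEL=DriverDisk1:/path/to/rpm"
```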
C.3. Application Restrictions | C.3. Application Restrictions There are aspects of virtualization that make it unsuitable for certain types of applications. Applications with high I/O throughput requirements should use KVM's paravirtualized drivers (virtio drivers) for fully-virtualized guests. Without the virtio drivers, certain applications may be unpredictable under heavy I/O loads. The following applications should be avoided due to high I/O requirements: kdump server netdump server You should carefully evaluate applications and tools that heavily utilize I/O or those that require real-time performance. Consider the virtio drivers or PCI device assignment for increased I/O performance. For more information on the virtio drivers for fully virtualized guests, see Chapter 5, KVM Paravirtualized (virtio) Drivers . For more information on PCI device assignment, see Chapter 16, Guest Virtual Machine Device Configuration . Applications suffer a small performance loss from running in virtualized environments. The performance benefits of virtualization through consolidating to newer and faster hardware should be evaluated against the potential application performance issues associated with using virtualization. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-virtualization_restrictions-application_restrictions |
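To evaluate whether a guest already uses the virtio drivers mentioned above, a quick check from the host is sketched here; the domain name is a placeholder.

```bash
# Dump the guest definition and look for virtio disk and network device models.
virsh dumpxml my-guest | grep -i virtio
```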
Chapter 6. Clustering | Chapter 6. Clustering New SNMP agent to query a Pacemaker cluster The new pcs_snmp_agent agent allows you to query a Pacemaker cluster for data by means of SNMP. This agent provides basic information about a cluster, its nodes, and its resources. For information on configuring this agent, see the pcs_snmp_agent (8) man page and the High Availability Add-On Reference. (BZ#1367808) Support for Red Hat Enterprise Linux High Availability clusters on Amazon Web Services Red Hat Enterprise Linux 7.5 supports High Availability clusters of virtual machines (VMs) on Amazon Web Services (AWS). For information on configuring a Red Hat Enterprise Linux High Availability Cluster on AWS, see https://access.redhat.com/articles/3354781 . (BZ#1451776) Support for Red Hat Enterprise Linux High Availability clusters on Microsoft Azure Red Hat Enterprise Linux 7.5 supports High Availability clusters of virtual machines (VMs) in Microsoft Azure. For information on configuring a Red Hat Enterprise Linux High Availability cluster on Microsoft Azure, see https://access.redhat.com/articles/3252491 . (BZ#1476009) Unfencing is done in resource cleanup only if relevant parameters changed Previously, in a cluster that included a fence device that supports unfencing, such as fence_scsi or fence_mpath , a general resource cleanup or a cleanup of any stonith resource would always result in unfencing, including a restart of all resources. Now, unfencing is only done if the parameters to the device that supports unfencing changed. (BZ#1427648) The pcsd port is now configurable The port on which pcsd is listening can now be changed in the pcsd configuration file, and pcs can now communicate with pcsd using a custom port. This feature is primarily for the use of pcsd inside containers. (BZ# 1415197 ) Fencing and resource agents are now supported by AWS Python libraries and a CLI client With this enhancement, Amazon Web Services Python libraries (python-boto3, python-botocore, and python-s3transfer) and a CLI client (awscli) have been added to support fencing and resource agents in high availability setups. (BZ#1512020) Fencing in HA setups is now supported by Azure Python libraries With this enhancement, Azure Python libraries (python-isodate, python-jwt, python-adal, python-msrest, python-msrestazure, and python-azure-sdk) have been added to support fencing in high availability setups. (BZ#1512021) New features added to the sbd binary. The sbd binary used as a command line tool now provides the following additional features: Easy verification of the functionality of a watchdog device Ability to query a list of available watchdog devices For information on the sbd command line tool, see the sbd (8) man page. (BZ#1462002) sbd rebased to version 1.3.1 The sbd package has been rebased to upstream version 1.3.1. This version brings the following changes: Adds commands to test and query watchdog devices Overhauls the command-line options and configuration file Properly handles off actions instead of reboot (BZ# 1499864 ) Cluster status now shows by default when a resource action is pending Pacemaker supports a record-pending option that previously defaulted to false , meaning that cluster status would only show the current status of a resource (started or stopped). Now, record-pending defaults to true , meaning that cluster status may also show when a resource is in the process of starting or stopping. 
(BZ#1461976) clufter rebased to version 0.77.0 The clufter packages have been upgraded to upstream version 0.77.0, which provides a number of bug fixes, new features, and user experience enhancements over the previous version. Among the notable updates are the following: When using clufter to translate an existing configuration with the pcs2pcscmd-needle command in the case where the corosync.conf equivalent omits the cluster_name option (which is not the case with standard `pcs`-initiated configurations), the contained pcs cluster setup invocation no longer causes cluster misconfiguration with the name of the first given node interpreted as the required cluster name specification. The same invocation will now include the --encryption 0|1 switch when available, in order to reflect the original configuration accurately. In any script-like output sequence such as that produced with the ccs2pcscmd and pcs2pcscmd families of clufter commands, the intended shell interpreter is now emitted in a valid form, so that the respective commented line can be honored by the operating system. (BZ#1381531) The clufter tool now also covers some additional recently added means of configuration as facilitated with pcs (heuristics for a quorum device, meta attributes for top-level bundle resource units) when producing the sequence of configuring pcs commands to reflect existing configurations when applicable. For information on the capabilities of clufter , see the clufter(1) man page or the output of the clufter -h command. For examples of clufter usage, see the following Red Hat Knowledgebase article: https://access.redhat.com/articles/2810031 . (BZ# 1509381 ) Support for Sybase ASE failover The Red Hat High Availability Add-On now provides support for Sybase ASE failover through the ocf:heartbeat:sybaseASE resource. To display the parameters you can configure for this resource, run the pcs resource describe ocf:heartbeat:sybaseASE command. For more information on this agent, see the ocf_heartbeat_sybaseASE (7) man page. (BZ#1436189) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_clustering |
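A sketch of the new sbd command-line features mentioned above; the watchdog device path is an example, and a working watchdog resets the node during the test, so run it only on a node you can afford to reboot.

```bash
# List watchdog devices that sbd can use.
sbd query-watchdog

# Verify that a specific watchdog device actually fires.
# WARNING: if the device works, this test reboots the machine.
sbd -w /dev/watchdog test-watchdog
```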
Chapter 3. Deploy standalone Multicloud Object Gateway | Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing the Local Storage Operator. Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 3.2. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . 
Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.3. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . 
Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_on_any_platform/deploy-standalone-multicloud-object-gateway |
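Alongside the web-console verification above, a CLI spot-check of the standalone Multicloud Object Gateway pods can be sketched as follows.

```bash
# Expect the operator and noobaa-* pods listed in the verification table to be Running.
oc get pods -n openshift-storage | grep -E 'noobaa|ocs-operator|odf-|rook-ceph-operator|csi-addons'
```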
Chapter 4. Installing a cluster on Alibaba Cloud with customizations | Chapter 4. Installing a cluster on Alibaba Cloud with customizations In OpenShift Container Platform version 4.15, you can install a customized cluster on infrastructure that the installation program provisions on Alibaba Cloud. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and ensured success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . 
To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. 
Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.4.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Alibaba Cloud. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select alibabacloud as the platform to target. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Provide a descriptive name for your cluster. Installing the cluster into Alibaba Cloud requires that the Cloud Credential Operator (CCO) operate in manual mode. Modify the install-config.yaml file to set the credentialsMode parameter to Manual : Example install-config.yaml configuration file with credentialsMode set to Manual apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled ... 1 Add this line to set the credentialsMode to Manual . 
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Alibaba Cloud 4.4.2. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: $ openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. 4.4.3. Creating credentials for OpenShift Container Platform components with the ccoctl tool You can use the OpenShift Container Platform Cloud Credential Operator (CCO) utility to automate the creation of Alibaba Cloud RAM users and policies for each in-cluster component. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Created a RAM user with sufficient permission to create the OpenShift Container Platform cluster. Added the AccessKeyID ( access_key_id ) and AccessKeySecret ( access_key_secret ) of that RAM user into the ~/.alibabacloud/credentials file on your local computer. Procedure Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: $ oc adm release extract \ --from=$RELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: Run the following command to use the tool: $ ccoctl alibabacloud create-ram-users \ --name <name> \ 1 --region=<alibaba_region> \ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 3 --output-dir=<path_to_ccoctl_output_dir> 4 1 Specify the name used to tag any cloud resources that are created for tracking. 2 Specify the Alibaba Cloud region in which cloud resources will be created. 3 Specify the directory containing the files for the component CredentialsRequest objects. 4 Specify the directory where the generated component credentials secrets will be placed. 
Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Example output 2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml ... Note A RAM user can have up to two AccessKeys at the same time. If you run ccoctl alibabacloud create-ram-users more than twice, the previously generated manifests secret becomes stale and you must reapply the newly generated secrets. Verify that the OpenShift Container Platform secrets are created: $ ls <path_to_ccoctl_output_dir>/manifests Example output openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml You can verify that the RAM users and policies are created by querying Alibaba Cloud. For more information, refer to Alibaba Cloud documentation on listing RAM users and policies. Copy the generated credential files to the target manifests directory: $ cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation_dir>/manifests/ where: <path_to_ccoctl_output_dir> Specifies the directory created by the ccoctl alibabacloud create-ram-users command. <path_to_installation_dir> Specifies the directory in which the installation program creates files. 4.4.4. Sample customized install-config.yaml file for Alibaba Cloud You can customize the installation configuration file ( install-config.yaml ) to specify more details about your cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' 8 sshKey: | ssh-rsa AAAA... 9 1 Required. The installation program prompts you for a cluster name. 2 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 3 Optional. 
Specify parameters for machine pools that do not define their own platform configuration. 4 Required. The installation program prompts you for the region to deploy the cluster to. 5 Optional. Specify an existing resource group where the cluster should be installed. 8 Required. The installation program prompts you for the pull secret. 9 Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. 6 7 Optional. These are example vswitchID values. 4.4.5. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . 
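After the cluster is installed, one optional way to confirm how the status.noProxy field was populated is to query the cluster Proxy object directly; this is a general illustration rather than a required step and assumes you are logged in to the cluster with oc as an administrator:
$ oc get proxy/cluster -o jsonpath='{.status.noProxy}'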
Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: $ ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster and that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: $ ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.6.
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: $ tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: $ echo $PATH Verification After you install the OpenShift CLI, it is available using the oc command: $ oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: $ echo $PATH Verification Verify your installation by using an oc command: $ oc <command> 4.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: $ export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: $ oc whoami Example output system:admin 4.8. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation.
You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: $ cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: $ oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 4.9. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.10. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: amd64 hyperthreading: Enabled",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl alibabacloud create-ram-users --name <name> \\ 1 --region=<alibaba_region> \\ 2 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 3 --output-dir=<path_to_ccoctl_output_dir> 4",
"2022/02/11 16:18:26 Created RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:27 Ready for creating new ram policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy 2022/02/11 16:18:27 RAM policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has created 2022/02/11 16:18:28 Policy user1-alicloud-openshift-machine-api-alibabacloud-credentials-policy-policy has attached on user user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Created access keys for RAM User: user1-alicloud-openshift-machine-api-alibabacloud-credentials 2022/02/11 16:18:29 Saved credentials configuration to: user1-alicloud/manifests/openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"ls <path_to_ccoctl_output_dir>/manifests",
"openshift-cluster-csi-drivers-alibaba-disk-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-alibabacloud-credentials-credentials.yaml",
"cp ./<path_to_ccoctl_output_dir>/manifests/*credentials.yaml ./<path_to_installation>dir>/manifests/",
"apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_alibaba/installing-alibaba-customizations |
Chapter 3. Setting the simple content access mode from the Hybrid Cloud Console | Chapter 3. Setting the simple content access mode from the Hybrid Cloud Console When you create a new manifest, simple content access is enabled by default. Simple content access gives you the ability to consume content on your systems without strict entitlement enforcement. However, if you have an environment where simple content access is not appropriate for all manifests, the Subscriptions administrator can change the simple content access mode for each manifest, as needed. For Red Hat Satellite version 6.5 and later, you can set the simple content access mode for a particular manifest directly from the Manifests page on the Hybrid Cloud Console. You can check whether simple content access is enabled for a manifest from the setting that is displayed in the Simple Content Access column in the Manifests table. For Red Hat Satellite versions older than 6.5, the Simple Content Access column displays N/A because simple content access is not available for those versions of Satellite. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You are connected to a Red Hat Satellite Server. You have Red Hat Satellite 6 or later. You have the Subscriptions administrator role in the role-based access control (RBAC) system for the Red Hat Hybrid Cloud Console. Procedure To set the simple content access mode to Enabled or Disabled , complete the following steps: From the Hybrid Cloud Console home page, click Services > Subscriptions and Spend > Manifests . From the Manifests page, locate the name of the manifest that you want to modify in the Manifests table. In the Simple Content Access column, set the simple content access switch to Enabled or Disabled , depending on the mode that you want to use for that manifest. Note The ability to set the simple content access mode for manifests is only available in Satellite versions 6.5 and later. Note In Satellite 6.13, the simple content access status is set on the Satellite organization, not on the manifest. Importing a manifest does not change your organization's simple content access status. Additional resources For information about enabling simple content access on a Satellite organization, see Managing Organizations . | null | https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/creating_and_managing_manifests_for_a_connected_satellite_server/proc-setting-simple-content-access-console |
4.4. Packages Required to Install a Replica | 4.4. Packages Required to Install a Replica Replica package requirements are the same as server package requirements. See Section 2.2, "Packages Required to Install an IdM Server" . | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/replica-required-packages |
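In practice, preparing a replica host therefore means installing the same server packages described in that section; as a rough illustration on a Red Hat Enterprise Linux 7 system (treat the package list in Section 2.2 as authoritative, and add the ipa-server-dns package only if the replica will also serve DNS):
# yum install ipa-server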
Chapter 2. Onboarding certification partners | Chapter 2. Onboarding certification partners Use the Red Hat Customer Portal to create a new account if you are a new partner, or use your existing Red Hat account if you are a current partner to onboard with Red Hat for certifying your products. 2.1. Onboarding existing certification partners Prerequisites You have an existing Red Hat account. Procedure Access Red Hat Customer Portal and click Log in . Enter your Red Hat login or email address and click Next . Then, use either of the following options: Log in with company single sign-on Log in with Red Hat account From the menu bar on the header, click your avatar to view the account details. If an account number is associated with your account, then you can proceed with the certification process. If an account number is not associated with your account, then first contact the Red Hat global customer service team to raise a request for creating a new account number. After you get an account number, contact the certification team to proceed with the certification process. 2.2. Onboarding new certification partners Creating a new Red Hat account is the first step for onboarding new certification partners. Procedure Access Red Hat Customer Portal and click Register . Enter the following details to create a new Red Hat account: Select Corporate in the Account Type field. If you have created a Corporate type account and require an account number, contact the Red Hat global customer service team . Note Ensure that you create a company account and not a personal account. The account created during this step is also used to sign in to the Red Hat Ecosystem Catalog when working with certification requests. Choose a Red Hat login and password. Important If your login ID is associated with multiple accounts, then do not use your contact email as the login ID as this can cause issues during login. Also, you cannot change your login ID once created. Enter your Personal information and Company information . Click Create My Account . A new Red Hat account is created. Contact your Ecosystem Partner Management (EPM) representative, if available. Otherwise, contact the certification team to proceed with the certification process. 2.3. Exploring the Partner landing page After logging in to Red Hat Partner Connect , the partner landing page opens. This page serves as a centralized hub, offering access to various partner services and capabilities that enable you to start working on opportunities. The Partner landing page offers the following services: Software certification Red Hat Demo platform Red Hat Partner Training Portal Access library of marketing, sales & technical content Partner support Email preference center Partner subscriptions User account As part of the Red Hat partnership, partners receive access to various Red Hat systems and services that enable them to create shared value with Red Hat for our joint customers. Go to the Software certification tile and click Certify your software to begin your product certification journey. The personalized Product certification dashboard opens. | null | https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_workflow_guide/onboarding-certification-partners_introduction-to-red-hat-openstack-services-on-openshift-rhoso-certification-program |
5.96. grep | 5.96. grep 5.96.1. RHBA-2012:0352 - grep bug fix update An updated grep package that fixes one bug is now available for Red Hat Enterprise Linux 6. The grep utility searches through textual input for lines which contain a match to a specified pattern and then prints the matching lines. GNU grep utilities include grep, egrep and fgrep. Bug Fix BZ# 741452 Previously, the grep utility was not able to handle the EPIPE error. If a SIGPIPE signal was blocked by the shell, grep kept continuously printing error messages. An upstream patch has been applied to address this problem, so that grep exits on the first EPIPE error and prints only one error message. All users of grep are advised to upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/grep |
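As a rough illustration of the failure mode, not taken from the erratum itself and assuming a bash-like shell in which an ignored SIGPIPE is inherited by child processes, a pipeline whose reader exits early can exercise the EPIPE path:
trap '' PIPE
seq 1000000 | grep 0 | head -n 1
Because head exits after printing one line, later writes by grep fail with EPIPE; with this update, grep prints a single error message and exits instead of repeating the message.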
28.2.2. Connecting the Installation System to a VNC Listener | 28.2.2. Connecting the Installation System to a VNC Listener To have the installation system automatically connect to a VNC client, first start the client in listening mode. On Red Hat Enterprise Linux systems, use the -listen option to run vncviewer as a listener. In a terminal window, enter the command: Note By default, vncviewer uses TCP port 5500 when in listening mode. The firewall must be configured to permit connections to this port from other systems. Choose System Administration Firewall . Select Other ports , and Add . Enter 5500 in the Port(s) field, and specify tcp as the Protocol . Once the listening client is active, start the installation system and set the VNC options at the boot: prompt. In addition to vnc and vncpassword options, use the vncconnect option to specify the name or IP address of the system that has the listening client. To specify the TCP port for the listener, add a colon and the port number to the name of the system. For example, to connect to a VNC client on the system desktop.mydomain.com on the port 5500, enter the following at the boot: prompt: | [
"vncviewer -listen",
"linux vnc vncpassword= qwerty vncconnect= desktop.mydomain.com:5500"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-remoteaccess-installation-vnclistener |
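If you prefer to open the listener port from the command line rather than the Firewall configuration tool, an equivalent rule on the Red Hat Enterprise Linux 6 system that runs vncviewer might look like the following; this is a generic illustration that assumes the default iptables service is in use:
# iptables -I INPUT -p tcp --dport 5500 -j ACCEPT
# service iptables save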
probe::scsi.iodispatching | probe::scsi.iodispatching Name probe::scsi.iodispatching - SCSI mid-layer dispatched low-level SCSI command Synopsis Values device_state_str The current state of the device, as a string dev_id The scsi device id channel The channel number data_direction The data_direction specifies whether this command is from/to the device 0 (DMA_BIDIRECTIONAL), 1 (DMA_TO_DEVICE), 2 (DMA_FROM_DEVICE), 3 (DMA_NONE) lun The lun number request_bufflen The request buffer length host_no The host number device_state The current state of the device data_direction_str Data direction, as a string req_addr The current struct request pointer, as a number request_buffer The request buffer address | [
"scsi.iodispatching"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-scsi-iodispatching |
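As an illustration of how the probe and its variables might be used, the following one-line SystemTap session prints the documented values for each dispatched command; the script is a generic sketch rather than an example taken from the tapset reference:
$ stap -e 'probe scsi.iodispatching { printf("%d:%d:%d:%d dir=%s state=%s\n", host_no, channel, dev_id, lun, data_direction_str, device_state_str) }'
Press Ctrl+C to stop tracing.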
Chapter 3. Serving models on the multi-model serving platform | Chapter 3. Serving models on the multi-model serving platform For deploying small and medium-sized models, OpenShift AI includes a multi-model serving platform that is based on the ModelMesh component. On the multi-model serving platform, multiple models can be deployed from the same model server and share the server resources. 3.1. Configuring model servers 3.1.1. Enabling the multi-model serving platform To use the multi-model serving platform, you must first enable the platform. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. Your cluster administrator has not edited the OpenShift AI dashboard configuration to disable the ability to select the multi-model serving platform, which uses the ModelMesh component. For more information, see Dashboard configuration options . Procedure In the left menu of the OpenShift AI dashboard, click Settings Cluster settings . Locate the Model serving platforms section. Select the Multi-model serving platform checkbox. Click Save changes . 3.1.2. Adding a custom model-serving runtime for the multi-model serving platform A model-serving runtime adds support for a specified set of model frameworks and the model formats supported by those frameworks. By default, the multi-model serving platform includes the OpenVINO Model Server runtime. You can also add your own custom runtime if the default runtime does not meet your needs, such as supporting a specific model format. As an administrator, you can use the Red Hat OpenShift AI dashboard to add and enable a custom model-serving runtime. You can then choose the custom runtime when you create a new model server for the multi-model serving platform. Note Red Hat does not provide support for custom runtimes. You are responsible for ensuring that you are licensed to use any custom runtimes that you add, and for correctly configuring and maintaining them. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You are familiar with how to add a model server to your project . When you have added a custom model-serving runtime, you must configure a new model server to use the runtime. You have reviewed the example runtimes in the kserve/modelmesh-serving repository. You can use these examples as starting points. However, each runtime requires some further modification before you can deploy it in OpenShift AI. The required modifications are described in the following procedure. Note OpenShift AI includes the OpenVINO Model Server runtime by default. You do not need to add this runtime to OpenShift AI. Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled. To add a custom runtime, choose one of the following options: To start with an existing runtime (for example the OpenVINO Model Server runtime), click the action menu (...) to the existing runtime and then click Duplicate . To add a new custom runtime, click Add serving runtime . In the Select the model serving platforms this runtime supports list, select Multi-model serving platform . Note The multi-model serving platform supports only the REST protocol. Therefore, you cannot change the default value in the Select the API protocol this runtime supports list. 
Optional: If you started a new runtime (rather than duplicating an existing one), add your code by choosing one of the following options: Upload a YAML file Click Upload files . In the file browser, select a YAML file on your computer. This file might be the one of the example runtimes that you downloaded from the kserve/modelmesh-serving repository. The embedded YAML editor opens and shows the contents of the file that you uploaded. Enter YAML code directly in the editor Click Start from scratch . Enter or paste YAML code directly in the embedded editor. The YAML that you paste might be copied from one of the example runtimes in the kserve/modelmesh-serving repository. Optional: If you are adding one of the example runtimes in the kserve/modelmesh-serving repository, perform the following modifications: In the YAML editor, locate the kind field for your runtime. Update the value of this field to ServingRuntime . In the kustomization.yaml file in the kserve/modelmesh-serving repository, take note of the newName and newTag values for the runtime that you want to add. You will specify these values in a later step. In the YAML editor for your custom runtime, locate the containers.image field. Update the value of the containers.image field in the format newName:newTag , based on the values that you previously noted in the kustomization.yaml file. Some examples are shown. Nvidia Triton Inference Server image: nvcr.io/nvidia/tritonserver:23.04-py3 Seldon Python MLServer image: seldonio/mlserver:1.3.2 TorchServe image: pytorch/torchserve:0.7.1-cpu In the metadata.name field, ensure that the value of the runtime you are adding is unique (that is, the value doesn't match a runtime that you have already added). Optional: To configure a custom display name for the runtime that you are adding, add a metadata.annotations.openshift.io/display-name field and specify a value, as shown in the following example: Note If you do not configure a custom display name for your runtime, OpenShift AI shows the value of the metadata.name field. Click Add . The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the runtime you added is automatically enabled. Optional: To edit your custom runtime, click the action menu (...) and select Edit . Verification The custom model-serving runtime that you added is shown in an enabled state on the Serving runtimes page. Additional resources To learn how to configure a model server that uses a custom model-serving runtime that you have added, see Adding a model server to your data science project . 3.1.3. Adding a tested and verified model-serving runtime for the multi-model serving platform In addition to preinstalled and custom model-serving runtimes, you can also use Red Hat tested and verified model-serving runtimes such as the NVIDIA Triton Inference Server to support your needs. For more information about Red Hat tested and verified runtimes, see Tested and verified runtimes for Red Hat OpenShift AI . You can use the Red Hat OpenShift AI dashboard to add and enable the NVIDIA Triton Inference Server runtime and then choose the runtime when you create a new model server for the multi-model serving platform. Prerequisites You have logged in to OpenShift AI as a user with OpenShift AI administrator privileges. You are familiar with how to add a model server to your project . After you have added a tested and verified model-serving runtime, you must configure a new model server to use the runtime. 
Procedure From the OpenShift AI dashboard, click Settings > Serving runtimes . The Serving runtimes page opens and shows the model-serving runtimes that are already installed and enabled. To add a tested and verified runtime, click Add serving runtime . In the Select the model serving platforms this runtime supports list, select Multi-model serving platform . Note The multi-model serving platform supports only the REST protocol. Therefore, you cannot change the default value in the Select the API protocol this runtime supports list. Click Start from scratch . Enter or paste the following YAML code directly in the embedded editor. In the metadata.name field, make sure that the value of the runtime you are adding does not match a runtime that you have already added). Optional: To use a custom display name for the runtime that you are adding, add a metadata.annotations.openshift.io/display-name field and specify a value, as shown in the following example: Note If you do not configure a custom display name for your runtime, OpenShift AI shows the value of the metadata.name field. Click Create . The Serving runtimes page opens and shows the updated list of runtimes that are installed. Observe that the runtime you added is automatically enabled. Optional: To edit the runtime, click the action menu (...) and select Edit . Verification The model-serving runtime that you added is shown in an enabled state on the Serving runtimes page. Additional resources To learn how to configure a model server that uses a model-serving runtime that you have added, see Adding a model server to your data science project . 3.1.4. Adding a model server for the multi-model serving platform When you have enabled the multi-model serving platform, you must configure a model server to deploy models. If you require extra computing power for use with large datasets, you can assign accelerators to your model server. Note In OpenShift AI 2.18, Red Hat supports only NVIDIA and AMD GPU accelerators for model serving. Prerequisites You have logged in to Red Hat OpenShift AI. If you use OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have created a data science project that you can add a model server to. You have enabled the multi-model serving platform. If you want to use a custom model-serving runtime for your model server, you have added and enabled the runtime. See Adding a custom model-serving runtime . If you want to use graphics processing units (GPUs) with your model server, you have enabled GPU support in OpenShift AI. If you use NVIDIA GPUs, see Enabling NVIDIA GPUs . If you use AMD GPUs, see AMD GPU integration . Procedure In the left menu of the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to configure a model server for. A project details page opens. Click the Models tab. Perform one of the following actions: If you see a Multi-model serving platform tile, click Add model server on the tile. If you do not see any tiles, click the Add model server button. The Add model server dialog opens. In the Model server name field, enter a unique name for the model server. From the Serving runtime list, select a model-serving runtime that is installed and enabled in your OpenShift AI deployment. 
Note If you are using a custom model-serving runtime with your model server and want to use GPUs, you must ensure that your custom runtime supports GPUs and is appropriately configured to use them. In the Number of model replicas to deploy field, specify a value. From the Model server size list, select a value. Optional: If you selected Custom in the preceding step, configure the following settings in the Model server size section to customize your model server: In the CPUs requested field, specify the number of CPUs to use with your model server. Use the list beside this field to specify the value in cores or millicores. In the CPU limit field, specify the maximum number of CPUs to use with your model server. Use the list beside this field to specify the value in cores or millicores. In the Memory requested field, specify the requested memory for the model server in gibibytes (Gi). In the Memory limit field, specify the maximum memory limit for the model server in gibibytes (Gi). Optional: From the Accelerator list, select an accelerator. If you selected an accelerator in the preceding step, specify the number of accelerators to use. Optional: In the Model route section, select the Make deployed models available through an external route checkbox to make your deployed models available to external clients. Optional: In the Token authentication section, select the Require token authentication checkbox to require token authentication for your model server. To finish configuring token authentication, perform the following actions: In the Service account name field, enter a service account name for which the token will be generated. The generated token is created and displayed in the Token secret field when the model server is configured. To add an additional service account, click Add a service account and enter another service account name. Click Add . The model server that you configured appears on the Models tab for the project, in the Models and model servers list. Optional: To update the model server, click the action menu ( ... ) beside the model server and select Edit model server . 3.1.5. Deleting a model server When you no longer need a model server to host models, you can remove it from your data science project. Note When you remove a model server, you also remove the models that are hosted on that model server. As a result, the models are no longer available to applications. Prerequisites You have created a data science project and an associated model server. You have notified the users of the applications that access the models that the models will no longer be available. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. Procedure From the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project from which you want to delete the model server. A project details page opens. Click the Models tab. Click the action menu ( ... ) beside the project whose model server you want to delete and then click Delete model server . The Delete model server dialog opens. Enter the name of the model server in the text field to confirm that you intend to delete it. Click Delete model server . Verification The model server that you deleted is no longer displayed on the Models tab for the project. 3.2. Working with deployed models 3.2.1. 
Deploying a model by using the multi-model serving platform You can deploy trained models on OpenShift AI to enable you to test and implement them into intelligent applications. Deploying a model makes it available as a service that you can access by using an API. This enables you to return predictions based on data inputs. When you have enabled the multi-model serving platform, you can deploy models on the platform. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users ) in OpenShift. You have enabled the multi-model serving platform. You have created a data science project and added a model server. You have access to S3-compatible object storage. For the model that you want to deploy, you know the associated folder path in your S3-compatible object storage bucket. Procedure In the left menu of the OpenShift AI dashboard, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that you want to deploy a model in. A project details page opens. Click the Models tab. Click Deploy model . Configure properties for deploying your model as follows: In the Model name field, enter a unique name for the model that you are deploying. From the Model framework list, select a framework for your model. Note The Model framework list shows only the frameworks that are supported by the model-serving runtime that you specified when you configured your model server. To specify the location of the model you want to deploy from S3-compatible object storage, perform one of the following sets of actions: To use an existing connection Select Existing connection . From the Name list, select a connection that you previously defined. In the Path field, enter the folder path that contains the model in your specified data source. To use a new connection To define a new connection that your model can access, select New connection . In the Add connection modal, select a Connection type . The S3 compatible object storage and URI options are pre-installed connection types. Additional options might be available if your OpenShift AI administrator added them. The Add connection form opens with fields specific to the connection type that you selected. Enter the connection detail fields. (Optional) Customize the runtime parameters in the Configuration parameters section: Modify the values in Additional serving runtime arguments to define how the deployed model behaves. Modify the values in Additional environment variables to define variables in the model's environment. Click Deploy . Verification Confirm that the deployed model is shown on the Models tab for the project, and on the Model Serving page of the dashboard with a checkmark in the Status column. 3.2.2. Viewing a deployed model To analyze the results of your work, you can view a list of deployed models on Red Hat OpenShift AI. You can also view the current statuses of deployed models and their endpoints. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. Procedure From the OpenShift AI dashboard, click Model Serving . The Deployed models page opens. For each model, the page shows details such as the model name, the project in which the model is deployed, the model-serving runtime that the model uses, and the deployment status. 
Optional: For a given model, click the link in the Inference endpoint column to see the inference endpoints for the deployed model. Verification A list of previously deployed data science models is displayed on the Deployed models page. 3.2.3. Updating the deployment properties of a deployed model You can update the deployment properties of a model that has been deployed previously. For example, you can change the model's connection and name. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed a model on OpenShift AI. Procedure From the OpenShift AI dashboard, click Model Serving . The Deployed models page opens. Click the action menu ( ... ) beside the model whose deployment properties you want to update and click Edit . The Edit model dialog opens. Update the deployment properties of the model as follows: In the Model name field, enter a new, unique name for your model. From the Model servers list, select a model server for your model. From the Model framework list, select a framework for your model. Note The Model framework list shows only the frameworks that are supported by the model-serving runtime that you specified when you configured your model server. Optionally, update the connection by specifying an existing connection or by creating a new connection. Click Redeploy . Verification The model whose deployment properties you updated is displayed on the Model Serving page of the dashboard. 3.2.4. Deleting a deployed model You can delete models you have previously deployed. This enables you to remove deployed models that are no longer required. Prerequisites You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed a model. Procedure From the OpenShift AI dashboard, click Model serving . The Deployed models page opens. Click the action menu ( ... ) beside the deployed model that you want to delete and click Delete . The Delete deployed model dialog opens. Enter the name of the deployed model in the text field to confirm that you intend to delete it. Click Delete deployed model . Verification The model that you deleted is no longer displayed on the Deployed models page. 3.3. Configuring monitoring for the multi-model serving platform The multi-model serving platform includes model and model server metrics for the ModelMesh component. ModelMesh generates its own set of metrics and does not rely on the underlying model-serving runtimes to provide them. The set of metrics that ModelMesh generates includes metrics for model request rates and timings, model loading and unloading rates, times and sizes, internal queuing delays, capacity and usage, cache state, and least recently-used models. For more information, see ModelMesh metrics . After you have configured monitoring, you can view metrics for the ModelMesh component. Prerequisites You have cluster administrator privileges for your OpenShift cluster. You have downloaded and installed the OpenShift command-line interface (CLI). See Installing the OpenShift CLI . You are familiar with creating a config map for monitoring a user-defined workflow. You will perform similar steps in this procedure. You are familiar with enabling monitoring for user-defined projects in OpenShift. You will perform similar steps in this procedure. 
You have assigned the monitoring-rules-view role to users that will monitor metrics. Procedure In a terminal window, if you are not already logged in to your OpenShift cluster as a cluster administrator, log in to the OpenShift CLI as shown in the following example: Define a ConfigMap object in a YAML file called uwm-cm-conf.yaml with the following contents: The user-workload-monitoring-config object configures the components that monitor user-defined projects. Observe that the retention time is set to the recommended value of 15 days. Apply the configuration to create the user-workload-monitoring-config object. Define another ConfigMap object in a YAML file called uwm-cm-enable.yaml with the following contents: The cluster-monitoring-config object enables monitoring for user-defined projects. Apply the configuration to create the cluster-monitoring-config object. 3.4. Viewing model-serving runtime metrics for the multi-model serving platform After a cluster administrator has configured monitoring for the multi-model serving platform, non-admin users can use the OpenShift web console to view model-serving runtime metrics for the ModelMesh component. Prerequisites A cluster administrator has configured monitoring for the multi-model serving platform. You have been assigned the monitoring-rules-view role. For more information, see Granting users permission to configure monitoring for user-defined projects . You are familiar with how to monitor project metrics in the OpenShift web console. For more information, see Monitoring your project metrics . Procedure Log in to the OpenShift web console. Switch to the Developer perspective. In the left menu, click Observe . As described in Monitoring your project metrics , use the web console to run queries for modelmesh_* metrics. 3.5. Monitoring model performance In the multi-model serving platform, you can view performance metrics for all models deployed on a model server and for a specific model that is deployed on the model server. 3.5.1. Viewing performance metrics for all models on a model server You can monitor the following metrics for all the models that are deployed on a model server: HTTP requests per 5 minutes - The number of HTTP requests that have failed or succeeded for all models on the server. Average response time (ms) - For all models on the server, the average time it takes the model server to respond to requests. CPU utilization (%) - The percentage of the CPU's capacity that is currently being used by all models on the server. Memory utilization (%) - The percentage of the system's memory that is currently being used by all models on the server. You can specify a time range and a refresh interval for these metrics to help you determine, for example, when the peak usage hours are and how the models are performing at a specified time. Prerequisites You have installed Red Hat OpenShift AI. On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled. You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed models on the multi-model serving platform. Procedure From the OpenShift AI dashboard navigation menu, click Data Science Projects . The Data Science Projects page opens. Click the name of the project that contains the data science models that you want to monitor. In the project details page, click the Models tab. 
In the row for the model server that you are interested in, click the action menu (...) and then select View model server metrics . Optional: On the metrics page for the model server, set the following options: Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days. Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day. Scroll down to view data graphs for HTTP requests per 5 minutes, average response time, CPU utilization, and memory utilization. Verification On the metrics page for the model server, the graphs provide data on performance metrics. 3.5.2. Viewing HTTP request metrics for a deployed model You can view a graph that illustrates the HTTP requests that have failed or succeeded for a specific model that is deployed on the multi-model serving platform. Prerequisites You have installed Red Hat OpenShift AI. On the OpenShift cluster where OpenShift AI is installed, user workload monitoring is enabled. The following dashboard configuration options are set to the default values as shown: For more information, see Dashboard configuration options . You have logged in to Red Hat OpenShift AI. If you are using OpenShift AI groups, you are part of the user group or admin group (for example, rhoai-users or rhoai-admins ) in OpenShift. You have deployed models on the multi-model serving platform. Procedure From the OpenShift AI dashboard navigation menu, select Model Serving . On the Deployed models page, select the model that you are interested in. Optional: On the Endpoint performance tab, set the following options: Time range - Specifies how long to track the metrics. You can select one of these values: 1 hour, 24 hours, 7 days, and 30 days. Refresh interval - Specifies how frequently the graphs on the metrics page are refreshed (to show the latest data). You can select one of these values: 15 seconds, 30 seconds, 1 minute, 5 minutes, 15 minutes, 30 minutes, 1 hour, 2 hours, and 1 day. Verification The Endpoint performance tab shows a graph of the HTTP metrics for the model. | [
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: mlserver-0.x annotations: openshift.io/display-name: MLServer",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: annotations: enable-route: \"true\" name: modelmesh-triton labels: opendatahub.io/dashboard: \"true\" spec: annotations: opendatahub.io/modelServingSupport: '[\"multi\"x`x`]' prometheus.kserve.io/path: /metrics prometheus.kserve.io/port: \"8002\" builtInAdapter: env: - name: CONTAINER_MEM_REQ_BYTES value: \"268435456\" - name: USE_EMBEDDED_PULLER value: \"true\" memBufferBytes: 134217728 modelLoadingTimeoutMillis: 90000 runtimeManagementPort: 8001 serverType: triton containers: - args: - -c - 'mkdir -p /models/_triton_models; chmod 777 /models/_triton_models; exec tritonserver \"--model-repository=/models/_triton_models\" \"--model-control-mode=explicit\" \"--strict-model-config=false\" \"--strict-readiness=false\" \"--allow-http=true\" \"--allow-grpc=true\" ' command: - /bin/sh image: nvcr.io/nvidia/tritonserver@sha256:xxxxx name: triton resources: limits: cpu: \"1\" memory: 2Gi requests: cpu: \"1\" memory: 2Gi grpcDataEndpoint: port:8001 grpcEndpoint: port:8085 multiModel: true protocolVersions: - grpc-v2 - v2 supportedModelFormats: - autoSelect: true name: onnx version: \"1\" - autoSelect: true name: pytorch version: \"1\" - autoSelect: true name: tensorflow version: \"1\" - autoSelect: true name: tensorflow version: \"2\" - autoSelect: true name: tensorrt version: \"7\" - autoSelect: false name: xgboost version: \"1\" - autoSelect: true name: python version: \"1\"",
"apiVersion: serving.kserve.io/v1alpha1 kind: ServingRuntime metadata: name: modelmesh-triton annotations: openshift.io/display-name: Triton ServingRuntime",
"oc login <openshift_cluster_url> -u <admin_username> -p <password>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: logLevel: debug retention: 15d",
"oc apply -f uwm-cm-conf.yaml",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true",
"oc apply -f uwm-cm-enable.yaml",
"disablePerformanceMetrics:false disableKServeMetrics:false"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_ai_self-managed/2.18/html/serving_models/serving-small-and-medium-sized-models_model-serving |
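To complement the dashboard-centric steps above, the following sketch shows what a request against one of the endpoints listed in the Inference endpoint column might look like. It assumes a runtime that implements the KServe V2 REST predict protocol (for example, OpenVINO Model Server), an externally exposed route, and placeholder values for the host, model name, and payload file:
$ curl -k https://<inference_endpoint_host>/v2/models/<model_name>/infer -H 'Content-Type: application/json' -d @input.json
If token authentication is enabled on the model server, also pass the generated service account token in an Authorization: Bearer header.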
Chapter 3. Updating Red Hat build of OpenJDK 11 for Microsoft Windows using the archive | Chapter 3. Updating Red Hat build of OpenJDK 11 for Microsoft Windows using the archive Red Hat build of OpenJDK 11 for Microsoft Windows can be updated manually by using the archive. Procedure Download the archive of Red Hat build of OpenJDK 11. Extract the contents of the archive to a directory of your choice. Note Extracting the contents of an archive to a directory path that does not contain spaces is recommended. On Command Prompt, update the JAVA_HOME environment variable as follows: Open Command Prompt as an administrator. Set the value of the environment variable to your Red Hat build of OpenJDK 11 for Microsoft Windows installation path: If the path contains spaces, use the shortened path name. Restart Command Prompt to reload the environment variables. Set the value of the PATH variable if it is not set already: Restart Command Prompt to reload the environment variables. Verify that java -version works without supplying the full path. | [
"C:\\> setx /m JAVA_HOME \"C:\\Progra~1\\RedHat\\java-11-openjdk-11.0.1.13-1\"",
"C:\\> setx -m PATH \"%PATH%;%JAVA_HOME%\\bin\";",
"C:\\> java -version openjdk version \"11.0.3\" 2019-04-16 LTS OpenJDK Runtime Environment (build 11.0.3+7-LTS) OpenJDK 64-bit Server VM (build 11.0.3+7-LTS, mixed mode)"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_for_windows/updating-openjdk-windows-using-archive |
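As a supplementary check that is not part of the documented steps, you can confirm the variables after restarting Command Prompt; the path mentioned in the comments refers to the example installation path used above:

REM Should print the installation path set with setx, for example C:\Progra~1\RedHat\java-11-openjdk-11.0.1.13-1
C:\> echo %JAVA_HOME%

REM Should resolve java from %JAVA_HOME%\bin once PATH is updated
C:\> where java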
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/making-open-source-more-inclusive |
Chapter 6. Configuring Puppet smart class parameters | Chapter 6. Configuring Puppet smart class parameters 6.1. Puppet parameter hierarchy Puppet parameters are structured hierarchically. Parameters at a lower level override parameters of the higher levels: Global parameters Organization parameters Location parameters Host group parameters Host parameters For example, host-specific parameters override parameters at any higher level, and location parameters only override parameters at the organization or global level. This feature is especially useful when you use locations or organizations to group hosts. 6.2. Overriding a smart class parameter globally You can configure a Puppet class after you have imported it to Satellite Server. This example overrides the default list of ntp servers. Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Classes . Select the ntp Puppet class to change its configuration. Select the Smart Class Parameter tab and search for servers . Ensure the Override checkbox is selected. Set the Parameter Type drop-down menu to array . Insert a list of ntp servers as Default Value : An alternative way to describe the array is to use YAML syntax: - 0.de.pool.ntp.org - 1.de.pool.ntp.org - 2.de.pool.ntp.org - 3.de.pool.ntp.org Click Submit to change the default configuration of the Puppet module ntp . 6.3. Overriding a smart class parameter for an organization You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the organization context to illustrate setting context-based parameters. Note that organization -level Puppet parameters are overridden by location -level Puppet parameters. Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Classes . Click a class name to select a class. On the Smart Class Parameter tab, select a parameter. Use the Order list to define the hierarchy of the Puppet parameters. The individual host ( fqdn ) is the most relevant and the organization context ( organization ) is the least relevant. Check Merge Overrides if you want to add all further matched parameters after finding the first match. Check Merge Default if you want to also include the default value even if there are more specific values defined. Check Avoid Duplicates if you want to create a list of unique values for the selected parameter. The matcher field requires an attribute type from the order list. Optional: Click Add Matcher to add more matchers. Click Submit to save the changes. 6.4. Overriding a smart class parameter for a location You can use groups of hosts to override Puppet parameters for multiple hosts at once. The following example chooses the location context to illustrate setting context-based parameters. Procedure In the Satellite web UI, navigate to Configure > Puppet ENC > Classes . Click a class name to select a class. On the Smart Class Parameter tab, select a parameter. Use the Order list to define the hierarchy of the Puppet parameters. The individual host ( fqdn ) is the most relevant and the location context ( location ) is the least relevant. Check Merge Overrides if you want to add all further matched parameters after finding the first match. Check Merge Default if you want to also include the default value even if there are more specific values defined. Check Avoid Duplicates if you want to create a list of unique values for the selected parameter. The matcher field requires an attribute type from the order list.
For example, you can choose Paris as the location context and set the value to French ntp servers. Optional: Click Add Matcher to add more matchers. Click Submit to save the changes. 6.5. Overriding a smart class parameter on an individual host You can override parameters on individual hosts. This is recommended if you have multiple hosts and only want to make changes to a single one. Procedure In the Satellite web UI, navigate to Hosts > All Hosts . Click a host name to select a host. Click Edit . On the Host tab, select a Puppet Environment . Select the Puppet ENC tab. Click Override to edit the Puppet parameter. Click Submit to save the changes. | [
"[\"0.de.pool.ntp.org\",\"1.de.pool.ntp.org\",\"2.de.pool.ntp.org\",\"3.de.pool.ntp.org\"]",
"- 0.de.pool.ntp.org - 1.de.pool.ntp.org - 2.de.pool.ntp.org - 3.de.pool.ntp.org"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/managing_configurations_by_using_puppet_integration/configuring_puppet_smart_class_parameters_managing-configurations-puppet |
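For orientation only, the following sketch shows roughly how an override such as the ntp servers array surfaces in the ENC output that Satellite generates for a matching host; the exact YAML your Satellite produces may include additional classes and parameters:

---
classes:
  ntp:
    servers:
      - 0.de.pool.ntp.org
      - 1.de.pool.ntp.org
      - 2.de.pool.ntp.org
      - 3.de.pool.ntp.org
environment: production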
1.2. Subscriptions | 1.2. Subscriptions To install the Red Hat Virtualization Manager and hosts, your systems must be registered with the Content Delivery Network using Red Hat Subscription Management. This section outlines the subscriptions and repositories required to set up a Red Hat Virtualization environment. 1.2.1. Required Subscriptions and Repositories The packages provided in the following repositories are required to install and configure a functioning Red Hat Virtualization environment. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the documentation. Table 1.1. Red Hat Virtualization Manager Subscription Pool Repository Name Repository Label Details Red Hat Enterprise Linux Server Red Hat Enterprise Linux Server rhel-7-server-rpms Provides the Red Hat Enterprise Linux 7 Server. Red Hat Enterprise Linux Server RHEL Server Supplementary rhel-7-server-supplementary-rpms Provides the virtio-win package, which provides the Windows VirtIO drivers for use in virtual machines. Red Hat Virtualization Red Hat Virtualization rhel-7-server-rhv-4.3-manager-rpms Provides the Red Hat Virtualization Manager. Red Hat Virtualization Red Hat Virtualization Tools rhel-7-server-rhv-4-manager-tools-rpms Provides dependencies for the Red Hat Virtualization Manager that are common to all Red Hat Virtualization 4 releases. Red Hat Ansible Engine Red Hat Ansible Engine rhel-7-server-ansible-2.9-rpms Provides Red Hat Ansible Engine. Red Hat Virtualization Red Hat JBoss Enterprise Application Platform jb-eap-7.2-for-rhel-7-server-rpms Provides the supported release of Red Hat JBoss Enterprise Application Platform on which the Manager runs. Table 1.2. Red Hat Virtualization Host Subscription Pool Repository Name Repository Label Details Red Hat Virtualization Red Hat Virtualization Host rhel-7-server-rhvh-4-rpms Provides the rhev-hypervisor7-ng-image-update package, which allows you to update the image installed on the host. Table 1.3. Red Hat Enterprise Linux 7 Hosts Subscription Pool Repository Name Repository Label Details Red Hat Enterprise Linux Server Red Hat Enterprise Linux Server rhel-7-server-rpms Provides the Red Hat Enterprise Linux 7 Server. Red Hat Virtualization Red Hat Virtualization Management Agents (RPMs) rhel-7-server-rhv-4-mgmt-agent-rpms Provides the QEMU and KVM packages required for using Red Hat Enterprise Linux 7 servers as virtualization hosts. Red Hat Ansible Engine Red Hat Ansible Engine rhel-7-server-ansible-2.9-rpms Provides Red Hat Ansible Engine. 1.2.2. Optional Subscriptions and Repositories The packages provided in the following repositories are not required to install and configure a functioning Red Hat Virtualization environment. However, they are required to install packages that provide supporting functionality on virtual machines and client systems such as virtual machine resource monitoring. When one of these repositories is required to install a package, the steps required to enable the repository are provided in the appropriate location in the documentation. Table 1.4.
Optional Subscriptions and Repositories Subscription Pool Repository Name Repository Label Details Red Hat Enterprise Linux Server Red Hat Enterprise Linux 7 Server - RH Common (v.7 Server for x86_64) rhel-7-server-rh-common-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 7, which allows you to monitor virtual machine resources on Red Hat Enterprise Linux 7 clients. Red Hat Enterprise Linux Server Red Hat Enterprise Virt Agent (v.6 Server for x86_64) rhel-6-server-rhv-4-agent-rpms Provides the ovirt-guest-agent-common package for Red Hat Enterprise Linux 6, which allows you to monitor virtual machine resources on Red Hat Enterprise Linux 6 clients. Red Hat Enterprise Linux Server Red Hat Enterprise Virt Agent (v.5 Server for x86_64) rhel-5-server-rhv-4-agent-rpms Provides the rhevm-guest-agent package for Red Hat Enterprise Linux 5, which allows you to monitor virtual machine resources on Red Hat Enterprise Linux 5 clients. Red Hat Virtualization Red Hat Virtualization Host Build rhel-7-server-rhvh-4-build-rpms Provides packages used to build your own version of the Red Hat Virtualization Host image. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/release_notes/sect-subscriptions |
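As an illustration of how the Manager repositories in Table 1.1 are typically enabled (the subscription attachment step itself is covered elsewhere in the documentation), a subscription-manager invocation might look like the following:

# Enable the Red Hat Virtualization Manager repositories listed in Table 1.1
subscription-manager repos \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-supplementary-rpms \
    --enable=rhel-7-server-rhv-4.3-manager-rpms \
    --enable=rhel-7-server-rhv-4-manager-tools-rpms \
    --enable=rhel-7-server-ansible-2.9-rpms \
    --enable=jb-eap-7.2-for-rhel-7-server-rpms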
Appendix D. S3 supported and unsupported verbs | Appendix D. S3 supported and unsupported verbs This information lists the latest supported and unsupported S3 verbs. Table D.1. Supported verbs Action API AbortMultipartUpload S3 CompleteMultipartUpload S3 CopyObject S3 CreateBucket S3 CreateMultipartUpload S3 DeleteBucket S3 DeleteBucketCORS S3 DeleteBucketEncryption S3 DeleteBucketLifecycle S3 DeleteBucketPolicy S3 DeleteBucketReplication S3 DeleteBucketTagging S3 DeleteBucketWebsite S3 DeleteObject S3 DeleteObjects S3 DeleteObjectTagging S3 GetBucketAcl S3 GetBucketCORS S3 GetBucketEncryption S3 GetBucketLocation S3 GetBucketNotificationConfiguration S3 GetBucketPolicy S3 GetBucketPolicyStatus S3 GetBucketReplication S3 GetBucketRequestPayment S3 GetBucketTagging S3 GetBucketVersioning S3 GetBucketWebsite S3 GetObject S3 GetObjectAcl S3 GetObjectAttributes S3 GetObjectLegalHold S3 GetObjectLockConfiguration S3 GetObjectRetention S3 GetObjectTagging S3 GetObjectTorrent S3 HeadBucket S3 HeadObject S3 ListBuckets S3 ListMultipartUploads S3 ListObjects S3 ListObjectsV2 S3 ListObjectVersions S3 ListParts S3 PutBucketAcl S3 PutBucketCORS S3 PutBucketEncryption S3 PutBucketLifecycle S3 PutBucketLifecycleConfiguration S3 PutBucketNotificationConfiguration S3 PutBucketPolicy S3 PutBucketReplication S3 PutBucketRequestPayment S3 PutBucketTagging S3 PutBucketVersioning S3 PutBucketWebsite S3 PutObject S3 PutObjectAcl S3 PutObjectLegalHold S3 PutObjectLockConfiguration S3 PutObjectRetention S3 PutObjectTagging S3 SelectObjectContent S3 UploadPart S3 UploadPartCopy S3 AssumeRole STS AssumeRoleWithWebIdentity STS GetSessionToken STS CreateOpenIDConnectProvider IAM CreateRole IAM DeleteOpenIDConnectProvider IAM DeleteRole IAM GetOpenIDConnectProvider IAM GetRole IAM ListRoles IAM DeleteBucketNotification S3 extension CreateTopic SNS GetTopicAttributes SNS DeleteTopic SNS ListTopics SNS Table D.2. Unsupported verbs Action API DeleteBucketInventoryConfiguration S3 DeleteIntelligentTieringConfiguration S3 GetBucketInventoryConfiguration S3 GetBucketLogging S3 GetIntelligentTieringConfiguration S3 ListBucketIntelligentTieringConfigurations S3 ListBucketInventoryConfigurations S3 PutBucketIntelligentTieringConfiguration S3 PutBucketInventoryConfiguration S3 PutBucketLogging S3 RestoreObject S3 AssumeRoleWithSAML STS DecodeAuthorizationMessage STS GetAccessKeyInfo STS GetCallerIdentity STS GetFederationToken STS AttachGroupPolicy IAM AttachRolePolicy IAM AttachUserPolicy IAM CreatePolicy IAM CreateSAMLProvider IAM DeletePolicy IAM DeleteSAMLProviders IAM GetPolicy IAM GetSAMLProviders IAM ListOpenIDConnectProviders IAM ListPolicies IAM ListSAMLProviders IAM | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/s3-supported-and-unsupported-verbs_dev |
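As a brief illustration of exercising supported verbs (the endpoint URL and bucket name below are placeholders rather than values from this guide, and AWS CLI credentials must already be configured), CreateBucket and PutObject from the supported list can be called against the Ceph Object Gateway like this:

# CreateBucket and PutObject are in the supported list above; RestoreObject, for example, is not
aws s3api create-bucket --bucket mybucket \
    --endpoint-url http://rgw.example.com:8080
aws s3api put-object --bucket mybucket --key hello.txt --body ./hello.txt \
    --endpoint-url http://rgw.example.com:8080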
GitOps | GitOps Red Hat Advanced Cluster Management for Kubernetes 2.12 GitOps Red Hat Advanced Cluster Management for Kubernetes Team | null | https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/gitops/index |
Chapter 10. Cluster Quorum | Chapter 10. Cluster Quorum A Red Hat Enterprise Linux High Availability Add-On cluster uses the votequorum service, in conjunction with fencing, to avoid split brain situations. A number of votes is assigned to each system in the cluster, and cluster operations are allowed to proceed only when a majority of votes is present. The service must be loaded into all nodes or none; if it is loaded into a subset of cluster nodes, the results will be unpredictable. For information on the configuration and operation of the votequorum service, see the votequorum (5) man page. 10.1. Configuring Quorum Options There are some special features of quorum configuration that you can set when you create a cluster with the pcs cluster setup command. Table 10.1, "Quorum Options" summarizes these options. Table 10.1. Quorum Options Option Description --auto_tie_breaker When enabled, the cluster can suffer up to 50% of the nodes failing at the same time, in a deterministic fashion. The cluster partition, or the set of nodes that are still in contact with the nodeid configured in auto_tie_breaker_node (or lowest nodeid if not set), will remain quorate. The other nodes will be inquorate. The auto_tie_breaker option is principally used for clusters with an even number of nodes, as it allows the cluster to continue operation with an even split. For more complex failures, such as multiple, uneven splits, it is recommended that you use a quorum device, as described in Section 10.5, "Quorum Devices" . The auto_tie_breaker option is incompatible with quorum devices. --wait_for_all When enabled, the cluster will be quorate for the first time only after all nodes have been visible at least once at the same time. The wait_for_all option is primarily used for two-node clusters and for even-node clusters using the quorum device lms (last man standing) algorithm. The wait_for_all option is automatically enabled when a cluster has two nodes, does not use a quorum device, and auto_tie_breaker is disabled. You can override this by explicitly setting wait_for_all to 0. --last_man_standing When enabled, the cluster can dynamically recalculate expected_votes and quorum under specific circumstances. You must enable wait_for_all when you enable this option. The last_man_standing option is incompatible with quorum devices. --last_man_standing_window The time, in milliseconds, to wait before recalculating expected_votes and quorum after a cluster loses nodes. For further information about configuring and using these options, see the votequorum (5) man page. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-Quorum-HAAR |
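A hypothetical invocation that sets two of the options from Table 10.1 at cluster creation time is sketched below; the cluster and node names are placeholders, and you should confirm the exact option syntax against the pcs (8) and votequorum (5) man pages for your release:

# Two-node cluster with wait_for_all and auto_tie_breaker enabled at setup time
pcs cluster setup --name my_cluster node1.example.com node2.example.com \
    --wait_for_all=1 --auto_tie_breaker=1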
Chapter 2. CertificateSigningRequest [certificates.k8s.io/v1] | Chapter 2. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object CertificateSigningRequestSpec contains the certificate request. status object CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. 2.1.1. .spec Description CertificateSigningRequestSpec contains the certificate request. Type object Required request signerName Property Type Description expirationSeconds integer expirationSeconds is the requested duration of validity of the issued certificate. The certificate signer may issue a certificate with a different validity duration, so a client must check the delta between the notBefore and notAfter fields in the issued certificate to determine the actual duration. The v1.22+ in-tree implementations of the well-known Kubernetes signers will honor this field as long as the requested duration is not greater than the maximum duration they will honor per the --cluster-signing-duration CLI flag to the Kubernetes controller manager. Certificate signers may not honor this field for various reasons: 1. Old signer that is unaware of the field (such as the in-tree implementations prior to v1.22) 2. Signer whose configured maximum is shorter than the requested duration 3. Signer whose configured minimum is longer than the requested duration The minimum valid value for expirationSeconds is 600, i.e. 10 minutes. extra object extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. extra{} array (string) groups array (string) groups contains group membership of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. request string request contains an x509 certificate signing request encoded in a "CERTIFICATE REQUEST" PEM block. When serialized as JSON or YAML, the data is additionally base64-encoded. signerName string signerName indicates the requested signer, and is a qualified name.
List/watch requests for CertificateSigningRequests can filter on this field using a "spec.signerName=NAME" fieldSelector. Well-known Kubernetes signers are: 1. "kubernetes.io/kube-apiserver-client": issues client certificates that can be used to authenticate to kube-apiserver. Requests for this signer are never auto-approved by kube-controller-manager, can be issued by the "csrsigning" controller in kube-controller-manager. 2. "kubernetes.io/kube-apiserver-client-kubelet": issues client certificates that kubelets use to authenticate to kube-apiserver. Requests for this signer can be auto-approved by the "csrapproving" controller in kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. 3. "kubernetes.io/kubelet-serving" issues serving certificates that kubelets use to serve TLS endpoints, which kube-apiserver can connect to securely. Requests for this signer are never auto-approved by kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. More details are available at https://k8s.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers Custom signerNames can also be specified. The signer defines: 1. Trust distribution: how trust (CA bundles) are distributed. 2. Permitted subjects: and behavior when a disallowed subject is requested. 3. Required, permitted, or forbidden x509 extensions in the request (including whether subjectAltNames are allowed, which types, restrictions on allowed values) and behavior when a disallowed extension is requested. 4. Required, permitted, or forbidden key usages / extended key usages. 5. Expiration/certificate lifetime: whether it is fixed by the signer, configurable by the admin. 6. Whether or not requests for CA certificates are allowed. uid string uid contains the uid of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. usages array (string) usages specifies a set of key usages requested in the issued certificate. Requests for TLS client certificates typically request: "digital signature", "key encipherment", "client auth". Requests for TLS serving certificates typically request: "key encipherment", "digital signature", "server auth". Valid values are: "signing", "digital signature", "content commitment", "key encipherment", "key agreement", "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth", "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user", "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc" username string username contains the name of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. 2.1.2. .spec.extra Description extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. Type object 2.1.3. .status Description CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. Type object Property Type Description certificate string certificate is populated with an issued certificate by the signer after an Approved condition is present. This field is set via the /status subresource. Once populated, this field is immutable. If the certificate signing request is denied, a condition of type "Denied" is added and this field remains empty. 
If the signer cannot issue the certificate, a condition of type "Failed" is added and this field remains empty. Validation requirements: 1. certificate must contain one or more PEM blocks. 2. All PEM blocks must have the "CERTIFICATE" label, contain no headers, and the encoded data must be a BER-encoded ASN.1 Certificate structure as described in section 4 of RFC5280. 3. Non-PEM content may appear before or after the "CERTIFICATE" PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468. If more than one PEM block is present, and the definition of the requested spec.signerName does not indicate otherwise, the first block is the issued certificate, and subsequent blocks should be treated as intermediate certificates and presented in TLS handshakes. The certificate is encoded in PEM format. When serialized as JSON or YAML, the data is additionally base64-encoded, so it consists of: base64( -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- ) conditions array conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". conditions[] object CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object 2.1.4. .status.conditions Description conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". Type array 2.1.5. .status.conditions[] Description CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the time the condition last transitioned from one status to another. If unset, when a new condition type is added or an existing condition's status is changed, the server defaults this to the current time. lastUpdateTime Time lastUpdateTime is the time of the last update to this condition message string message contains a human readable message with details about the request state reason string reason indicates a brief reason for the request state status string status of the condition, one of True, False, Unknown. Approved, Denied, and Failed conditions may not be "False" or "Unknown". type string type of the condition. Known conditions are "Approved", "Denied", and "Failed". An "Approved" condition is added via the /approval subresource, indicating the request was approved and should be issued by the signer. A "Denied" condition is added via the /approval subresource, indicating the request was denied and should not be issued by the signer. A "Failed" condition is added via the /status subresource, indicating the signer failed to issue the certificate. Approved and Denied conditions are mutually exclusive. Approved, Denied, and Failed conditions cannot be removed once added. Only one condition of a given type is allowed. 2.2. API endpoints The following API endpoints are available: /apis/certificates.k8s.io/v1/certificatesigningrequests DELETE : delete collection of CertificateSigningRequest GET : list or watch objects of kind CertificateSigningRequest POST : create a CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests GET : watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/certificates.k8s.io/v1/certificatesigningrequests/{name} DELETE : delete a CertificateSigningRequest GET : read the specified CertificateSigningRequest PATCH : partially update the specified CertificateSigningRequest PUT : replace the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} GET : watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status GET : read status of the specified CertificateSigningRequest PATCH : partially update status of the specified CertificateSigningRequest PUT : replace status of the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval GET : read approval of the specified CertificateSigningRequest PATCH : partially update approval of the specified CertificateSigningRequest PUT : replace approval of the specified CertificateSigningRequest 2.2.1. /apis/certificates.k8s.io/v1/certificatesigningrequests HTTP method DELETE Description delete collection of CertificateSigningRequest Table 2.1. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.2. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CertificateSigningRequest Table 2.3. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequestList schema 401 - Unauthorized Empty HTTP method POST Description create a CertificateSigningRequest Table 2.4. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.5. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 202 - Accepted CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.2. 
/apis/certificates.k8s.io/v1/watch/certificatesigningrequests HTTP method GET Description watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. Table 2.7. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name} Table 2.8. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest HTTP method DELETE Description delete a CertificateSigningRequest Table 2.9. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 2.10. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CertificateSigningRequest Table 2.11. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CertificateSigningRequest Table 2.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.13. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CertificateSigningRequest Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.15. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.4. /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} Table 2.17. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest HTTP method GET Description watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.18. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status Table 2.19. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest HTTP method GET Description read status of the specified CertificateSigningRequest Table 2.20. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CertificateSigningRequest Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CertificateSigningRequest Table 2.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.24. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.25. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.6. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval Table 2.26. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest HTTP method GET Description read approval of the specified CertificateSigningRequest Table 2.27. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update approval of the specified CertificateSigningRequest Table 2.28. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.29. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace approval of the specified CertificateSigningRequest Table 2.30. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.31. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.32. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/security_apis/certificatesigningrequest-certificates-k8s-io-v1 |
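To make the spec fields above concrete, a minimal request object might look like the following sketch; the object name is arbitrary, and the request value stands in for an elided base64-encoded "CERTIFICATE REQUEST" PEM block:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-client-csr
spec:
  # base64-encoded PEM CSR, elided here
  request: <base64-encoded-CSR>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - client auth

Once created, the request can be listed and then approved through the /approval subresource, for example:

oc get csr
oc adm certificate approve example-client-csr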
Chapter 38. General Updates | Chapter 38. General Updates The systemd-importd VM and container image import and export service The latest systemd version now contains the systemd-importd daemon, which was not enabled in the earlier build and caused the machinectl pull-* commands to fail. Note that the systemd-importd daemon is offered as a Technology Preview and should not be considered stable. (BZ# 1284974 ) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/technology_previews_general_updates
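For illustration only, the commands that depend on systemd-importd take roughly this form; the image URL and name are placeholders, and the daemon remains a Technology Preview:

# Download a container image tarball through systemd-importd and list local images
machinectl pull-tar http://example.com/images/myimage.tar.xz myimage
machinectl list-images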
Chapter 4. Setting up Enterprise Security Client | Chapter 4. Setting up Enterprise Security Client The following sections contain basic instructions on using the Enterprise Security Client for token enrollment, formatting, and password reset operations. 4.1. Installing the Smart Card Package Group Packages used to manage smart cards, such as esc , should already be installed on the Red Hat Enterprise Linux system. If the packages are not installed or need to be updated, all of the smart card-related packages can be pulled in by installing the Smart card support package group. For example: | [
"groupinstall \"Smart card support\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_smart_cards/using_the_enterprise_security_client |
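As an optional follow-up check that is not part of the documented procedure, you can confirm that the group and the esc package are installed:

# List the package group and query the Enterprise Security Client package
yum grouplist "Smart card support"
rpm -q esc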
Chapter 6. ConsolePlugin [console.openshift.io/v1] | Chapter 6. ConsolePlugin [console.openshift.io/v1] Description ConsolePlugin is an extension for customizing OpenShift web console by dynamically loading code from another service running on the cluster. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required metadata spec 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsolePluginSpec is the desired plugin configuration. 6.1.1. .spec Description ConsolePluginSpec is the desired plugin configuration. Type object Required backend displayName Property Type Description backend object backend holds the configuration of backend which is serving console's plugin . displayName string displayName is the display name of the plugin. The displayName should be between 1 and 128 characters. i18n object i18n is the configuration of plugin's localization resources. proxy array proxy is a list of proxies that describe various service types to which the plugin needs to connect. proxy[] object ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. 6.1.2. .spec.backend Description backend holds the configuration of backend which is serving console's plugin . Type object Required type Property Type Description service object service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugin's assets from the Service using the service CA bundle. type string type is the backend type which serves the console's plugin. Currently only "Service" is supported. --- 6.1.3. .spec.backend.service Description service is a Kubernetes Service that exposes the plugin using a deployment with an HTTP server. The Service must use HTTPS and Service serving certificate. The console backend will proxy the plugin's assets from the Service using the service CA bundle. Type object Required name namespace port Property Type Description basePath string basePath is the path to the plugin's assets. The primary asset is the manifest file called plugin-manifest.json , which is a JSON document that contains metadata about the plugin and the extensions. name string name of Service that is serving the plugin assets. namespace string namespace of Service that is serving the plugin assets. port integer port on which the Service that is serving the plugin is listening. 6.1.4. .spec.i18n Description i18n is the configuration of plugin's localization resources.
Type object Required loadType Property Type Description loadType string loadType indicates how the plugin's localization resource should be loaded. Valid values are Preload, Lazy and the empty string. When set to Preload, all localization resources are fetched when the plugin is loaded. When set to Lazy, localization resources are lazily loaded as and when they are required by the console. When omitted or set to the empty string, the behaviour is equivalent to Lazy type. 6.1.5. .spec.proxy Description proxy is a list of proxies that describe various service type to which the plugin needs to connect to. Type array 6.1.6. .spec.proxy[] Description ConsolePluginProxy holds information on various service types to which console's backend will proxy the plugin's requests. Type object Required alias endpoint Property Type Description alias string alias is a proxy name that identifies the plugin's proxy. An alias name should be unique per plugin. The console backend exposes following proxy endpoint: /api/proxy/plugin/<plugin-name>/<proxy-alias>/<request-path>?<optional-query-parameters> Request example path: /api/proxy/plugin/acm/search/pods?namespace=openshift-apiserver authorization string authorization provides information about authorization type, which the proxied request should contain caCertificate string caCertificate provides the cert authority certificate contents, in case the proxied Service is using custom service CA. By default, the service CA bundle provided by the service-ca operator is used. endpoint object endpoint provides information about endpoint to which the request is proxied to. 6.1.7. .spec.proxy[].endpoint Description endpoint provides information about endpoint to which the request is proxied to. Type object Required type Property Type Description service object service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. type string type is the type of the console plugin's proxy. Currently only "Service" is supported. --- 6.1.8. .spec.proxy[].endpoint.service Description service is an in-cluster Service that the plugin will connect to. The Service must use HTTPS. The console backend exposes an endpoint in order to proxy communication between the plugin and the Service. Note: service field is required for now, since currently only "Service" type is supported. Type object Required name namespace port Property Type Description name string name of Service that the plugin needs to connect to. namespace string namespace of Service that the plugin needs to connect to port integer port on which the Service that the plugin needs to connect to is listening on. 6.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consoleplugins DELETE : delete collection of ConsolePlugin GET : list objects of kind ConsolePlugin POST : create a ConsolePlugin /apis/console.openshift.io/v1/consoleplugins/{name} DELETE : delete a ConsolePlugin GET : read the specified ConsolePlugin PATCH : partially update the specified ConsolePlugin PUT : replace the specified ConsolePlugin 6.2.1. /apis/console.openshift.io/v1/consoleplugins Table 6.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ConsolePlugin Table 6.2. 
Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsolePlugin Table 6.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.5. HTTP responses HTTP code Reponse body 200 - OK ConsolePluginList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsolePlugin Table 6.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.8. 
HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 202 - Accepted ConsolePlugin schema 401 - Unauthorized Empty 6.2.2. /apis/console.openshift.io/v1/consoleplugins/{name} Table 6.9. Global path parameters Parameter Type Description name string name of the ConsolePlugin Table 6.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ConsolePlugin Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.12. Body parameters Parameter Type Description body DeleteOptions schema Table 6.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsolePlugin Table 6.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 6.15. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsolePlugin Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body Patch schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsolePlugin Table 6.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.20. Body parameters Parameter Type Description body ConsolePlugin schema Table 6.21. HTTP responses HTTP code Reponse body 200 - OK ConsolePlugin schema 201 - Created ConsolePlugin schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/console_apis/consoleplugin-console-openshift-io-v1 |
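The list and read operations documented in the tables above can be exercised directly with the OpenShift CLI. The following commands are a minimal sketch: they assume a kubeconfig with permission to read console.openshift.io resources, and the ConsolePlugin name my-plugin is a placeholder rather than an object shipped with the product.

```bash
# List ConsolePlugin objects through the documented collection endpoint,
# using the 'limit' query parameter described above.
oc get --raw '/apis/console.openshift.io/v1/consoleplugins?limit=5'

# Read a single ConsolePlugin by name ({name} path parameter); 'pretty=true'
# is the global query parameter for human-readable output.
oc get --raw '/apis/console.openshift.io/v1/consoleplugins/my-plugin?pretty=true'

# The same read, expressed with the higher-level CLI verb.
oc get consoleplugin my-plugin -o yaml
```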
3.11. Cluster Networking | 3.11. Cluster Networking Cluster-level networking objects include: Clusters Logical Networks Figure 3.1. Networking within a cluster A data center is a logical grouping of multiple clusters, and each cluster is a logical group of multiple hosts. Figure 3.1, "Networking within a cluster" depicts the contents of a single cluster. Hosts in a cluster all have access to the same storage domains. Hosts in a cluster also have logical networks applied at the cluster level. For a virtual machine logical network to become operational for use with virtual machines, the network must be defined and implemented for each host in the cluster using the Red Hat Virtualization Manager. Other logical network types can be implemented only on the hosts that use them. Multi-host network configuration automatically applies any updated network settings to all of the hosts within the data center to which the network is assigned. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/Cluster_Networking |
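To make the description above concrete, a logical network is defined once at the data center level and then assigned to a cluster, after which it must be implemented on each host in that cluster. The following curl sketch against the RHV REST API is illustrative only: the Manager hostname, credentials, and UUIDs are placeholders, and the request bodies should be checked against the REST API guide for your RHV version.

```bash
# Define a logical network in a data center (placeholder UUIDs and hostname).
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<network><name>vm_traffic</name><data_center id="DATACENTER_UUID"/></network>' \
  https://rhvm.example.com/ovirt-engine/api/networks

# Assign the network to a cluster so that it can then be implemented on the
# hosts of that cluster; setting 'required' to false makes it optional.
curl -k -u 'admin@internal:PASSWORD' \
  -H 'Content-Type: application/xml' \
  -d '<network id="NETWORK_UUID"><required>false</required></network>' \
  https://rhvm.example.com/ovirt-engine/api/clusters/CLUSTER_UUID/networks
```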
Chapter 19. Installation configuration parameters for AWS | Chapter 19. Installation configuration parameters for AWS Before you deploy an OpenShift Container Platform cluster on AWS, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 19.1. Available installation configuration parameters for AWS The following tables specify the required, optional, and AWS-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 19.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 19.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 19.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 19.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . 
If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugins supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 19.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 19.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 19.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 19.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. 
Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . 
An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: preserveBootstrapIgnition:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_aws/installation-config-parameters-aws |
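To show how the required, network, and AWS-specific parameters above fit together, here is a minimal install-config.yaml sketch. It is an illustration rather than a definitive template: the base domain, cluster name, region, availability zone, instance types, pull secret, and SSH key are placeholders that you must replace with your own values.

```yaml
apiVersion: v1
baseDomain: example.com           # placeholder; cluster DNS becomes <name>.example.com
metadata:
  name: mycluster
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m6i.xlarge
      zones:
      - us-east-1a
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m6i.large
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-east-1
    userTags:
      environment: test
publish: External
pullSecret: '{"auths": ...}'      # shortened; paste the pull secret from Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...       # shortened; paste your public SSH key
```

A file like this is placed in an installation directory and consumed by the openshift-install create cluster command; as noted above, the parameters cannot be modified in install-config.yaml after installation.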
Chapter 1. Architecture overview | Chapter 1. Architecture overview CodeReady Workspaces needs a workspace engine to manage the lifecycle of the workspaces. Two workspace engines are available. The choice of a workspace engine defines the architecture. Section 1.1, "CodeReady Workspaces architecture with CodeReady Workspaces server" CodeReady Workspaces server is the default workspace engine. Figure 1.1. High-level CodeReady Workspaces architecture with the CodeReady Workspaces server engine Section 1.4, "CodeReady Workspaces architecture with Dev Workspace" The Dev Workspace Operator is a new workspace engine. Technology preview feature Managing workspaces with the Dev Workspace engine is an experimental feature. Don't use this workspace engine in production. Known limitations Workspaces are not secured. Whoever knows the URL of a workspace can have access to it and leak the user credentials. Figure 1.2. High-level CodeReady Workspaces architecture with the Dev Workspace operator Additional resources Section 1.1, "CodeReady Workspaces architecture with CodeReady Workspaces server" Section 1.4, "CodeReady Workspaces architecture with Dev Workspace" https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#enabling-dev-workspace-operator.adoc Dev Workspace Operator GitHub repository 1.1. CodeReady Workspaces architecture with CodeReady Workspaces server CodeReady Workspaces server is the default workspace engine. Figure 1.3. High-level CodeReady Workspaces architecture with the CodeReady Workspaces server engine Red Hat CodeReady Workspaces components are: CodeReady Workspaces server An always-running service that manages user workspaces with the OpenShift API. User workspaces Container-based IDEs running on user requests. Additional resources Section 1.2, "Understanding CodeReady Workspaces server" Section 1.3, "Understanding CodeReady Workspaces workspaces architecture" 1.2. Understanding CodeReady Workspaces server This chapter describes the CodeReady Workspaces controller and the services that are a part of the controller. 1.2.1. CodeReady Workspaces server The workspaces controller manages the container-based development environments: CodeReady Workspaces workspaces. To secure the development environments with authentication, the deployment is always multiuser and multitenant. The following diagram shows the different services that are a part of the CodeReady Workspaces workspaces controller. Figure 1.4. CodeReady Workspaces workspaces controller Additional resources Section 12.1, "Authenticating users" 1.2.2. CodeReady Workspaces server The CodeReady Workspaces server is the central service of CodeReady Workspaces server-side components. It is a Java web service exposing an HTTP REST API to manage CodeReady Workspaces workspaces and users. It is the default workspace engine. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#advanced-configuration-options-for-the-che-server-component.adoc 1.2.3. CodeReady Workspaces user dashboard The user dashboard is the landing page of Red Hat CodeReady Workspaces. It is a React application. CodeReady Workspaces users navigate the user dashboard from their browsers to create, start, and manage CodeReady Workspaces workspaces. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#navigating-che.adoc 1.2.4. 
CodeReady Workspaces devfile registry The CodeReady Workspaces devfile registry is a service that provides a list of CodeReady Workspaces samples to create ready-to-use workspaces. This list of samples is used in the Dashboard Create Workspace window. The devfile registry runs in a container and can be deployed wherever the user dashboard can connect. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#creating-a-workspace-from-a-code-sample.adoc CodeReady Workspaces devfile registry repository 1.2.5. CodeReady Workspaces plug-in registry The CodeReady Workspaces plug-in registry is a service that provides the list of plug-ins and editors for CodeReady Workspaces workspaces. A devfile only references a plug-in that is published in a CodeReady Workspaces plug-in registry. It runs in a container and can be deployed wherever CodeReady Workspaces server connects. 1.2.6. CodeReady Workspaces and PostgreSQL The PostgreSQL database is a prerequisite for CodeReady Workspaces server and RH-SSO. The CodeReady Workspaces administrator can choose to: Connect CodeReady Workspaces to an existing PostgreSQL instance. Let the CodeReady Workspaces deployment start a new dedicated PostgreSQL instance. Services use the database for the following purposes: CodeReady Workspaces server Persist user configurations such as workspaces metadata and Git credentials. RH-SSO Persist user information. Additional resources Section 8.8, "Backups of PostgreSQL" quay.io/eclipse/che-postgres container image CodeReady Workspaces Postgres repository 1.2.7. CodeReady Workspaces and RH-SSO RH-SSO is a prerequisite to configure CodeReady Workspaces. The CodeReady Workspaces administrator can choose to connect CodeReady Workspaces to an existing RH-SSO instance or let the CodeReady Workspaces deployment start a new dedicated RH-SSO instance. The CodeReady Workspaces server uses RH-SSO as an OpenID Connect (OIDC) provider to authenticate CodeReady Workspaces users and secure access to CodeReady Workspaces resources. Additional resources quay.io/eclipse/che-keycloak container image CodeReady Workspaces RH-SSO repository 1.3. Understanding CodeReady Workspaces workspaces architecture This chapter describes the architecture and components of CodeReady Workspaces. 1.3.1. CodeReady Workspaces workspaces architecture A CodeReady Workspaces deployment on the cluster consists of the CodeReady Workspaces server component, a database for storing user profile and preferences, and several additional deployments hosting workspaces. The CodeReady Workspaces server orchestrates the creation of workspaces, which consist of a deployment containing the workspace containers and enabled plug-ins, plus the related components, such as: ConfigMaps services endpoints ingresses or routes secrets persistent volumes (PVs) The CodeReady Workspaces workspace is a web application. It is composed of microservices running in containers that provide all the services of a modern IDE such as an editor, language auto-completion, and debugging tools. The IDE services are deployed with the development tools, packaged in containers and user runtime applications, which are defined as OpenShift resources. The source code of the projects of a CodeReady Workspaces workspace is persisted in a OpenShift PersistentVolume . 
Microservices run in containers that have read-write access to the source code (IDE services, development tools), and runtime applications have read-write access to this shared directory. The following diagram shows the detailed components of a CodeReady Workspaces workspace. Figure 1.5. CodeReady Workspaces workspace components In the diagram, there are four running workspaces: two belonging to User A , one to User B and one to User C . Use the devfile format to specify the tools and runtime applications of a CodeReady Workspaces workspace. 1.3.2. CodeReady Workspaces workspace components This section describes the components of a CodeReady Workspaces workspace. 1.3.2.1. Che Editor plug-in A Che Editor plug-in is a CodeReady Workspaces workspace plug-in. It defines the web application that is used as an editor in a workspace. The default CodeReady Workspaces workspace editor is Che-Theia . It is a web-based source-code editor similar to Visual Studio Code (Visual Studio Code). It has a plug-in system that supports Visual Studio Code extensions. Source code Che-Theia Container image eclipse/che-theia Endpoints theia , webviews , theia-dev , theia-redirect-1 , theia-redirect-2 , theia-redirect-3 Additional resources Che-Theia Eclipse Theia open source project Visual Studio Code 1.3.2.2. CodeReady Workspaces user runtimes Use any non-terminating user container as a user runtime. An application that can be defined as a container image or as a set of OpenShift resources can be included in a CodeReady Workspaces workspace. This makes it easy to test applications in the CodeReady Workspaces workspace. To test an application in the CodeReady Workspaces workspace, include the application YAML definition used in stage or production in the workspace specification. It is a 12-factor application development / production parity. Examples of user runtimes are Node.js, SpringBoot or MongoDB, and MySQL. 1.3.2.3. CodeReady Workspaces workspace JWT proxy The JWT proxy is responsible for securing the communication of the CodeReady Workspaces workspace services. An HTTP proxy is used to sign outgoing requests from a workspace service to the CodeReady Workspaces server and to authenticate incoming requests from the IDE client running on a browser. Source code JWT proxy Container image eclipse/che-jwtproxy 1.3.2.4. CodeReady Workspaces plug-ins broker Plug-in brokers are special services that, given a plug-in meta.yaml file: Gather all the information to provide a plug-in definition that the CodeReady Workspaces server knows. Perform preparation actions in the workspace project (download, unpack files, process configuration). The main goal of the plug-in broker is to decouple the CodeReady Workspaces plug-ins definitions from the actual plug-ins that CodeReady Workspaces can support. With brokers, CodeReady Workspaces can support different plug-ins without updating the CodeReady Workspaces server. The CodeReady Workspaces server starts the plug-in broker. The plug-in broker runs in the same OpenShift project as the workspace. It has access to the plug-ins and project persistent volumes. A plug-ins broker is defined as a container image (for example, eclipse/che-plugin-broker ). The plug-in type determines the type of the broker that is started. Two types of plug-ins are supported: Che Plugin and Che Editor . Source code CodeReady Workspaces Plug-in broker Container image quay.io/eclipse/che-plugin-artifacts-broker eclipse/che-plugin-metadata-broker 1.3.3. 
CodeReady Workspaces workspace creation flow The following is a CodeReady Workspaces workspace creation flow: A user starts a CodeReady Workspaces workspace defined by: An editor (the default is Che-Theia) A list of plug-ins (for example, Java and OpenShift tools) A list of runtime applications CodeReady Workspaces server retrieves the editor and plug-in metadata from the plug-in registry. For every plug-in type, CodeReady Workspaces server starts a specific plug-in broker. The CodeReady Workspaces plug-ins broker transforms the plug-in metadata into a Che Plugin definition. It executes the following steps: Downloads a plug-in and extracts its content. Processes the plug-in meta.yaml file and sends it back to CodeReady Workspaces server in the format of a Che Plugin . CodeReady Workspaces server starts the editor and the plug-in sidecars. The editor loads the plug-ins from the plug-in persistent volume. 1.4. CodeReady Workspaces architecture with Dev Workspace Technology preview feature Managing workspaces with the Dev Workspace engine is an experimental feature. Don't use this workspace engine in production. Known limitations Workspaces are not secured. Whoever knows the URL of a workspace can have access to it and leak the user credentials. Figure 1.6. High-level CodeReady Workspaces architecture with the Dev Workspace operator When CodeReady Workspaces is running with the Dev Workspace operator, it runs on three groups of components: CodeReady Workspaces server components Manage User project and workspaces. The main component is the User dashboard, from which users control their workspaces. Dev Workspace operator Creates and controls the necessary OpenShift objects to run User workspaces. Including Pods , Services , and PeristentVolumes . User workspaces Container-based development environments, the IDE included. The role of these OpenShift features is central: Dev Workspace Custom Resources Valid OpenShift objects representing the User workspaces and manipulated by CodeReady Workspaces. It is the communication channel for the three groups of components. OpenShift role-based access control (RBAC) Controls access to all resources. Additional resources Section 1.5, "CodeReady Workspaces server components" Section 1.5.2, "Dev Workspace operator" Section 1.6, "User workspaces" https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#enabling-dev-workspace-operator.adoc Dev Workspace Operator repository Kubernetes documentation - Custom Resources 1.5. CodeReady Workspaces server components Technology preview feature Managing workspaces with the Dev Workspace engine is an experimental feature. Don't use this workspace engine in production. Known limitations Workspaces are not secured. Whoever knows the URL of a workspace can have access to it and leak the user credentials. The CodeReady Workspaces server components ensure multi-tenancy and workspaces management. Figure 1.7. CodeReady Workspaces server components interacting with the Dev Workspace operator Additional resources Section 1.5.1, "CodeReady Workspaces operator" Section 1.5.2, "Dev Workspace operator" Section 1.5.3, "Gateway" Section 1.5.4, "User dashboard" Section 1.5.5, "Devfile registries" Section 1.5.6, "CodeReady Workspaces server" Section 1.5.7, "PostgreSQL" Section 1.5.8, "Plug-in registry" 1.5.1. CodeReady Workspaces operator The CodeReady Workspaces operator ensure full lifecycle management of the CodeReady Workspaces server components. 
It introduces: CheCluster custom resource definition (CRD) Defines the CheCluster OpenShift object. CodeReady Workspaces controller Creates and controls the necessary OpenShift objects to run a CodeReady Workspaces instance, such as pods, services, and persistent volumes. CheCluster custom resource (CR) On a cluster with the CodeReady Workspaces operator, it is possible to create a CheCluster custom resource (CR). The CodeReady Workspaces operator ensures the full lifecycle management of the CodeReady Workspaces server components on this CodeReady Workspaces instance: Section 1.5.2, "Dev Workspace operator" Section 1.5.3, "Gateway" Section 1.5.4, "User dashboard" Section 1.5.5, "Devfile registries" Section 1.5.6, "CodeReady Workspaces server" Section 1.5.7, "PostgreSQL" Section 1.5.8, "Plug-in registry" Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#configuring-the-che-installation.adoc https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#installing-che.adoc 1.5.2. Dev Workspace operator Technology preview feature Managing workspaces with the Dev Workspace engine is an experimental feature. Don't use this workspace engine in production. Known limitations Workspaces are not secured. Whoever knows the URL of a workspace can have access to it and leak the user credentials. The Dev Workspace operator extends OpenShift to provide Dev Workspace support. It introduces: Dev Workspace custom resource definition Defines the Dev Workspace OpenShift object from the Devfile v2 specification. Dev Workspace controller Creates and controls the necessary OpenShift objects to run a Dev Workspace, such as pods, services, and persistent volumes. Dev Workspace custom resource On a cluster with the Dev Workspace operator, it is possible to create Dev Workspace custom resources (CR). A Dev Workspace CR is a OpenShift representation of a Devfile. It defines a User workspaces in a OpenShift cluster. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#enabling-dev-workspace-operator.adoc Devfile API repository 1.5.3. Gateway The CodeReady Workspaces gateway has following roles: Routing requests. It uses Traefik . Authenticating users with OpenID Connect (OIDC). It uses OpenShift OAuth2 proxy . Applying OpenShift Role based access control (RBAC) policies to control access to any CodeReady Workspaces resource. It uses `kube-rbac-proxy` . The CodeReady Workspaces operator manages it as the che-gateway Deployment. It controls access to: Section 1.5.4, "User dashboard" Section 1.5.5, "Devfile registries" Section 1.5.6, "CodeReady Workspaces server" Section 1.5.8, "Plug-in registry" Section 1.6, "User workspaces" Figure 1.8. CodeReady Workspaces gateway interactions with other components Additional resources Chapter 12, Managing identities and authorizations 1.5.4. User dashboard The user dashboard is the landing page of Red Hat CodeReady Workspaces. CodeReady Workspaces end-users browse the user dashboard to access and manage their workspaces. It is a React application. The CodeReady Workspaces deployment starts it in the codeready-dashboard Deployment. It need access to: Section 1.5.5, "Devfile registries" Section 1.5.6, "CodeReady Workspaces server" Section 1.5.8, "Plug-in registry" OpenShift API Figure 1.9. 
User dashboard interactions with other components When the user requests the user dashboard to start a workspace, the user dashboard executes this sequence of actions: Collects the devfile from the Section 1.5.5, "Devfile registries" , when the user is Creating a workspace from a code sample . Sends the repository URL to Section 1.5.6, "CodeReady Workspaces server" and expects a devfile in return, when the user is Creating a workspace from remote devfile . Reads the devfile describing the workspace. Collects the additional metadata from the Section 1.5.8, "Plug-in registry" . Converts the information into a Dev Workspace Custom Resource. Creates the Dev Workspace Custom Resource in the user project using the OpenShift API. Watches the Dev Workspace Custom Resource status. Redirects the user to the running workspace IDE. Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#navigating-che.adoc 1.5.5. Devfile registries The CodeReady Workspaces devfile registries are services providing a list of sample devfiles to create ready-to-use workspaces. The Section 1.5.4, "User dashboard" displays the samples list on the Dashboard Create Workspace page. Each sample includes a Devfile v2. The CodeReady Workspaces deployment starts one devfile registry instance in the devfile-registry deployment. Figure 1.10. Devfile registries interactions with other components Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/end-user_guide/index#creating-a-workspace-from-a-code-sample.adoc Devfile v2 documentation devfile registry latest community version online instance CodeReady Workspaces devfile registry repository 1.5.6. CodeReady Workspaces server The CodeReady Workspaces server main functions are: Creating user namespaces. Provisioning user namespaces with required secrets and config maps. Integrating with Git services providers, to fetch and validate devfiles and authentication. The CodeReady Workspaces server is a Java web service exposing an HTTP REST API and needs access to: Section 1.5.7, "PostgreSQL" Git service providers OpenShift API Figure 1.11. CodeReady Workspaces server interactions with other components Additional resources https://access.redhat.com/documentation/en-us/red_hat_codeready_workspaces/2.15/html-single/installation_guide/index#advanced-configuration-options-for-the-che-server-component.adoc 1.5.7. PostgreSQL CodeReady Workspaces server uses the PostgreSQL database to persist user configurations such as workspaces metadata. The CodeReady Workspaces deployment starts a dedicated PostgreSQL instance in the postgres Deployment. You can use an external database instead. Figure 1.12. PostgreSQL interactions with other components Additional resources Section 8.8, "Backups of PostgreSQL" quay.io/eclipse/che-postgres container image CodeReady Workspaces Postgres repository 1.5.8. Plug-in registry Each CodeReady Workspaces workspace starts with a specific editor and set of associated extensions. The CodeReady Workspaces plug-in registry provides the list of available editors and editor extensions. A Devfile v2 describes each editor or extension. The Section 1.5.4, "User dashboard" is reading the content of the registry. Figure 1.13. 
Plug-in registries interactions with other components Additional resources Editors definitions in the CodeReady Workspaces plug-in registry repository Plug-ins definitions in the CodeReady Workspaces plug-in registry repository Plug-in registry latest community version online instance 1.6. User workspaces Figure 1.14. User workspaces interactions with other components User workspaces are web IDEs running in containers. A User workspace is a web application. It consists of microservices running in containers that provide all the services of a modern IDE in your browser: Editor Language auto-completion Language server Debugging tools Plug-ins Application runtimes A workspace is one OpenShift Deployment containing the workspace containers and enabled plug-ins, plus related OpenShift components: Containers ConfigMaps Services Endpoints Ingresses or Routes Secrets Persistent Volumes (PVs) A CodeReady Workspaces workspace contains the source code of the projects, persisted in an OpenShift Persistent Volume (PV). Microservices have read-write access to this shared directory. Use the devfile v2 format to specify the tools and runtime applications of a CodeReady Workspaces workspace. The following diagram shows one running CodeReady Workspaces workspace and its components. Figure 1.15. CodeReady Workspaces workspace components In the diagram, there is one running workspace. | null | https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.15/html/administration_guide/assembly_architecture-overview_crw |
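To make the Dev Workspace custom resource described in the sections above more tangible, the following sketch shows roughly what such an object looks like. It is an assumption-laden illustration, not a supported template: it presumes the workspace.devfile.io/v1alpha2 API served by the Dev Workspace operator, and the workspace name, container image, and memory limit are placeholders.

```yaml
apiVersion: workspace.devfile.io/v1alpha2
kind: DevWorkspace
metadata:
  name: example-workspace              # placeholder; created in the user's project
spec:
  started: true                        # the controller creates the Deployment, Service, and PV
  template:                            # Devfile v2 content describing the workspace
    components:
    - name: tooling
      container:
        image: quay.io/devfile/universal-developer-image:latest   # placeholder image
        memoryLimit: 512Mi
```

Creating an object of this shape in a user project is essentially what the user dashboard does on the user's behalf when it converts a devfile into a Dev Workspace custom resource and then watches its status.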
Chapter 17. Identity Management | Chapter 17. Identity Management 17.1. Identity Management packages are installed as a module In RHEL 8, the packages necessary for installing an Identity Management (IdM) server and client are distributed as a module. The client stream is the default stream of the idm module, and you can download the packages necessary for installing the client without enabling the stream. The IdM server module stream is called DL1 and contains multiple profiles that correspond to the different types of IdM servers: server : an IdM server without integrated DNS dns : an IdM server with integrated DNS adtrust : an IdM server that has a trust agreement with Active Directory client : an IdM client To download the packages in a specific profile of the DL1 stream: Enable the stream: Switch to the RPMs delivered through the stream: Install the selected profile: Replace profile with one of the specific profiles defined above. For details, see Installing packages required for an Identity Management server and Packages required to install an Identity Management client . 17.2. Adding a RHEL 9 replica in FIPS mode to an IdM deployment in FIPS mode that was initialized with RHEL 8.6 or earlier fails The default RHEL 9 FIPS cryptographic policy aiming to comply with FIPS 140-3 does not allow the use of the AES HMAC-SHA1 encryption types' key derivation function as defined by RFC3961, section 5.1. This constraint does not allow you to add a RHEL 9 IdM replica in FIPS mode to a RHEL 8 IdM environment in FIPS mode in which the first server was installed on a RHEL 8.6 or earlier systems. This is because there are no common encryption types between RHEL 9 and the RHEL versions, which commonly use the AES HMAC-SHA1 encryption types but do not use the AES HMAC-SHA2 encryption types. Note You can view the encryption type of your IdM master key by entering the following command on the first IdM server in the RHEL 8 deployment: If the string in the output contains the sha1 term, you must enable the use of AES HMAC-SHA1 on the RHEL 9 replica. We are working on a solution to generate missing AES HMAC-SHA2-encrypted Kerberos keys on RHEL 7 and RHEL 8 servers. This will achieve FIPS 140-3 compliance on the RHEL 9 replica. However, this process cannot be fully automated, because the design of Kerberos key cryptography makes it impossible to convert existing keys to different encryption types. The only way is to ask users to renew their passwords. 17.3. Active Directory users can now administer Identity Management In Red Hat Enterprise Linux (RHEL) 7, external group membership allows AD users and groups to access IdM resources in a POSIX environment with the help of the System Security Services Daemon (SSSD). The IdM LDAP server has its own mechanisms to grant access control. RHEL 8 introduces an update that allows adding an ID user override for an AD user as a member of an IdM group. An ID override is a record describing what a specific Active Directory user or group properties should look like within a specific ID view, in this case the Default Trust View. As a consequence of the update, the IdM LDAP server is able to apply access control rules for the IdM group to the AD user. AD users are now able to use the self service features of IdM UI, for example to upload their SSH keys, or change their personal data. An AD administrator is able to fully administer IdM without having two different accounts and passwords. Note Currently, selected features in IdM may still be unavailable to AD users. 
For example, setting passwords for IdM users as an AD user from the IdM admins group might fail. 17.4. IdM supports Ansible roles and modules for installation and management Red Hat Enterprise Linux 8.1 introduces the ansible-freeipa package, which provides Ansible roles and modules for Identity Management (IdM) deployment and management. You can use Ansible roles to install and uninstall IdM servers, replicas, and clients. You can use Ansible modules to manage IdM groups, topology, and users. There are also example playbooks available. This update simplifies the installation and configuration of IdM based solutions. 17.5. ansible-freeipa is available in the AppStream repository with all dependencies Starting with RHEL 8.6, installing the ansible-freeipa package automatically installs the ansible-core package, a more basic version of ansible , as a dependency. Both ansible-freeipa and ansible-core are available in the rhel-9-for-x86_64-appstream-rpms repository. ansible-freeipa in RHEL 8.6 contains all the modules that it contained prior to RHEL 8.6. Prior to RHEL 8.6, you first had to enable the Ansible repository and install the ansible package. Only then could you install ansible-freeipa . 17.6. An alternative to the traditional RHEL ansible-freeipa repository: Ansible Automation Hub As of Red Hat Enterprise Linux 8.6, you can download ansible-freeipa modules from the Ansible Automation Hub (AAH) instead of downloading them from the standard RHEL repository. By using AAH, you can benefit from the faster updates of the ansible-freeipa modules available in this repository. In AAH, ansible-freeipa roles and modules are distributed in the collection format. Note that you need an Ansible Automation Platform (AAP) subscription to access the content on the AAH portal. You also need ansible version 2.14 or later. The redhat.rhel_idm collection has the same content as the traditional ansible-freeipa package. However, the collection format uses a fully qualified collection name (FQCN) that consists of a namespace and the collection name. For example, the redhat.rhel_idm.ipadnsconfig module corresponds to the ipadnsconfig module in ansible-freeipa provided by a RHEL repository. The combination of a namespace and a collection name ensures that the objects are unique and can be shared without any conflicts. 17.7. Identity Management users can use external identity providers to authenticate to IdM As of RHEL 8.10, you can associate Identity Management (IdM) users with external identity providers (IdPs) that support the OAuth 2 device authorization flow. Examples of such IdPs include Red Hat build of Keycloak, Azure Entra ID, Github, Google, and Facebook. If an IdP reference and an associated IdP user ID exist in IdM, you can use them to enable an IdM user to authenticate at the external IdP. After performing authentication and authorization at the external IdP, the IdM user receives a Kerberos ticket with single sign-on capabilities. The user must authenticate with the SSSD version available in RHEL 8.7 or later. You can also use the idp ansible-freeipa module to configure IdP authentication for IdM users. 17.8. Session recording solution for RHEL 8 added A session recording solution has been added to Red Hat Enterprise Linux 8 (RHEL 8). A new tlog package and its associated web console session player enable to record and playback the user terminal sessions. The recording can be configured per user or user group via the System Security Services Daemon (SSSD) service. 
All terminal input and output is captured and stored in a text-based format in a system journal. The input is inactive by default for security reasons not to intercept raw passwords and other sensitive information. The solution can be used for auditing of user sessions on security-sensitive systems. In the event of a security breach, the recorded sessions can be reviewed as a part of a forensic analysis. The system administrators are now able to configure the session recording locally and view the result from the RHEL 8 web console interface or from the Command-Line Interface using the tlog-play utility. 17.9. Removed Identity Management functionality 17.9.1. No NTP Server IdM server role Because ntpd has been deprecated in favor of chronyd in RHEL 8, IdM servers are no longer configured as Network Time Protocol (NTP) servers and are only configured as NTP clients. The RHEL 7 NTP Server IdM server role has also been deprecated in RHEL 8. 17.9.2. NSS databases not supported in OpenLDAP The OpenLDAP suite in versions of Red Hat Enterprise Linux (RHEL) used the Mozilla Network Security Services (NSS) for cryptographic purposes. With RHEL 8, OpenSSL, which is supported by the OpenLDAP community, replaces NSS. OpenSSL does not support NSS databases for storing certificates and keys. However, it still supports privacy enhanced mail (PEM) files that serve the same purpose. 17.9.3. Selected Python Kerberos packages have been replaced In Red Hat Enterprise Linux (RHEL) 8, the python-gssapi package has replaced Python Kerberos packages such as python-krbV , python-kerberos , python-requests-kerberos , and python-urllib2_kerberos . Notable benefits include: python-gssapi is easier to use than python-kerberos and python-krbV . python-gssapi supports both python 2 and python 3 whereas python-krbV does not. Additional Kerberos packages, python-requests-gssapi and python-urllib-gssapi , are currently available in the Extra Packages for Enterprise Linux (EPEL) repository. The GSSAPI-based packages allow the use of other Generic Security Services API (GSSAPI) mechanisms in addition to Kerberos, such as the NT LAN Manager NTLM for backward compatibility reasons. This update improves the maintainability and debuggability of GSSAPI in RHEL 8. 17.10. SSSD 17.10.1. AD GPOs are now enforced by default In RHEL 8, the default setting for the ad_gpo_access_control option is enforcing , which ensures that access control rules based on Active Directory Group Policy Objects (GPOs) are evaluated and enforced. In contrast, the default for this option in RHEL 7 is permissive , which evaluates but does not enforce GPO-based access control rules. With permissive mode, a syslog message is recorded every time a user would be denied access by a GPO, but those users are still allowed to log in. Note Red Hat recommends ensuring GPOs are configured correctly in Active Directory before upgrading from RHEL 7 to RHEL 8. Misconfigured GPOs that do not affect authorization in default RHEL 7 hosts may affect default RHEL 8 hosts. For more information about GPOs, see Applying Group Policy Object access control in RHEL and the ad_gpo_access_control entry in the sssd-ad Manual page. 17.10.2. authselect replaces authconfig In RHEL 8, the authselect utility replaces the authconfig utility. authselect comes with a safer approach to PAM stack management that makes the PAM configuration changes simpler for system administrators. 
authselect can be used to configure authentication methods such as passwords, certificates, smart cards and fingerprint. authselect does not configure services required to join remote domains. This task is performed by specialized tools, such as realmd or ipa-client-install . 17.10.3. KCM replaces KEYRING as the default credential cache storage In RHEL 8, the default credential cache storage is the Kerberos Credential Manager (KCM), which is backed by the sssd-kcm daemon. KCM overcomes the limitations of the previously used KEYRING, which is difficult to use in containerized environments because it is not namespaced, and makes it difficult to view and manage quotas. With this update, RHEL 8 contains a credential cache that is better suited for containerized environments and that provides a basis for building more features in future releases. 17.10.4. sssctl prints an HBAC rules report for an IdM domain With this update, the sssctl utility of the System Security Services Daemon (SSSD) can print an access control report for an Identity Management (IdM) domain. This feature meets the need of certain environments to see, for regulatory reasons, a list of users and groups that can access a specific client machine. Running sssctl access-report domain_name on an IdM client prints the parsed subset of host-based access control (HBAC) rules in the IdM domain that apply to the client machine. Note that no providers other than IdM support this feature. 17.10.5. As of RHEL 8.8, SSSD no longer caches local users by default nor serves them through the nss_sss module In RHEL 8.8 and later, the System Security Services Daemon (SSSD) files provider, which serves users and groups from the /etc/passwd and /etc/group files, is disabled by default. The default value of the enable_files_domain setting in the /etc/sssd/sssd.conf configuration file is false . For RHEL 8.7 and earlier versions, the SSSD files provider is enabled by default. The default value of the enable_files_domain setting in the sssd.conf configuration file is true , and the sss nsswitch module precedes files in the /etc/nsswitch.conf file. 17.10.6. SSSD now allows you to select one of multiple smart-card authentication devices By default, the System Security Services Daemon (SSSD) tries to detect a device for smart-card authentication automatically. If there are multiple devices connected, SSSD selects the first one it detects. Consequently, you cannot select a particular device, which sometimes leads to failures. With this update, you can configure a new p11_uri option for the [pam] section of the sssd.conf configuration file. This option enables you to define which device is used for smart-card authentication. For example, to select a reader with the slot ID 2 detected by the OpenSC PKCS#11 module, add: to the [pam] section of sssd.conf . For details, see the sssd.conf man page. 17.11. Removed SSSD functionality 17.11.1. sssd-secrets has been removed The sssd-secrets component of the System Security Services Daemon (SSSD) has been removed in Red Hat Enterprise Linux 8. This is because Custodia, a secrets service provider, is no longer actively developed. Use other Identity Management tools to store secrets, for example the Identity Management Vault. 17.11.2. The SSSD version of libwbclient has been removed The SSSD implementation of the libwbclient package allowed the Samba smbd service to retrieve user and group information from AD without the need to run the winbind service.
As Samba now requires that the winbind service is running and handling communication with AD, the related code has been removed from smbd for security reasons. As this additional required functionality is not part of SSSD, and the SSSD implementation of libwbclient cannot be used with recent versions of Samba, the SSSD implementation of libwbclient has been removed in RHEL 8.5. | [
"yum module enable idm:DL1",
"yum distro-sync",
"yum module install idm:DL1/ profile",
"kadmin.local getprinc K/M | grep -E '^Key:'",
"p11_uri = library-description=OpenSC%20smartcard%20framework;slot-id=2"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/considerations_in_adopting_rhel_8/identity-management_considerations-in-adopting-rhel-8 |
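As a combined illustration of the module commands listed above, the following sketch installs an IdM server with integrated DNS by selecting the dns profile. It assumes a registered RHEL 8 system; substitute another profile name to install a different server type or the client:

yum module enable idm:DL1
yum distro-sync
yum module install idm:DL1/dns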
Deploying a hyperconverged infrastructure | Deploying a hyperconverged infrastructure Red Hat OpenStack Platform 17.1 Understand and configure Hyperconverged Infrastructure on the Red Hat OpenStack Platform overcloud OpenStack Documentation Team [email protected] | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_hyperconverged_infrastructure/index |
Chapter 10. Caching policy for object buckets | Chapter 10. Caching policy for object buckets A cache bucket is a namespace bucket with a hub target and a cache target. The hub target is an S3 compatible large object storage bucket. The cache bucket is the local Multicloud Object Gateway (MCG) bucket. You can create a cache bucket that caches an AWS bucket or an IBM COS bucket. AWS S3 IBM COS 10.1. Creating an AWS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the namespacestore. Replace <AWS ACCESS KEY> and <AWS SECRET ACCESS KEY> with an AWS access key ID and secret access key you created for this purpose. Replace <bucket-name> with an existing AWS bucket name. This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First create a secret with credentials: You must supply and encode your own AWS access key ID and secret access key using Base64, and use the results in place of <AWS ACCESS KEY ID ENCODED IN BASE64> and <AWS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <namespacestore-secret-name> with the secret created in the step. Replace <namespace-secret> with the namespace used to create the secret in the step. Replace <target-bucket> with the AWS S3 bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-cache-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the step. Run the following command to create a bucket using an Object Bucket Claim (OBC) resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. 10.2. Creating an IBM COS cache bucket Prerequisites Download the Multicloud Object Gateway (MCG) command-line interface binary from the customer portal and make it executable. Note Choose the correct product variant according to your architecture. Available platforms are Linux(x86_64), Windows, and Mac OS. Procedure Create a NamespaceStore resource. A NamespaceStore represents an underlying storage to be used as a read or write target for the data in the MCG namespace buckets. From the MCG command-line interface, run the following command: Replace <namespacestore> with the name of the NamespaceStore. Replace <IBM ACCESS KEY> , <IBM SECRET ACCESS KEY> , <IBM COS ENDPOINT> with an IBM access key ID, secret access key and the appropriate regional endpoint that corresponds to the location of the existing IBM bucket. Replace <bucket-name> with an existing IBM bucket name. 
This argument tells the MCG which bucket to use as a target bucket for its backing store, and subsequently, data storage and administration. You can also add storage resources by applying a YAML. First, create a secret with the credentials: You must supply and encode your own IBM COS access key ID and secret access key using Base64, and use the results in place of <IBM COS ACCESS KEY ID ENCODED IN BASE64> and <IBM COS SECRET ACCESS KEY ENCODED IN BASE64> . Replace <namespacestore-secret-name> with a unique name. Then apply the following YAML: Replace <namespacestore> with a unique name. Replace <IBM COS ENDPOINT> with the appropriate IBM COS endpoint. Replace <backingstore-secret-name> with the secret created in the previous step. Replace <namespace-secret> with the namespace used to create the secret in the previous step. Replace <target-bucket> with the IBM COS bucket you created for the namespacestore. Run the following command to create a bucket class: Replace <my-bucket-class> with a unique bucket class name. Replace <backing-store> with the relevant backing store. You can list one or more backingstores separated by commas in this field. Replace <namespacestore> with the namespacestore created in the previous step. Run the following command to create a bucket using an Object Bucket Claim resource that uses the bucket class defined in step 2. Replace <my-bucket-claim> with a unique name. Replace <custom-bucket-class> with the name of the bucket class created in step 2. | [
"noobaa namespacestore create aws-s3 <namespacestore> --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: AWS_ACCESS_KEY_ID: <AWS ACCESS KEY ID ENCODED IN BASE64> AWS_SECRET_ACCESS_KEY: <AWS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: awsS3: secret: name: <namespacestore-secret-name> namespace: <namespace-secret> targetBucket: <target-bucket> type: aws-s3",
"noobaa bucketclass create namespace-bucketclass cache <my-cache-bucket-class> --backingstores <backing-store> --hub-resource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>",
"noobaa namespacestore create ibm-cos <namespacestore> --endpoint <IBM COS ENDPOINT> --access-key <IBM ACCESS KEY> --secret-key <IBM SECRET ACCESS KEY> --target-bucket <bucket-name>",
"apiVersion: v1 kind: Secret metadata: name: <namespacestore-secret-name> type: Opaque data: IBM_COS_ACCESS_KEY_ID: <IBM COS ACCESS KEY ID ENCODED IN BASE64> IBM_COS_SECRET_ACCESS_KEY: <IBM COS SECRET ACCESS KEY ENCODED IN BASE64>",
"apiVersion: noobaa.io/v1alpha1 kind: NamespaceStore metadata: finalizers: - noobaa.io/finalizer labels: app: noobaa name: <namespacestore> namespace: openshift-storage spec: s3Compatible: endpoint: <IBM COS ENDPOINT> secret: name: <backingstore-secret-name> namespace: <namespace-secret> signatureVersion: v2 targetBucket: <target-bucket> type: ibm-cos",
"noobaa bucketclass create namespace-bucketclass cache <my-bucket-class> --backingstores <backing-store> --hubResource <namespacestore>",
"noobaa obc create <my-bucket-claim> my-app --bucketclass <custom-bucket-class>"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/managing_hybrid_and_multicloud_resources/caching-policy-for-object-buckets_rhodf |
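Putting the AWS procedure above together, an end-to-end sketch with the MCG CLI might look like the following. The namespacestore, bucket class, and claim names are hypothetical, and the backing store is assumed to be the default one created by MCG; adjust all of them to your environment:

noobaa namespacestore create aws-s3 aws-cache-ns --access-key <AWS ACCESS KEY> --secret-key <AWS SECRET ACCESS KEY> --target-bucket my-hub-bucket
noobaa bucketclass create namespace-bucketclass cache aws-cache-class --backingstores noobaa-default-backing-store --hub-resource aws-cache-ns
noobaa obc create my-cache-claim my-app --bucketclass aws-cache-class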
33.8. Defining DNS Query Policy | 33.8. Defining DNS Query Policy To resolve host names within the DNS domain, a DNS client issues a query to the DNS name server. For some security contexts or for performance reasons, it might be advisable to restrict which clients can query DNS records in the zone. The query policy can be set when the zone is created, or changed later by using the --allow-query option of the ipa dnszone-mod command, which takes a list of clients that are allowed to issue queries. For example: The default --allow-query value is any , which allows the zone to be queried by any client. | [
"[user@server ~]USD ipa dnszone-mod --allow-query=192.0.2.0/24;2001:DB8::/32;203.0.113.1 example.com"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/dns-queries |
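As a further illustration, the following sketch restricts queries for a zone to a management subnet and one additional host, then displays the zone to confirm the setting. The zone name and addresses are assumptions; the value is quoted so that the shell does not interpret the semicolon separator as a command separator:

ipa dnszone-mod --allow-query="192.0.2.0/24;203.0.113.1" example.com
ipa dnszone-show example.com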
Backing up and restoring Red Hat Directory Server | Backing up and restoring Red Hat Directory Server Red Hat Directory Server 12 Backing up and restoring the Red Hat Directory Server Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/backing_up_and_restoring_red_hat_directory_server/index |
Chapter 2. integrating RHOSO storage services | Chapter 2. integrating RHOSO storage services Red Hat only certifies Red Hat OpenStack Services on OpenShift (RHOSO) Storage drivers that are distributed by Red Hat. Conversely, Red Hat does not certify drivers that are distributed directly by the Partner. For Red Hat to distribute a Partner's storage driver, ensure that the following requirements are met: Prerequisites The driver must be present in the upstream OpenStack project, such as openstack-cinder or openstack-manila . For information on contributing upstream to OpenStack, see the OpenStack Contributor Guide and the upstream guides for OpenStack Block Storage (Cinder) or OpenStack Shared Filesystems (Manila) . The Cinder project also has specific guidelines for contributing drivers that you can view here: All About Cinder Drivers . Partners must contribute the driver patches upstream, and the patches must be accepted by the upstream community before they can be included in RHOSO. Red Hat does not accept patches or code modifications to a driver that have not been accepted by the upstream community. The driver must be present in the RHOSO release. This requirement is automatically met when the driver is present in the upstream OpenStack release associated with the corresponding version of RHOSO. RHOSO 18 is based on the upstream OpenStack 2023.1 Antelope release. Storage drivers present in the upstream 2023.1 release are automatically included in RHOSO 18. When an upstream driver or updates to that driver is absent in the corresponding upstream release, the Partner can submit requests for Red Hat to back port upstream patches into RHOSO by creating a JIRA issue in the following project: OSPRH . Additional integration tasks To integrate a storage driver with RHOSO, you must perform the following actions in addition to the prerequisites: Configuring the storage driver. Adding software dependencies required by the driver. Accessing extra files required by the driver. 2.1. Configuring the storage driver Red Hat OpenStack Services on OpenShift (RHOSO) uses OpenShift custom resource definitions (CRDs), which you deploy using an OpenStackControlPlane custom resource (CR). The OpenStackControlPlane CR includes specification templates that govern the openstack-cinder and openstack-manila service deployments, which include sections for configuring back end storage drivers. The syntax for configuring storage backends is similar to the openstack-cinder and openstack-manila syntax. For more information on how to configure and deploy the openstack-cinder and openstack-manila storage services, see the Configuring persistent storage guide. 2.2. Adding software dependencies for the storage driver Red Hat OpenStack Services on OpenShift (RHOSO) openstack-cinder and openstack-manila services execute in Linux containers that are built by Red Hat. These container images include the necessary software to support a large number of drivers. However, some drivers require additional software components that are not included in the RHOSO container images. These are typically python modules that are supplied by the Partner and not available for inclusion in Red Hat's container images. Drivers that are fully self-contained in RHOSO container images are considered "in-tree," as opposed to drivers that have external software dependencies. Early in the integration process, Partners must determine whether their driver has an external software dependency. 
The following information only applies when the driver has external dependencies: When a Partner's driver has an external software dependency, Partners must provide a container image that adds an additional layer on top of Red Hat's RHOSO container image. Partner container images for RHOSO are similar to a Partner's container images for director-based RHOSP. The purpose of providing a Partner container image is to satisfy the external software dependencies required by the Partner's driver. You cannot use container images to deploy a modified version of the Partner's driver or a version of the driver that differs from the one provided by Red Hat in the underlying RHOSO container image. Partners are responsible for generating their container images, and the image has to go through container image certification procedure before the Red Hat OpenStack certification. Details on how to certify a container image are provided later in this guide. NOTE: Container image certification is not the same as Red Hat OpenStack certification. Container images must undergo a separate certification procedure in order to be delivered in the Red Hat Ecosystem Catalog . After a Partner's storage driver has passed Red Hat OpenStack Certification, the Partner is responsible for generating a certified container image for every subsequent minor update to the RHOSO release. For each minor RHOSO 18 update, Partners must generate an updated container image for the updated release, and publish the updated container image in the Red Hat Ecosystem Catalog. Container images for older RHOSO 18 minor updates must remain in the Red Hat Ecosystem Catalog. This ensures that customers that are not using the latest RHOSO release can still access the Partner's container image that was built for their RHOSO version. 2.2.1. Building partner container images A Partner must provide a Red Hat certified container image if their storage driver has external software dependencies that are not supplied by Red Hat's corresponding container image. An example is the cinder-volume service, which includes Partner drivers for many block storage back ends. When a Partner's driver has external software dependencies, they must provide a cinder-volume container image to layer that software on top of Red Hat's RHOSO cinder-volume container image. Create a Containerfile for generating the container image: The following example shows a sample Containerfile or Dockerfile for generating a cinder-volume container image that includes external software dependencies required by a Partner's openstack-cinder driver. The example can be adapted to generate a manila-share container image that includes external software dependencies required by a Partner's openstack-manila driver. 
FROM registry.redhat.io/rhoso/openstack-cinder-volume-rhel9:18.0.0 1 LABEL name="rhoso18/openstack-cinder-volume-partnerX-plugin" \ maintainer="[email protected]" \ vendor="PartnerX" \ summary="RHOSO 18.0 cinder-volume PartnerX PluginY" \ description="RHOSO 18.0 cinder-volume PartnerX PluginY" 2 # Switch to root to install software dependencies USER root # Enable a repo to install a package 3 COPY vendorX.repo /etc/yum.repos.d/vendorX.repo RUN dnf clean all && dnf install -y vendorX-plugin # Install a package over the network 4 RUN dnf install -y http://vendorX.com/partnerX-plugin.rpm # Install a local package 5 COPY partnerX-plugin.rpm /tmp RUN dnf install -y /tmp/partnerX-plugin.rpm && \ rm -f /tmp/partnerX-plugin.rpm # Install a python package from PyPI 6 RUN curl -OL https://bootstrap.pypa.io/get-pip.py && \ python3 get-pip.py --no-setuptools --no-wheel && \ pip3 install partnerX-plugin && \ rm -f get-pip.py # Add required license as text file(s) in /licenses directory # (GPL, MIT, APACHE, Partner End User Agreement, etc) RUN mkdir /licenses COPY licensing.txt /licenses # Switch to cinder user USER cinder 1 Use the FROM clause to specify the Red Hat's RHOSO base image, which in this example is the cinder-volume service. The 18.0 tag specifies the current latest release. To generate an image based on a specific minor release, modify the tag to specify that release, for example 18.0.1, or openstack-cinder-volume-rhel9:*18.0.1*. For RHOSO 18 GA, use the URL: registry.redhat.io/rhoso/openstack-cinder-volume-rhel9:18.0 . 2 The labels in the sample Containerfile override the corresponding labels in the base image to uniquely identify the Partner's image. 3 You can install the software dependencies by this method, or the method at 4, 5, or 6. 4 You can install the software dependencies by this method, or the method at 3, 5, or 6. 5 You can install the software dependencies by this method, or the method at 3, 4, or 6. 6 You can install the software dependencies by this method, or the method at 3, 4, or 5. Build, tag, and upload the container image: You can use the podman build or buildah build commands to build the container image. For more information on how Partners chose a registry and provide an access token to the registry for the certification, see the Red Hat Software Certification Workflow Guide . Tag the image to match the corresponding RHOSO 18 base image. For example, when the base image is version 18.0.0, the Partner's image is also tagged as version 18.0.0. You can also use the above example procedure with the file storage service, openstack-manila . Ensure that you use the appropriate RHOSO openstack-manila-share base image in place of the openstack-cinder-volume base image. Certify and publish the container image: For information on how to certify the container image, see Red Hat Enterprise Linux Software Certification Policy Guide and Red Hat Software Certification Workflow Guide . You can publish container images in the Red Hat Ecosystem Catalog . 2.2.2. Maintaining partner container images and image tags When a Partner certifies their storage solution, and if the solution includes a container image, then the Partner is responsible for rebuilding that image every time the underlying RHOSO container image changes. This happens with each RHOSO maintenance release, but it can also happen when RHOSO container images are updated to address a CVE. 
For example, if a Partner certified their solution against RHOSO 18.0.1, the Partner's container image needs two tags: 18.0.1 to indicate the specific release. 18.0 to indicate this is the latest version associated with RHOSO 18. Later, when Red Hat releases version 18.0.2, the Partner must rebuild their image and update the images and tags: The tag for the new image is 18.0.2. The older 18.0.1 image must remain in the Red Hat Ecosystem Catalog . Partners must not remove old images. Remove the 18.0 tag from the older 18.0.1 image, and add it to the new 18.0.2 image. 2.2.3. Deploying partner container images With director-based Red Hat Openstack Platform (RHOSP), you can only specify a single cinder-volume container image despite the number of back ends. However, with RHOSO, you can customize the container image per back end and configure a multi-storage backend within a single RHOSO deployment. You can use the OpenStackVersion CRD to override the container image for any service. In the following example, the CR configures two cinder-volume back ends named backend-X1 and backend-X2 to use Partner X's container image, and backend-Y to use Partner Y's container image. It also configures a manila-share back end named backend-Z to use Partner Z's manila-share container image. apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: backend-X1: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin backend-X2: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin 1 backend-Y: registry.connect.redhat.com/partnerY/openstack-cinder-volume-partnerY-plugin manilaShareImages: 2 backend-Z: registry.connect.redhat.com/partnerZ/openstack-manila-share-partnerZ-plugin 1 The cinder-volume back ends named backend-X1 and backend-X2 are both associated with Partner X's cinder-volume driver , which requires a plugin. Both back ends must specify Partner X's custom container image. 2 Use the same procedure when you deploy a Partner's custom manila-share container image. 2.3. Accessing extra files for the storage driver You can use the OpenStackControlPlane CRD extraMounts feature to provide files to the openstack-cinder and openstack-manila storage services, for example extra files that might be required by a Partner's storage driver. Consider a situation where a Partner's openstack-cinder driver requires a config.xml file that contains authentication credentials in order to access the Partner's back end storage array. You can store the contents of the XML file in a kubernetes secret, which can be created from a YAML file: apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: config.xml: | 2 <example-credentials>example</example-credentials> 3 1 The secret name is arbitrary, but the example includes the storage service and the Partner's name, "Example", for clarity. 2 The name of the file required by the Example storage driver is config.xml. 3 Sample XML data. An extraMounts entry in the cinder section of the OpenStackControlPlane CR mounts the config.xml into the cinder-volume pod associated with the cinder back end. apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderApi: ... cinderScheduler: ... 
cinderVolumes: example: 1 customServiceConfig: | [example] volume_backend_name=example volume_driver=cinder....ExampleDriver networkAttachments: - storage replicas: 1 extraMounts: 2 - extraVol: - mounts: - name: example-config mountPath: /etc/cinder/config.xml 3 subPath: config.xml 4 readOnly: true propagation: - example 5 volumes: - name: example-config secret: secretName: cinder-volume-example-config 6 1 This section is for the cinder-volume back end configuration for the Example driver. 2 This section is for the extraMounts configuration for the cinder section of the OpenStackControlPlane. Note that the extraMounts are not nested under the cinderVolumes section. 3 This section is for the mount point where the config.xml file appears in the cinder-volume pod. 4 This section is for the subPath to specify the config.xml filename. This is necessary to mount a single file in the /etc/cinder directory. 5 This section is for the propagation to specify that it applies to only the Example cinder-volume back end. The value matches the back end name in <1>. 6 The secretName matches the name of the secret created previously. | [
"FROM registry.redhat.io/rhoso/openstack-cinder-volume-rhel9:18.0.0 1 LABEL name=\"rhoso18/openstack-cinder-volume-partnerX-plugin\" maintainer=\"[email protected]\" vendor=\"PartnerX\" summary=\"RHOSO 18.0 cinder-volume PartnerX PluginY\" description=\"RHOSO 18.0 cinder-volume PartnerX PluginY\" 2 Switch to root to install software dependencies USER root Enable a repo to install a package 3 COPY vendorX.repo /etc/yum.repos.d/vendorX.repo RUN dnf clean all && dnf install -y vendorX-plugin Install a package over the network 4 RUN dnf install -y http://vendorX.com/partnerX-plugin.rpm Install a local package 5 COPY partnerX-plugin.rpm /tmp RUN dnf install -y /tmp/partnerX-plugin.rpm && rm -f /tmp/partnerX-plugin.rpm Install a python package from PyPI 6 RUN curl -OL https://bootstrap.pypa.io/get-pip.py && python3 get-pip.py --no-setuptools --no-wheel && pip3 install partnerX-plugin && rm -f get-pip.py Add required license as text file(s) in /licenses directory (GPL, MIT, APACHE, Partner End User Agreement, etc) RUN mkdir /licenses COPY licensing.txt /licenses Switch to cinder user USER cinder",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackVersion metadata: name: openstack spec: customContainerImages: cinderVolumeImages: backend-X1: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin backend-X2: registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin 1 backend-Y: registry.connect.redhat.com/partnerY/openstack-cinder-volume-partnerY-plugin manilaShareImages: 2 backend-Z: registry.connect.redhat.com/partnerZ/openstack-manila-share-partnerZ-plugin",
"apiVersion: v1 kind: Secret metadata: name: cinder-volume-example-config 1 type: Opaque stringData: config.xml: | 2 <example-credentials>example</example-credentials> 3",
"apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: cinder: template: cinderApi: cinderScheduler: cinderVolumes: example: 1 customServiceConfig: | [example] volume_backend_name=example volume_driver=cinder....ExampleDriver networkAttachments: - storage replicas: 1 extraMounts: 2 - extraVol: - mounts: - name: example-config mountPath: /etc/cinder/config.xml 3 subPath: config.xml 4 readOnly: true propagation: - example 5 volumes: - name: example-config secret: secretName: cinder-volume-example-config 6"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/integrating_partner_content/integrating-rhoso-storage-services_osp |
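The build, tag, and upload step described above might look like the following sketch with podman. The local image name, registry path, and tags are hypothetical; align the tags with the RHOSO base image version as explained in the tagging section:

podman build -t openstack-cinder-volume-partnerX-plugin:18.0.0 -f Containerfile .
podman tag openstack-cinder-volume-partnerX-plugin:18.0.0 registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin:18.0.0
podman tag openstack-cinder-volume-partnerX-plugin:18.0.0 registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin:18.0
podman push registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin:18.0.0
podman push registry.connect.redhat.com/partnerX/openstack-cinder-volume-partnerX-plugin:18.0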
14.9. Rebasing a Backing File of an Image | 14.9. Rebasing a Backing File of an Image The qemu-img rebase command changes the backing file of an image. The backing file is changed to backing_file and, if the format of filename supports the feature, the backing file format is changed to backing_format . Note Only the qcow2 format supports changing the backing file (rebase). There are two different modes in which rebase can operate: safe and unsafe . safe mode is used by default and performs a real rebase operation. The new backing file may differ from the old one and the qemu-img rebase command will take care of keeping the guest virtual machine-visible content of filename unchanged. In order to achieve this, any clusters that differ between backing_file and the old backing file of filename are merged into filename before making any changes to the backing file. Note that safe mode is an expensive operation, comparable to converting an image. The old backing file is required for it to complete successfully. unsafe mode is used if the -u option is passed to qemu-img rebase . In this mode, only the backing file name and format of filename are changed, without any checks taking place on the file contents. Make sure the new backing file is specified correctly or the guest-visible content of the image will be corrupted. This mode is useful for renaming or moving the backing file. It can be used without an accessible old backing file. For instance, it can be used to fix an image whose backing file has already been moved or renamed. | [
"qemu-img rebase [-f fmt ] [-t cache ] [-p] [-u] -b backing_file [-F backing_fmt ] filename"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using_qemu_img-re_basing_a_backing_file_of_an_image |
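As a concrete sketch of the two modes, assume a qcow2 image named guest.qcow2; the backing file paths are illustrative only:

# Safe mode (default): switch to a new backing file while the old one is still accessible
qemu-img rebase -f qcow2 -b /var/lib/libvirt/images/new-base.qcow2 guest.qcow2

# Unsafe mode: only update the recorded backing file name, for example after the backing file was renamed; no content checks are performed
qemu-img rebase -u -f qcow2 -b /var/lib/libvirt/images/renamed-base.qcow2 guest.qcow2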
Preface | Preface As a data scientist, you can enhance your data science projects on OpenShift AI by building portable machine learning (ML) workflows with data science pipelines, using Docker containers. This enables you to standardize and automate machine learning workflows so that you can develop and deploy your data science models. For example, the steps in a machine learning workflow might include items such as data extraction, data processing, feature extraction, model training, model validation, and model serving. Automating these activities enables your organization to develop a continuous process of retraining and updating a model based on newly received data. This can help address challenges related to building an integrated machine learning deployment and continuously operating it in production. You can also use the Elyra JupyterLab extension to create and run data science pipelines within JupyterLab. For more information, see Working with pipelines in JupyterLab . Data science pipelines in OpenShift AI are now based on KubeFlow Pipelines (KFP) version 2.0 . For more information, see Migrating to data science pipelines 2.0 . To use a data science pipeline in OpenShift AI, you need the following components: Pipeline server : A server that is attached to your data science project and hosts your data science pipeline. Pipeline : A pipeline defines the configuration of your machine learning workflow and the relationship between each component in the workflow. Pipeline code: A definition of your pipeline in a YAML file. Pipeline graph: A graphical illustration of the steps executed in a pipeline run and the relationship between them. Pipeline experiment : A workspace where you can try different configurations of your pipelines. You can use experiments to organize your runs into logical groups. Archived pipeline experiment: An archived pipeline experiment. Pipeline artifact: An output artifact produced by a pipeline component. Pipeline execution: The execution of a task in a pipeline. Pipeline run : An execution of your pipeline. Active run: A pipeline run that is executing, or stopped. Scheduled run: A pipeline run that is scheduled to execute at least once. Archived run: An archived pipeline run. This feature is based on Kubeflow Pipelines 2.0. Use the latest Kubeflow Pipelines 2.0 SDK to build your data science pipeline in Python code. After you have built your pipeline, use the SDK to compile it into an Intermediate Representation (IR) YAML file. The OpenShift AI user interface enables you to track and manage pipelines, experiments, and pipeline runs. To view a record of previously executed, scheduled, and archived runs, you can go to Data Science Pipelines → Runs , or you can select an experiment on the Experiments → Experiments and runs page to access all of its pipeline runs. You can manage incremental changes to pipelines in OpenShift AI by using versioning. This allows you to develop and deploy pipelines iteratively, preserving a record of your changes. You can store your pipeline artifacts in an S3-compatible object storage bucket so that you do not consume local storage. To do this, you must first configure write access to your S3 bucket on your storage account. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_data_science_pipelines/pr01 |
10.5. Managing Translator Settings Using Management CLI | 10.5. Managing Translator Settings Using Management CLI To manage JBoss Data Virtualization translator settings, you can use the same commands as those used for the base JBoss Data Virtualization settings, specifying a particular translator in the command. For example: Available translator names are listed under translator when you run the following command to output current JBoss Data Virtualization settings: | [
"/subsystem=teiid/translator= TRANSLATOR_NAME :read-resource",
"/subsystem=teiid:read-resource"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/administration_and_configuration_guide/Managing_Translator_Settings_Using_Management_CLI |
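For example, to inspect the settings of a single translator, substitute its name into the first command shown above. The oracle translator used here is only an illustration; the names available depend on the translators deployed in your installation:

/subsystem=teiid/translator=oracle:read-resource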
Chapter 4. Fence Devices | Chapter 4. Fence Devices This chapter documents the fence devices currently supported in Red Hat Enterprise Linux High-Availability Add-On. Table 4.1, "Fence Device Summary" lists the fence devices, the fence device agents associated with the fence devices, and provides a reference to the table documenting the parameters for the fence devices. Table 4.1. Fence Device Summary Fence Device Fence Agent Reference to Parameter Description APC Power Switch (telnet/SSH) fence_apc Table 4.2, "APC Power Switch (telnet/SSH)" APC Power Switch over SNMP fence_apc_snmp Table 4.3, "APC Power Switch over SNMP" Brocade Fabric Switch fence_brocade Table 4.4, "Brocade Fabric Switch" Cisco MDS fence_cisco_mds Table 4.5, "Cisco MDS" Cisco UCS fence_cisco_ucs Table 4.6, "Cisco UCS" Dell DRAC 5 fence_drac5 Table 4.7, "Dell DRAC 5" Dell iDRAC fence_idrac Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Eaton Network Power Switch (SNMP Interface) fence_eaton_snmp Table 4.8, "Eaton Network Power Controller (SNMP Interface) (Red Hat Enterprise Linux 6.4 and later)" Egenera BladeFrame fence_egenera Table 4.9, "Egenera BladeFrame" Emerson Network Power Switch (SNMP Interface) fence_emerson Table 4.10, "Emerson Network Power Switch (SNMP interface) (Red Hat Enterprise Linux 6.7 and later)" ePowerSwitch fence_eps Table 4.11, "ePowerSwitch" Fence virt (Serial/VMChannel Mode) fence_virt Table 4.12, "Fence virt (Serial/VMChannel Mode)" Fence virt (fence_xvm/Multicast Mode) fence_xvm Table 4.13, "Fence virt (Multicast Mode) " Fujitsu Siemens Remoteview Service Board (RSB) fence_rsb Table 4.14, "Fujitsu Siemens Remoteview Service Board (RSB)" HP BladeSystem fence_hpblade Table 4.15, "HP BladeSystem (Red Hat Enterprise Linux 6.4 and later)" HP iLO Device fence_ilo Table 4.16, "HP iLO (Integrated Lights Out) and HP iLO2" HP iLO over SSH Device fence_ilo3_ssh Table 4.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO4 Device fence_ilo4 Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" HP iLO4 over SSH Device fence_ilo4_ssh Table 4.17, "HP iLO over SSH, HP iLO3 over SSH, HPiLO4 over SSH (Red Hat Enterprise Linux 6.7 and later)" HP iLO MP fence_ilo_mp Table 4.18, "HP iLO (Integrated Lights Out) MP" HP Moonshot iLO fence_ilo_moonshot Table 4.19, "HP Moonshot iLO (Red Hat Enterprise Linux 6.7 and later)" IBM BladeCenter fence_bladecenter Table 4.20, "IBM BladeCenter" IBM BladeCenter SNMP fence_ibmblade Table 4.21, "IBM BladeCenter SNMP" IBM Integrated Management Module fence_imm Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" IBM iPDU fence_ipdu Table 4.22, "IBM iPDU (Red Hat Enterprise Linux 6.4 and later)" IF MIB fence_ifmib Table 4.23, "IF MIB" Intel Modular fence_intelmodular Table 4.24, "Intel Modular" IPMI (Intelligent Platform Management Interface) Lan fence_ipmilan Table 4.25, "IPMI (Intelligent Platform Management Interface) LAN, Dell iDrac, IBM Integrated Management Module, HPiLO3, HPiLO4" Fence kdump fence_kdump Table 4.26, "Fence kdump" Multipath Persistent Reservation Fencing fence_mpath Table 4.27, "Multipath Persistent Reservation Fencing (Red Hat Enterprise Linux 6.7 and later)" RHEV-M fencing fence_rhevm Table 4.28, "RHEV-M REST API (RHEL 6.2 and later against RHEV 3.0 and later)" SCSI Fencing 
fence_scsi Table 4.29, "SCSI Reservation Fencing" VMware Fencing (SOAP Interface) fence_vmware_soap Table 4.30, "VMware Fencing (SOAP Interface) (Red Hat Enterprise Linux 6.2 and later)" WTI Power Switch fence_wti Table 4.31, "WTI Power Switch" 4.1. APC Power Switch over Telnet and SSH Table 4.2, "APC Power Switch (telnet/SSH)" lists the fence device parameters used by fence_apc , the fence agent for APC over telnet/SSH. Table 4.2. APC Power Switch (telnet/SSH) luci Field cluster.conf Attribute Description Name name A name for the APC device connected to the cluster into which the fence daemon logs by means of telnet/ssh. IP Address or Hostname ipaddr The IP address or host name assigned to the device. IP Port (optional) ipport The TCP port to use to connect to the device. The default port is 23, or 22 if Use SSH is selected. Login login The login name used to access the device. Password passwd The password used to authenticate the connection to the device. Password Script (optional) passwd_script The script that supplies a password for access to the fence device. Using this supersedes the Password parameter. Power Wait (seconds) power_wait Number of seconds to wait after issuing a power off or power on command. Power Timeout (seconds) power_timeout Number of seconds to continue testing for a status change after issuing a power off or power on command. The default value is 20. Shell Timeout (seconds) shell_timeout Number of seconds to wait for a command prompt after issuing a command. The default value is 3. Login Timeout (seconds) login_timeout Number of seconds to wait for a command prompt after login. The default value is 5. Times to Retry Power On Operation retry_on Number of attempts to retry a power on operation. The default value is 1. Port port The port. Switch (optional) switch The switch number for the APC switch that connects to the node when you have multiple daisy-chained switches. Delay (optional) delay The number of seconds to wait before fencing is started. The default value is 0. Use SSH secure Indicates that system will use SSH to access the device. When using SSH, you must specify either a password, a password script, or an identity file. SSH Options ssh_options SSH options to use. The default value is -1 -c blowfish . Path to SSH Identity File identity_file The identity file for SSH. Figure 4.1, "APC Power Switch" shows the configuration screen for adding an APC Power Switch fence device. Figure 4.1. APC Power Switch The following command creates a fence device instance for a APC device: The following is the cluster.conf entry for the fence_apc device: | [
"ccs -f cluster.conf --addfencedev apc agent=fence_apc ipaddr=192.168.0.1 login=root passwd=password123",
"<fencedevices> <fencedevice agent=\"fence_apc\" name=\"apc\" ipaddr=\"apc-telnet.example.com\" login=\"root\" passwd=\"password123\"/> </fencedevices>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/ch-fence-devices-ca |
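After the fence device instance is created as shown above, it is typically associated with a fence method on each cluster node. A minimal sketch with ccs might look like the following; the node name, method name, and port value are assumptions for the example:

ccs -f cluster.conf --addmethod APC node1.example.com
ccs -f cluster.conf --addfenceinst apc node1.example.com APC port=1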
Chapter 5. Configuring OAuth authorization | Chapter 5. Configuring OAuth authorization This section describes how to connect Red Hat CodeReady Workspaces as an OAuth application to supported OAuth providers. Configuring GitHub OAuth Configuring OpenShift OAuth 5.1. Configuring GitHub OAuth OAuth for GitHub allows for automatic SSH key upload to GitHub. Procedure Set up the GitHub OAuth client . The Authorization callback URL is filled in during the next steps. Go to the RH-SSO administration console and select the Identity Providers tab. Select the GitHub identity provider in the drop-down list. Paste the Redirect URI into the Authorization callback URL of the GitHub OAuth application. Fill in the Client ID and Client Secret from the GitHub OAuth app. Enable Store Tokens . Save the changes to the GitHub identity provider and click Register application on the GitHub OAuth app page. 5.2. Configuring OpenShift OAuth For users to interact with OpenShift, they must first authenticate to the OpenShift cluster. OpenShift OAuth is the process by which users prove their identity to a cluster through an API by using OAuth access tokens. Authentication with the OpenShift connector plugin is one way for CodeReady Workspaces users to authenticate with an OpenShift cluster. The following section describes the OpenShift OAuth configuration options and their use with CodeReady Workspaces. Prerequisites The OpenShift command-line tool, oc , is installed. Procedure To enable OpenShift OAuth automatically, deploy CodeReady Workspaces by using crwctl with the --os-oauth option. See the crwctl server:start specification chapter. For CodeReady Workspaces deployed in single-user mode: Register the CodeReady Workspaces OAuth client in OpenShift. See the Register an OAuth client in OpenShift chapter. Add the OpenShift SSL certificate to the CodeReady Workspaces Java trust store. See Adding self-signed SSL certificates to CodeReady Workspaces . Update the OpenShift deployment configuration. <client-ID> a name specified in the OpenShift OAuthClient. <openshift-secret> a secret specified in the OpenShift OAuthClient. <oauth-endpoint> the URL of the OpenShift OAuth service: For OpenShift 3 specify the OpenShift master URL. For OpenShift 4 specify the oauth-openshift route. <verify-token-url> the request URL that is used to verify the token. <OpenShift master url>/api can be used for OpenShift 3 and 4. See CodeReady Workspaces configMaps and their behavior . | [
"oc create -f <(echo ' kind: OAuthClient apiVersion: oauth.openshift.io/v1 metadata: name: che secret: \"<random set of symbols>\" redirectURIs: - \"<CodeReady Workspaces api url>/oauth/callback\" grantMethod: prompt ')",
"CHE_OAUTH_OPENSHIFT_CLIENTID: <client-ID> CHE_OAUTH_OPENSHIFT_CLIENTSECRET: <openshift-secret> CHE_OAUTH_OPENSHIFT_OAUTH__ENDPOINT: <oauth-endpoint> CHE_OAUTH_OPENSHIFT_VERIFY__TOKEN__URL: <verify-token-url>"
]
| https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/end-user_guide/configuring-oauth-authorization_crw |
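When filling in <oauth-endpoint> on OpenShift 4, the host of the oauth-openshift route can be read directly from the cluster. This sketch assumes cluster-admin access with oc:

oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'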
Chapter 5. CIDR range definitions | Chapter 5. CIDR range definitions If your cluster uses OVN-Kubernetes, you must specify non-overlapping Classless Inter-Domain Routing (CIDR) subnet ranges. Important For OpenShift Dedicated 4.17 and later versions, clusters use 169.254.0.0/17 for IPv4 and fd69::/112 for IPv6 as the default masquerade subnet. These ranges should also be avoided by users. For upgraded clusters, there is no change to the default masquerade subnet. The following subnet types are mandatory for a cluster that uses OVN-Kubernetes: Join: Uses a join switch to connect gateway routers to distributed routers. A join switch reduces the number of IP addresses for a distributed router. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the join switch. Masquerade: Prevents collisions for identical source and destination IP addresses that are sent from a node as hairpin traffic to the same node after a load balancer makes a routing decision. Transit: A transit switch is a type of distributed switch that spans all nodes in the cluster. A transit switch routes traffic between different zones. For a cluster that uses the OVN-Kubernetes plugin, an IP address from a dedicated subnet is assigned to any logical port that attaches to the transit switch. Note You can change the join, masquerade, and transit CIDR ranges for your cluster as a post-installation task. When specifying subnet CIDR ranges, ensure that the subnet CIDR range is within the defined Machine CIDR. You must verify that the subnet CIDR ranges allow for enough IP addresses for all intended workloads, depending on the platform that hosts the cluster. OVN-Kubernetes, the default network provider in OpenShift Dedicated 4.14 and later versions, internally uses the following IP address subnet ranges: V4JoinSubnet : 100.64.0.0/16 V6JoinSubnet : fd98::/64 V4TransitSwitchSubnet : 100.88.0.0/16 V6TransitSwitchSubnet : fd97::/64 defaultV4MasqueradeSubnet : 169.254.0.0/17 defaultV6MasqueradeSubnet : fd69::/112 Important The list includes join, transit, and masquerade IPv4 and IPv6 address subnets. If your cluster uses OVN-Kubernetes, do not include any of these IP address subnet ranges in any other CIDR definitions in your cluster or infrastructure. 5.1. Machine CIDR In the Machine classless inter-domain routing (CIDR) field, you must specify the IP address range for machines or cluster nodes. Note Machine CIDR ranges cannot be changed after creating your cluster. This range must encompass all CIDR address ranges for your virtual private cloud (VPC) subnets. Subnets must be contiguous. A minimum IP address range of 128 addresses, using the subnet prefix /25 , is supported for single availability zone deployments. A minimum address range of 256 addresses, using the subnet prefix /24 , is supported for deployments that use multiple availability zones. The default is 10.0.0.0/16 . This range must not conflict with any connected networks. 5.2. Service CIDR In the Service CIDR field, you must specify the IP address range for services. It is recommended, but not required, that the address block is the same between clusters. This will not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 172.30.0.0/16 . 5.3. Pod CIDR In the pod CIDR field, you must specify the IP address range for pods.
It is recommended, but not required, that the address block is the same between clusters. This will not create IP address conflicts. The range must be large enough to accommodate your workload. The address block must not overlap with any external service accessed from within the cluster. The default is 10.128.0.0/14 . 5.4. Host Prefix In the Host Prefix field, you must specify the subnet prefix length assigned to pods scheduled to individual machines. The host prefix determines the pod IP address pool for each machine. For example, if the host prefix is set to /23 , each machine is assigned a /23 subnet from the pod CIDR address range. The default is /23 , allowing 512 cluster nodes, and 512 pods per node (both of which are beyond our maximum supported). | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/cidr-range-definitions |
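As an illustration of non-overlapping values, the defaults described above map onto a networking block similar to the following sketch. The exact mechanism for supplying these values depends on how the cluster is created, so treat this only as an example of ranges that do not collide with each other or with the internal OVN-Kubernetes subnets:

networking:
  machineNetwork:
    - cidr: 10.0.0.0/16
  clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16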
Chapter 4. ClusterCSIDriver [operator.openshift.io/v1] | Chapter 4. ClusterCSIDriver [operator.openshift.io/v1] Description ClusterCSIDriver object allows management and configuration of a CSI driver operator installed by default in OpenShift. Name of the object must be name of the CSI driver it operates. See CSIDriverName type for list of allowed values. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 4.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description driverConfig object driverConfig can be used to specify platform specific driver configuration. When omitted, this means no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. logLevel string logLevel is an intent based logging for an overall component. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for their operands. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". managementState string managementState indicates whether and how the operator should manage the component observedConfig `` observedConfig holds a sparse config that controller has observed from the cluster state. It exists in spec because it is an input to the level for the operator operatorLogLevel string operatorLogLevel is an intent based logging for the operator itself. It does not give fine grained control, but it is a simple way to manage coarse grained logging choices that operators have to interpret for themselves. Valid values are: "Normal", "Debug", "Trace", "TraceAll". Defaults to "Normal". storageClassState string StorageClassState determines if CSI operator should create and manage storage classes. If this field value is empty or Managed - CSI operator will continuously reconcile storage class and create if necessary. If this field value is Unmanaged - CSI operator will not reconcile any previously created storage class. If this field value is Removed - CSI operator will delete the storage class it created previously. When omitted, this means the user has no opinion and the platform chooses a reasonable default, which is subject to change over time. The current default behaviour is Managed. unsupportedConfigOverrides `` unsupportedConfigOverrides holds a sparse config that will override any previously set options. 
It only needs to be the fields to override it will end up overlaying in the following order: 1. hardcoded defaults 2. observedConfig 3. unsupportedConfigOverrides 4.1.2. .spec.driverConfig Description driverConfig can be used to specify platform specific driver configuration. When omitted, this means no opinion and the platform is left to choose reasonable defaults. These defaults are subject to change over time. Type object Required driverType Property Type Description aws object aws is used to configure the AWS CSI driver. azure object azure is used to configure the Azure CSI driver. driverType string driverType indicates type of CSI driver for which the driverConfig is being applied to. Valid values are: AWS, Azure, GCP, vSphere and omitted. Consumers should treat unknown values as a NO-OP. gcp object gcp is used to configure the GCP CSI driver. vSphere object vsphere is used to configure the vsphere CSI driver. 4.1.3. .spec.driverConfig.aws Description aws is used to configure the AWS CSI driver. Type object Property Type Description kmsKeyARN string kmsKeyARN sets the cluster default storage class to encrypt volumes with a user-defined KMS key, rather than the default KMS key used by AWS. The value may be either the ARN or Alias ARN of a KMS key. 4.1.4. .spec.driverConfig.azure Description azure is used to configure the Azure CSI driver. Type object Property Type Description diskEncryptionSet object diskEncryptionSet sets the cluster default storage class to encrypt volumes with a customer-managed encryption set, rather than the default platform-managed keys. 4.1.5. .spec.driverConfig.azure.diskEncryptionSet Description diskEncryptionSet sets the cluster default storage class to encrypt volumes with a customer-managed encryption set, rather than the default platform-managed keys. Type object Required name resourceGroup subscriptionID Property Type Description name string name is the name of the disk encryption set that will be set on the default storage class. The value should consist of only alphanumberic characters, underscores (_), hyphens, and be at most 80 characters in length. resourceGroup string resourceGroup defines the Azure resource group that contains the disk encryption set. The value should consist of only alphanumberic characters, underscores (_), parentheses, hyphens and periods. The value should not end in a period and be at most 90 characters in length. subscriptionID string subscriptionID defines the Azure subscription that contains the disk encryption set. The value should meet the following conditions: 1. It should be a 128-bit number. 2. It should be 36 characters (32 hexadecimal characters and 4 hyphens) long. 3. It should be displayed in five groups separated by hyphens (-). 4. The first group should be 8 characters long. 5. The second, third, and fourth groups should be 4 characters long. 6. The fifth group should be 12 characters long. An Example SubscrionID: f2007bbf-f802-4a47-9336-cf7c6b89b378 4.1.6. .spec.driverConfig.gcp Description gcp is used to configure the GCP CSI driver. Type object Property Type Description kmsKey object kmsKey sets the cluster default storage class to encrypt volumes with customer-supplied encryption keys, rather than the default keys managed by GCP. 4.1.7. .spec.driverConfig.gcp.kmsKey Description kmsKey sets the cluster default storage class to encrypt volumes with customer-supplied encryption keys, rather than the default keys managed by GCP. 
Type object Required keyRing name projectID Property Type Description keyRing string keyRing is the name of the KMS Key Ring which the KMS Key belongs to. The value should correspond to an existing KMS key ring and should consist of only alphanumeric characters, hyphens (-) and underscores (_), and be at most 63 characters in length. location string location is the GCP location in which the Key Ring exists. The value must match an existing GCP location, or "global". Defaults to global, if not set. name string name is the name of the customer-managed encryption key to be used for disk encryption. The value should correspond to an existing KMS key and should consist of only alphanumeric characters, hyphens (-) and underscores (_), and be at most 63 characters in length. projectID string projectID is the ID of the Project in which the KMS Key Ring exists. It must be 6 to 30 lowercase letters, digits, or hyphens. It must start with a letter. Trailing hyphens are prohibited. 4.1.8. .spec.driverConfig.vSphere Description vsphere is used to configure the vsphere CSI driver. Type object Property Type Description topologyCategories array (string) topologyCategories indicates tag categories with which vcenter resources such as hostcluster or datacenter were tagged with. If cluster Infrastructure object has a topology, values specified in Infrastructure object will be used and modifications to topologyCategories will be rejected. 4.1.9. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions is a list of conditions and their status conditions[] object OperatorCondition is just the standard condition fields. generations array generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. generations[] object GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. observedGeneration integer observedGeneration is the last generation change you've dealt with readyReplicas integer readyReplicas indicates how many replicas are ready and at the desired state version string version is the level this availability applies to 4.1.10. .status.conditions Description conditions is a list of conditions and their status Type array 4.1.11. .status.conditions[] Description OperatorCondition is just the standard condition fields. Type object Property Type Description lastTransitionTime string message string reason string status string type string 4.1.12. .status.generations Description generations are used to determine when an item needs to be reconciled or has changed in a way that needs a reaction. Type array 4.1.13. .status.generations[] Description GenerationStatus keeps track of the generation for a given resource so that decisions about forced updates can be made. Type object Property Type Description group string group is the group of the thing you're tracking hash string hash is an optional field set for resources without generation that are content sensitive like secrets and configmaps lastGeneration integer lastGeneration is the last generation of the workload controller involved name string name is the name of the thing you're tracking namespace string namespace is where the thing you're tracking is resource string resource is the resource type of the thing you're tracking 4.2. 
API endpoints The following API endpoints are available: /apis/operator.openshift.io/v1/clustercsidrivers DELETE : delete collection of ClusterCSIDriver GET : list objects of kind ClusterCSIDriver POST : create a ClusterCSIDriver /apis/operator.openshift.io/v1/clustercsidrivers/{name} DELETE : delete a ClusterCSIDriver GET : read the specified ClusterCSIDriver PATCH : partially update the specified ClusterCSIDriver PUT : replace the specified ClusterCSIDriver /apis/operator.openshift.io/v1/clustercsidrivers/{name}/status GET : read status of the specified ClusterCSIDriver PATCH : partially update status of the specified ClusterCSIDriver PUT : replace status of the specified ClusterCSIDriver 4.2.1. /apis/operator.openshift.io/v1/clustercsidrivers Table 4.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ClusterCSIDriver Table 4.2. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. 
This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.3. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ClusterCSIDriver Table 4.4. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. 
Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 4.5. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriverList schema 401 - Unauthorized Empty HTTP method POST Description create a ClusterCSIDriver Table 4.6. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.7. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.8. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 202 - Accepted ClusterCSIDriver schema 401 - Unauthorized Empty 4.2.2. /apis/operator.openshift.io/v1/clustercsidrivers/{name} Table 4.9. Global path parameters Parameter Type Description name string name of the ClusterCSIDriver Table 4.10. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ClusterCSIDriver Table 4.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 4.12. Body parameters Parameter Type Description body DeleteOptions schema Table 4.13. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ClusterCSIDriver Table 4.14. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.15. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ClusterCSIDriver Table 4.16. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.17. Body parameters Parameter Type Description body Patch schema Table 4.18. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ClusterCSIDriver Table 4.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. 
- Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.20. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.21. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 401 - Unauthorized Empty 4.2.3. /apis/operator.openshift.io/v1/clustercsidrivers/{name}/status Table 4.22. Global path parameters Parameter Type Description name string name of the ClusterCSIDriver Table 4.23. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ClusterCSIDriver Table 4.24. Query parameters Parameter Type Description resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset Table 4.25. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ClusterCSIDriver Table 4.26. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.27. Body parameters Parameter Type Description body Patch schema Table 4.28. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ClusterCSIDriver Table 4.29. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 4.30. Body parameters Parameter Type Description body ClusterCSIDriver schema Table 4.31. HTTP responses HTTP code Reponse body 200 - OK ClusterCSIDriver schema 201 - Created ClusterCSIDriver schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operator_apis/clustercsidriver-operator-openshift-io-v1 |
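To tie the schema above to a concrete object: a minimal ClusterCSIDriver that configures the AWS EBS CSI driver to encrypt default-storage-class volumes with a customer-managed KMS key might look like the sketch below. The object name must match the CSI driver being managed (ebs.csi.aws.com in this sketch), and the ARN is a placeholder to replace with your own key.

apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: ebs.csi.aws.com
spec:
  managementState: Managed
  logLevel: Normal
  storageClassState: Managed
  driverConfig:
    driverType: AWS
    aws:
      kmsKeyARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab

You can create or update such an object with oc apply, which ultimately uses the POST, PUT, and PATCH endpoints listed in section 4.2.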
Chapter 3. Configuring proxy support for Red Hat Ansible Automation Platform | Chapter 3. Configuring proxy support for Red Hat Ansible Automation Platform You can configure Red Hat Ansible Automation Platform to communicate with traffic using a proxy. Proxy servers act as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service or available resource from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. The following sections describe the supported proxy configurations and how to set them up. 3.1. Enable proxy support To provide proxy server support, automation controller handles proxied requests (such as ALB, NLB , HAProxy, Squid, Nginx and tinyproxy in front of automation controller) via the REMOTE_HOST_HEADERS list variable in the automation controller settings. By default, REMOTE_HOST_HEADERS is set to ["REMOTE_ADDR", "REMOTE_HOST"] . To enable proxy server support, edit the REMOTE_HOST_HEADERS field in the settings page for your automation controller: Procedure On your automation controller, navigate to Settings Miscellaneous System . In the REMOTE_HOST_HEADERS field, enter the following values: [ "HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST" ] Automation controller determines the remote host's IP address by searching through the list of headers in REMOTE_HOST_HEADERS until the first IP address is located. 3.2. Known proxies When automation controller is configured with REMOTE_HOST_HEADERS = ['HTTP_X_FORWARDED_FOR', 'REMOTE_ADDR', 'REMOTE_HOST'] , it assumes that the value of X-Forwarded-For has originated from the proxy/load balancer sitting in front of automation controller. If automation controller is reachable without use of the proxy/load balancer, or if the proxy does not validate the header, the value of X-Forwarded-For can be falsified to fake the originating IP addresses. Using HTTP_X_FORWARDED_FOR in the REMOTE_HOST_HEADERS setting poses a vulnerability. To avoid this, you can configure a list of known proxies that are allowed using the PROXY_IP_ALLOWED_LIST field in the settings menu on your automation controller. Load balancers and hosts that are not on the known proxies list will result in a rejected request. 3.2.1. Configuring known proxies To configure a list of known proxies for your automation controller, add the proxy IP addresses to the PROXY_IP_ALLOWED_LIST field in the settings page for your automation controller. Procedure On your automation controller, navigate to Settings Miscellaneous System . In the PROXY_IP_ALLOWED_LIST field, enter IP addresses that are allowed to connect to your automation controller, following the syntax in the example below: Example PROXY_IP_ALLOWED_LIST entry Important PROXY_IP_ALLOWED_LIST requires proxies in the list are properly sanitizing header input and correctly setting an X-Forwarded-For value equal to the real source IP of the client. Automation controller can rely on the IP addresses and hostnames in PROXY_IP_ALLOWED_LIST to provide non-spoofed values for the X-Forwarded-For field. 
Do not configure HTTP_X_FORWARDED_FOR as an item in `REMOTE_HOST_HEADERS` unless all of the following conditions are satisfied: You are using a proxied environment with SSL termination; The proxy provides sanitization or validation of the X-Forwarded-For header to prevent client spoofing; /etc/tower/conf.d/remote_host_headers.py defines PROXY_IP_ALLOWED_LIST that contains only the originating IP addresses of trusted proxies or load balancers. 3.3. Configuring a reverse proxy You can support a reverse proxy server configuration by adding HTTP_X_FORWARDED_FOR to the REMOTE_HOST_HEADERS field in your automation controller settings. The X-Forwarded-For (XFF) HTTP header field identifies the originating IP address of a client connecting to a web server through an HTTP proxy or load balancer. Procedure On your automation controller, navigate to Settings Miscellaneous System . In the REMOTE_HOST_HEADERS field, enter the following values: [ "HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST" ] 3.4. Enable sticky sessions By default, an Application Load Balancer routes each request independently to a registered target based on the chosen load-balancing algorithm. To avoid authentication errors when running multiple instances of automation hub behind a load balancer, you must enable sticky sessions. Enabling sticky sessions sets a custom application cookie that matches the cookie configured on the load balancer to enable stickiness. This custom cookie can include any of the cookie attributes required by the application. Additional resources Refer to Sticky sessions for your Application Load Balancer for more information about enabling sticky sessions. Disclaimer: Links contained in this note to external website(s) are provided for convenience only. Red Hat has not reviewed the links and is not responsible for the content or its availability. The inclusion of any link to an external website does not imply endorsement by Red Hat of the website or their entities, products or services. You agree that Red Hat is not responsible or liable for any loss or expenses that may result due to your use of (or reliance on) the external site or content. | [
"[ \"HTTP_X_FORWARDED_FOR\", \"REMOTE_ADDR\", \"REMOTE_HOST\" ]",
"[ \"example1.proxy.com:8080\", \"example2.proxy.com:8080\" ]",
"[ \"HTTP_X_FORWARDED_FOR\", \"REMOTE_ADDR\", \"REMOTE_HOST\" ]"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_operations_guide/assembly-configuring-proxy-support |
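The UI steps above can also be scripted. As a sketch only, assuming the controller exposes these values through its settings API at /api/v2/settings/system/ and that you have an admin OAuth token, a PATCH request similar to the following would set both fields; the host name, token, and proxy entries are placeholders:

curl -X PATCH https://controller.example.com/api/v2/settings/system/ \
  -H "Authorization: Bearer <admin_token>" \
  -H "Content-Type: application/json" \
  -d '{"REMOTE_HOST_HEADERS": ["HTTP_X_FORWARDED_FOR", "REMOTE_ADDR", "REMOTE_HOST"], "PROXY_IP_ALLOWED_LIST": ["example1.proxy.com:8080", "example2.proxy.com:8080"]}'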
1.2. High Availability Add-On Introduction The High Availability Add-On is an integrated set of software components that can be deployed in a variety of configurations to suit your needs for performance, high availability, load balancing, scalability, file sharing, and economy. The High Availability Add-On consists of the following major components: Cluster infrastructure - Provides fundamental functions for nodes to work together as a cluster: configuration-file management, membership management, lock management, and fencing. High Availability Service Management - Provides failover of services from one cluster node to another in case a node becomes inoperative. Cluster administration tools - Configuration and management tools for setting up, configuring, and managing the High Availability Add-On. The tools are for use with the Cluster Infrastructure components, the High Availability Service Management components, and storage. You can supplement the High Availability Add-On with the following components: Red Hat GFS2 (Global File System 2) - Part of the Resilient Storage Add-On, this provides a cluster file system for use with the High Availability Add-On. GFS2 allows multiple nodes to share storage at a block level as if the storage were connected locally to each cluster node. The GFS2 cluster file system requires a cluster infrastructure. Cluster Logical Volume Manager (CLVM) - Part of the Resilient Storage Add-On, this provides volume management of cluster storage. CLVM support also requires cluster infrastructure. Load Balancer Add-On - Routing software that provides IP load balancing. The Load Balancer Add-On runs in a pair of redundant virtual servers that distribute client requests evenly to real servers that are behind the virtual servers. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/s1-rhcs-intro-cso
Chapter 1. Installing dynamic plugins with the Red Hat Developer Hub Operator You can store the configuration for dynamic plugins in a ConfigMap object that your Backstage custom resource (CR) can reference. Note If the pluginConfig field references environment variables, you must define the variables in your secrets-rhdh secret. Procedure From the OpenShift Container Platform web console, select the ConfigMaps tab. Click Create ConfigMap . From the Create ConfigMap page, select the YAML view option in Configure via and edit the file, if needed. Example ConfigMap object using the GitHub dynamic plugin kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: "${GITHUB_ORG}" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 } Click Create . Go to the Topology view. Click on the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance. Add the dynamicPluginsConfigMapName field to your Backstage CR. For example: apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: my-rhdh spec: application: # ... dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Click Save . Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start. Click the Open URL icon to start using the Red Hat Developer Hub platform with the new configuration changes. Verification Ensure that the dynamic plugins configuration has been loaded, by appending /api/dynamic-plugins-info/loaded-plugins to your Red Hat Developer Hub root URL and checking the list of plugins: Example list of plugins [ { "name": "backstage-plugin-catalog-backend-module-github-dynamic", "version": "0.5.2", "platform": "node", "role": "backend-plugin-module" }, { "name": "backstage-plugin-techdocs", "version": "1.10.0", "role": "frontend-plugin", "platform": "web" }, { "name": "backstage-plugin-techdocs-backend-dynamic", "version": "1.9.5", "platform": "node", "role": "backend-plugin" }, ] | [
"kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic' disabled: false pluginConfig: catalog: providers: github: organization: \"${GITHUB_ORG}\" schedule: frequency: { minutes: 1 } timeout: { minutes: 1 } initialDelay: { seconds: 100 }",
"apiVersion: rhdh.redhat.com/v1alpha1 kind: Backstage metadata: name: my-rhdh spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh",
"[ { \"name\": \"backstage-plugin-catalog-backend-module-github-dynamic\", \"version\": \"0.5.2\", \"platform\": \"node\", \"role\": \"backend-plugin-module\" }, { \"name\": \"backstage-plugin-techdocs\", \"version\": \"1.10.0\", \"role\": \"frontend-plugin\", \"platform\": \"web\" }, { \"name\": \"backstage-plugin-techdocs-backend-dynamic\", \"version\": \"1.9.5\", \"platform\": \"node\", \"role\": \"backend-plugin\" }, ]"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/installing_and_viewing_dynamic_plugins/proc-config-dynamic-plugins-rhdh-operator_title-plugins-rhdh-about |
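For the verification step described above, the loaded-plugins endpoint can also be checked from a terminal. A minimal example, with the host name as a placeholder and jq used only to pretty-print the JSON response:

curl -s https://my-rhdh.example.com/api/dynamic-plugins-info/loaded-plugins | jq .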
Part II. Installing Red Hat Certificate System | Part II. Installing Red Hat Certificate System This section describes the requirements and procedures for installing Red Hat Certificate System. | null | https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/installing_rhcs |
Chapter 96. MyBatis | Chapter 96. MyBatis Since Camel 2.7 Both producer and consumer are supported The MyBatis component allows you to query, poll, insert, update and delete data in a relational database using MyBatis . 96.1. Dependencies When using mybatis with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency> 96.2. URI format Where statementName is the statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you choose to evaluate. You can append query options to the URI in the following format, ?option=value&option=value&... This component will by default load the MyBatis SqlMapConfig file from the root of the classpath with the expected name of SqlMapConfig.xml . If the file is located in another location, you will need to configure the configurationUri option on the MyBatisComponent component. 96.3. Configuring Options Camel components are configured on two separate levels: component level endpoint level 96.3.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 96.3.2. Configuring Endpoint Options Endpoints have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL and DataFormat DSL as a type safe way of configuring endpoints and data formats in Java. Use Property Placeholders to configure options that allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 96.4. Component Options The MyBatis component supports 5 options, which are listed below. Name Description Default Type configurationUri (common) Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean sqlSessionFactory (advanced) To use the SqlSessionFactory. SqlSessionFactory 96.5. Endpoint Options The MyBatis endpoint is configured using URI syntax: Following are the path and query parameters. 96.5.1. Path Parameters (1 parameters) Name Description Default Type statement (common) Required The statement name in the MyBatis XML mapping file which maps to the query, insert, update or delete operation you wish to evaluate. String 96.5.2. Query Parameters (30 parameters) Name Description Default Type maxMessagesPerPoll (consumer) This option is intended to split results returned by the database pool into the batches and deliver them in multiple exchanges. This integer defines the maximum messages to deliver in single exchange. By default, no maximum is set. Can be used to set a limit of e.g. 1000 to avoid when starting up the server that there are thousands of files. Set a value of 0 or negative to disable it. 0 int onConsume (consumer) Statement to run after data has been processed in the route. String routeEmptyResultSet (consumer) Whether allow empty resultset to be routed to the hop. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean transacted (consumer) Enables or disables transaction. If enabled then if processing an exchange failed then the consumer breaks out processing any further exchanges to cause a rollback eager. false boolean useIterator (consumer) Process resultset individually or as a list. true boolean bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. 
Enum values: * InOnly * InOut * InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy processingStrategy (consumer (advanced)) To use a custom MyBatisProcessingStrategy. MyBatisProcessingStrategy executorType (producer) The executor type to be used while executing statements. simple - executor does nothing special. reuse - executor reuses prepared statements. batch - executor reuses statements and batches updates. Enum values: * SIMPLE * REUSE * BATCH SIMPLE ExecutorType inputHeader (producer) User the header value for input parameters instead of the message body. By default, inputHeader == null and the input parameters are taken from the message body. If outputHeader is set, the value is used and query parameters will be taken from the header instead of the body. String outputHeader (producer) Store the query result in a header instead of the message body. By default, outputHeader == null and the query result is stored in the message body, any existing content in the message body is discarded. If outputHeader is set, the value is used as the name of the header to store the query result and the original message body is preserved. Setting outputHeader will also omit populating the default CamelMyBatisResult header since it would be the same as outputHeader all the time. String statementType (producer) Mandatory to specify for the producer to control which kind of operation to invoke. Enum values: * SelectOne * SelectList * Insert * InsertList * Update * UpdateList * Delete * DeleteList StatementType lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. 
A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: * TRACE * DEBUG * INFO * WARN * ERROR * OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS * MINUTES * HOURS * DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 96.6. Message Headers The MyBatis component supports 2 message headers that are listed below. Name Description Default Type CamelMyBatisResult (producer) Constant: MYBATIS_RESULT The response returned from MtBatis in any of the operations. For instance an INSERT could return the auto-generated key, or number of rows etc. Object CamelMyBatisStatementName (common) Constant: MYBATIS_STATEMENT_NAME The statementName used (for example: insertAccount). String 96.7. Message Body The response from MyBatis will only be set as the body if it is a SELECT statement. For example, for INSERT statements Camel will not replace the body. This allows you to continue routing and keep the original body. The response from MyBatis is always stored in the header with the key CamelMyBatisResult . 96.8. Samples For example if you wish to consume beans from a JMS queue and insert them into a database you could do the following: from("activemq:queue:newAccount") .to("mybatis:insertAccount?statementType=Insert"); You must specify the statementType as you need to instruct Camel which kind of operation to invoke. Where insertAccount is the MyBatis ID in the SQL mapping file: <!-- Insert example, using the Account parameter class --> <insert id="insertAccount" parameterType="Account"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert> 96.9. Using StatementType for better control of MyBatis When routing to an MyBatis endpoint you will want more fine grained control so you can control whether the SQL statement to be executed is a SELECT , UPDATE , DELETE or INSERT etc. So for instance if we want to route to an MyBatis endpoint in which the IN body contains parameters to a SELECT statement we can do: In the code above we can invoke the MyBatis statement selectAccountById and the IN body should contain the account id we want to retrieve, such as an Integer type. You can do the same for some of the other operations, such as SelectList : And the same for UPDATE , where you can send an Account object as the IN body to MyBatis: 96.9.1. Using InsertList StatementType MyBatis allows you to insert multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. 
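For illustration, a minimal sketch of such a mapper entry and the route that drives it, reusing the Account class from the earlier samples (the statement id, column names, and endpoint URIs here are illustrative, not taken verbatim from the MyBatis distribution):

<insert id="batchInsertAccount" parameterType="java.util.List">
  insert into ACCOUNT (ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL)
  values
  <foreach item="Account" collection="list" separator=",">
    (#{Account.id}, #{Account.firstName}, #{Account.lastName}, #{Account.emailAddress})
  </foreach>
</insert>

from("direct:start")
    .to("mybatis:batchInsertAccount?statementType=InsertList")
    .to("mock:result");

Sending a java.util.List of Account objects as the message body then inserts one row per element in a single batched statement.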
For example as shown below: Then you can insert multiple rows, by sending a Camel message to the mybatis endpoint which uses the InsertList statement type, as shown below: 96.9.2. Using UpdateList StatementType MyBatis allows you to update multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <update id="batchUpdateAccount" parameterType="java.util.Map"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item="Account" collection="list" open="(" close=")" separator=","> #{Account.id} </foreach> </update> Then you can update multiple rows, by sending a Camel message to the mybatis endpoint which uses the UpdateList statement type, as shown below: from("direct:start") .to("mybatis:batchUpdateAccount?statementType=UpdateList") .to("mock:result"); 96.9.3. Using DeleteList StatementType MyBatis allows you to delete multiple rows using its for-each batch driver. To use this, you need to use the <foreach> in the mapper XML file. For example as shown below: <delete id="batchDeleteAccountById" parameterType="java.util.List"> delete from ACCOUNT where ACC_ID in <foreach item="AccountID" collection="list" open="(" close=")" separator=","> #{AccountID} </foreach> </delete> Then you can delete multiple rows, by sending a Camel message to the mybatis endpoint which uses the DeleteList statement type, as shown below: from("direct:start") .to("mybatis:batchDeleteAccount?statementType=DeleteList") .to("mock:result"); 96.9.4. Notice on InsertList, UpdateList and DeleteList StatementTypes Parameter of any type (List, Map, etc.) can be passed to mybatis and an end user is responsible for handling it as required with the help of mybatis dynamic queries capabilities. 96.9.5. cheduled polling example This component supports scheduled polling and can therefore be used as a Polling Consumer. For example to poll the database every minute: from("mybatis:selectAllAccounts?delay=60000") .to("activemq:queue:allAccounts"); See "ScheduledPollConsumer Options" on Polling Consumer for more options. Alternatively you can use another mechanism for triggering the scheduled polls, such as the Timer or Quartz components. In the sample below we poll the database, every 30 seconds using the Timer component and send the data to the JMS queue: from("timer://pollTheDatabase?delay=30000") .to("mybatis:selectAllAccounts") .to("activemq:queue:allAccounts"); And the MyBatis SQL mapping file used: <!-- Select with no parameters using the result map for Account class. --> <select id="selectAllAccounts" resultMap="AccountResult"> select * from ACCOUNT </select> 96.9.6. Using onConsume This component supports executing statements after data have been consumed and processed by Camel. This allows you to do post updates in the database. Notice all statements must be UPDATE statements. Camel supports executing multiple statements whose names should be separated by commas. The route below illustrates we execute the consumeAccount statement data is processed. This allows us to change the status of the row in the database to processed, so we avoid consuming it twice or more. And the statements in the sqlmap file: 96.9.7. Participating in transactions Setting up a transaction manager under camel-mybatis can be a little bit fiddly, as it involves externalizing the database configuration outside the standard MyBatis SqlMapConfig.xml file. The first part requires the setup of a DataSource . 
This is typically a pool (either DBCP, or c3p0), which needs to be wrapped in a Spring proxy. This proxy enables non-Spring use of the DataSource to participate in Spring transactions (the MyBatis SqlSessionFactory does just this). <bean id="dataSource" class="org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy"> <constructor-arg> <bean class="com.mchange.v2.c3p0.ComboPooledDataSource"> <property name="driverClass" value="org.postgresql.Driver"/> <property name="jdbcUrl" value="jdbc:postgresql://localhost:5432/myDatabase"/> <property name="user" value="myUser"/> <property name="password" value="myPassword"/> </bean> </constructor-arg> </bean> This has the additional benefit of enabling the database configuration to be externalized using property placeholders. A transaction manager is then configured to manage the outermost DataSource : <bean id="txManager" class="org.springframework.jdbc.datasource.DataSourceTransactionManager"> <property name="dataSource" ref="dataSource"/> </bean> A mybatis-spring SqlSessionFactoryBean then wraps that same DataSource : <bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean"> <property name="dataSource" ref="dataSource"/> <!-- standard mybatis config file --> <property name="configLocation" value="/META-INF/SqlMapConfig.xml"/> <!-- externalised mappers --> <property name="mapperLocations" value="classpath*:META-INF/mappers/**/*.xml"/> </bean> The camel-mybatis component is then configured with that factory: <bean id="mybatis" class="org.apache.camel.component.mybatis.MyBatisComponent"> <property name="sqlSessionFactory" ref="sqlSessionFactory"/> </bean> Finally, a transaction policy is defined over the top of the transaction manager, which can then be used as usual: <bean id="PROPAGATION_REQUIRED" class="org.apache.camel.spring.spi.SpringTransactionPolicy"> <property name="transactionManager" ref="txManager"/> <property name="propagationBehaviorName" value="PROPAGATION_REQUIRED"/> </bean> <camelContext id="my-model-context" xmlns="http://camel.apache.org/schema/spring"> <route id="insertModel"> <from uri="direct:insert"/> <transacted ref="PROPAGATION_REQUIRED"/> <to uri="mybatis:myModel.insert?statementType=Insert"/> </route> </camelContext> 96.10. MyBatis Spring Boot Starter integration Spring Boot users can use mybatis-spring-boot-starter artifact provided by the mybatis team <dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency> in particular AutoConfigured beans from mybatis-spring-boot-starter can be used as follow: #application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory 96.11. Spring Boot Auto-Configuration The component supports 11 options, which are listed below. Name Description Default Type camel.component.mybatis-bean.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis-bean.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis-bean.enabled Whether to enable auto configuration of the mybatis-bean component. 
This is enabled by default. Boolean camel.component.mybatis-bean.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis-bean.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory camel.component.mybatis.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.mybatis.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.mybatis.configuration-uri Location of MyBatis xml configuration file. The default value is: SqlMapConfig.xml loaded from the classpath. SqlMapConfig.xml String camel.component.mybatis.enabled Whether to enable auto configuration of the mybatis component. This is enabled by default. Boolean camel.component.mybatis.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.mybatis.sql-session-factory To use the SqlSessionFactory. The option is a org.apache.ibatis.session.SqlSessionFactory type. SqlSessionFactory | [
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-mybatis-starter</artifactId> </dependency>",
"mybatis:statementName[?options]",
"mybatis:statement",
"from(\"activemq:queue:newAccount\") .to(\"mybatis:insertAccount?statementType=Insert\");",
"<!-- Insert example, using the Account parameter class --> <insert id=\"insertAccount\" parameterType=\"Account\"> insert into ACCOUNT ( ACC_ID, ACC_FIRST_NAME, ACC_LAST_NAME, ACC_EMAIL ) values ( #{id}, #{firstName}, #{lastName}, #{emailAddress} ) </insert>",
"<update id=\"batchUpdateAccount\" parameterType=\"java.util.Map\"> update ACCOUNT set ACC_EMAIL = #{emailAddress} where ACC_ID in <foreach item=\"Account\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{Account.id} </foreach> </update>",
"from(\"direct:start\") .to(\"mybatis:batchUpdateAccount?statementType=UpdateList\") .to(\"mock:result\");",
"<delete id=\"batchDeleteAccountById\" parameterType=\"java.util.List\"> delete from ACCOUNT where ACC_ID in <foreach item=\"AccountID\" collection=\"list\" open=\"(\" close=\")\" separator=\",\"> #{AccountID} </foreach> </delete>",
"from(\"direct:start\") .to(\"mybatis:batchDeleteAccount?statementType=DeleteList\") .to(\"mock:result\");",
"from(\"mybatis:selectAllAccounts?delay=60000\") .to(\"activemq:queue:allAccounts\");",
"from(\"timer://pollTheDatabase?delay=30000\") .to(\"mybatis:selectAllAccounts\") .to(\"activemq:queue:allAccounts\");",
"<!-- Select with no parameters using the result map for Account class. --> <select id=\"selectAllAccounts\" resultMap=\"AccountResult\"> select * from ACCOUNT </select>",
"<bean id=\"dataSource\" class=\"org.springframework.jdbc.datasource.TransactionAwareDataSourceProxy\"> <constructor-arg> <bean class=\"com.mchange.v2.c3p0.ComboPooledDataSource\"> <property name=\"driverClass\" value=\"org.postgresql.Driver\"/> <property name=\"jdbcUrl\" value=\"jdbc:postgresql://localhost:5432/myDatabase\"/> <property name=\"user\" value=\"myUser\"/> <property name=\"password\" value=\"myPassword\"/> </bean> </constructor-arg> </bean>",
"<bean id=\"txManager\" class=\"org.springframework.jdbc.datasource.DataSourceTransactionManager\"> <property name=\"dataSource\" ref=\"dataSource\"/> </bean>",
"<bean id=\"sqlSessionFactory\" class=\"org.mybatis.spring.SqlSessionFactoryBean\"> <property name=\"dataSource\" ref=\"dataSource\"/> <!-- standard mybatis config file --> <property name=\"configLocation\" value=\"/META-INF/SqlMapConfig.xml\"/> <!-- externalised mappers --> <property name=\"mapperLocations\" value=\"classpath*:META-INF/mappers/**/*.xml\"/> </bean>",
"<bean id=\"mybatis\" class=\"org.apache.camel.component.mybatis.MyBatisComponent\"> <property name=\"sqlSessionFactory\" ref=\"sqlSessionFactory\"/> </bean>",
"<bean id=\"PROPAGATION_REQUIRED\" class=\"org.apache.camel.spring.spi.SpringTransactionPolicy\"> <property name=\"transactionManager\" ref=\"txManager\"/> <property name=\"propagationBehaviorName\" value=\"PROPAGATION_REQUIRED\"/> </bean> <camelContext id=\"my-model-context\" xmlns=\"http://camel.apache.org/schema/spring\"> <route id=\"insertModel\"> <from uri=\"direct:insert\"/> <transacted ref=\"PROPAGATION_REQUIRED\"/> <to uri=\"mybatis:myModel.insert?statementType=Insert\"/> </route> </camelContext>",
"<dependency> <groupId>org.mybatis.spring.boot</groupId> <artifactId>mybatis-spring-boot-starter</artifactId> <version>2.3.0</version> </dependency>",
"#application.properties camel.component.mybatis.sql-session-factory=#sqlSessionFactory"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-mybatis-component |
Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services | Chapter 3. Deploying a Red Hat Enterprise Linux image as an EC2 instance on Amazon Web Services To set up a High Availability (HA) deployment of RHEL on Amazon Web Services (AWS), you can deploy EC2 instances of RHEL to a cluster on AWS. Important While you can create a custom VM from an ISO image, Red Hat recommends that you use the Red Hat Image Builder product to create customized images for use on specific cloud providers. With Image Builder, you can create and upload an Amazon Machine Image (AMI) in the ami format. See Composing a Customized RHEL System Image for more information. Note For a list of Red Hat products that you can use securely on AWS, see Red Hat on Amazon Web Services . Prerequisites Sign up for a Red Hat Customer Portal account. Sign up for AWS and set up your AWS resources. See Setting Up with Amazon EC2 for more information. 3.1. Red Hat Enterprise Linux Image options on AWS The following table lists image choices and notes the differences in the image options. Table 3.1. Image options Image option Subscriptions Sample scenario Considerations Deploy a Red Hat Gold Image. Use your existing Red Hat subscriptions. Select a Red Hat Gold Image on AWS. For details on Gold Images and how to access them on Azure, see the Red Hat Cloud Access Reference Guide . The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for Cloud Access images. Deploy a custom image that you move to AWS. Use your existing Red Hat subscriptions. Upload your custom image, and attach your subscriptions. The subscription includes the Red Hat product cost; you pay Amazon for all other instance costs. Red Hat provides support directly for custom RHEL images. Deploy an existing Amazon image that includes RHEL. The AWS EC2 images include a Red Hat product. Select a RHEL image when you launch an instance on the AWS Management Console , or choose an image from the AWS Marketplace . You pay Amazon hourly on a pay-as-you-go model. Such images are called "on-demand" images. Amazon provides support for on-demand images. Red Hat provides updates to the images. AWS makes the updates available through the Red Hat Update Infrastructure (RHUI). Note You can create a custom image for AWS by using Red Hat Image Builder. See Composing a Customized RHEL System Image for more information. Important You cannot convert an on-demand instance to a custom RHEL instance. To change from an on-demand image to a custom RHEL bring-your-own-subscription (BYOS) image: Create a new custom RHEL instance and migrate data from your on-demand instance. Cancel your on-demand instance after you migrate your data to avoid double billing. Additional resources Composing a Customized RHEL System Image AWS Management Console AWS Marketplace 3.2. Understanding base images To create a base VM from an ISO image, you can use preconfigured base images and their configuration settings. 3.2.1. Using a custom base image To manually configure a virtual machine (VM), first create a base (starter) VM image. Then, you can modify configuration settings and add the packages the VM requires to operate on the cloud. You can make additional configuration changes for your specific application after you upload the image. Additional resources Red Hat Enterprise Linux 3.2.2. Virtual machine configuration settings Cloud VMs must have the following configuration settings. Table 3.2. 
VM configuration settings Setting Recommendation ssh ssh must be enabled to provide remote access to your VMs. dhcp The primary virtual adapter should be configured for dhcp. 3.3. Creating a base VM from an ISO image To create a RHEL 9 base image from an ISO image, enable your host machine for virtualization and create a RHEL virtual machine (VM). Prerequisites Virtualization is enabled on your host machine. You have downloaded the latest Red Hat Enterprise Linux ISO image from the Red Hat Customer Portal and moved the image to /var/lib/libvirt/images . 3.3.1. Creating a VM from the RHEL ISO image Procedure Ensure that you have enabled your host machine for virtualization. See Enabling virtualization in RHEL 9 for information and procedures. Create and start a basic Red Hat Enterprise Linux VM. For instructions, see Creating virtual machines . If you use the command line to create your VM, ensure that you set the default memory and CPUs to the capacity you want for the VM. Set your virtual network interface to virtio . For example, the following command creates a kvmtest VM by using the /home/username/Downloads/rhel9.iso image: If you use the web console to create your VM, follow the procedure in Creating virtual machines by using the web console , with these caveats: Do not check Immediately Start VM . Change your Memory size to your preferred settings. Before you start the installation, ensure that you have changed Model under Virtual Network Interface Settings to virtio and change your vCPUs to the capacity settings you want for the VM. 3.3.2. Completing the RHEL installation To finish the installation of a RHEL system that you want to deploy on Amazon Web Services (AWS), customize the Installation Summary view, begin the installation, and enable root access once the VM launches. Procedure Choose the language you want to use during the installation process. On the Installation Summary view: Click Software Selection and check Minimal Install . Click Done . Click Installation Destination and check Custom under Storage Configuration . Verify at least 500 MB for /boot . You can use the remaining space for root / . Standard partitions are recommended, but you can use Logical Volume Manager (LVM). You can use xfs, ext4, or ext3 for the file system. Click Done when you are finished with changes. Click Begin Installation . Set a Root Password . Create other users as applicable. Reboot the VM and log in as root once the installation completes. Configure the image. Register the VM and enable the Red Hat Enterprise Linux 9 repository. Ensure that the cloud-init package is installed and enabled. Important: This step is only for VMs you intend to upload to AWS. For AMD64 or Intel 64 (x86_64)VMs, install the nvme , xen-netfront , and xen-blkfront drivers. For ARM 64 (aarch64) VMs, install the nvme driver. Including these drivers removes the possibility of a dracut time-out. Alternatively, you can add the drivers to /etc/dracut.conf.d/ and then enter dracut -f to overwrite the existing initramfs file. Power down the VM. Additional resources Introduction to cloud-init 3.4. Uploading the Red Hat Enterprise Linux image to AWS To be able to run a RHEL instance on Amazon Web Services (AWS), you must first upload your RHEL image to AWS. 3.4.1. Installing the AWS CLI Many of the procedures required to manage HA clusters in AWS include using the AWS CLI. Prerequisites You have created an AWS Access Key ID and an AWS Secret Access Key, and have access to them. 
For instructions and details, see Quickly Configuring the AWS CLI . Procedure Install the AWS command line tools by using the dnf command. Use the aws --version command to verify that you installed the AWS CLI. Configure the AWS command line client according to your AWS access details. Additional resources Quickly Configuring the AWS CLI AWS command line tools 3.4.2. Creating an S3 bucket Importing to AWS requires an Amazon S3 bucket. An Amazon S3 bucket is an Amazon resource where you store objects. As part of the process for uploading your image, you need to create an S3 bucket and then move your image to the bucket. Procedure Launch the Amazon S3 Console . Click Create Bucket . The Create Bucket dialog appears. In the Name and region view: Enter a Bucket name . Enter a Region . Click . In the Configure options view, select the desired options and click . In the Set permissions view, change or accept the default options and click . Review your bucket configuration. Click Create bucket . Note Alternatively, you can use the AWS CLI to create a bucket. For example, the aws s3 mb s3://my-new-bucket command creates an S3 bucket named my-new-bucket . See the AWS CLI Command Reference for more information about the mb command. Additional resources Amazon S3 Console AWS CLI Command Reference 3.4.3. Creating the vmimport role To be able to import a RHEL virtual machine (VM) to Amazon Web Services (AWS) by using the VM Import service, you need to create the vmimport role. For more information, see Importing a VM as an image using VM Import/Export in the Amazon documentation. Procedure Create a file named trust-policy.json and include the following policy. Save the file on your system and note its location. Use the create role command to create the vmimport role. Specify the full path to the location of the trust-policy.json file. Prefix file:// to the path. For example: Create a file named role-policy.json and include the following policy. Replace s3-bucket-name with the name of your S3 bucket. Use the put-role-policy command to attach the policy to the role you created. Specify the full path of the role-policy.json file. For example: Additional resources VM Import Service Role Required Service Role 3.4.4. Converting and pushing your image to S3 By using the qemu-img command, you can convert your image, so that you can push it to S3. The samples are representative; they convert an image formatted in the qcow2 file format to raw format. Amazon accepts images in OVA , VHD , VHDX , VMDK , and raw formats. See How VM Import/Export Works for more information about image formats that Amazon accepts. Procedure Run the qemu-img command to convert your image. For example: Push the image to S3. Note This procedure could take a few minutes. After completion, you can check that your image uploaded successfully to your S3 bucket by using the AWS S3 Console . Additional resources How VM Import/Export Works AWS S3 Console 3.4.5. Importing your image as a snapshot To launch a RHEL instance in the Amazon Elastic Cloud Compute (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you must first upload a snapshot of your RHEL system image to EC2. Procedure Create a file to specify a bucket and path for your image. Name the file containers.json . In the sample that follows, replace s3-bucket-name with your bucket name and s3-key with your key. You can get the key for the image by using the Amazon S3 Console. Import the image as a snapshot. 
This example uses a public Amazon S3 file; you can use the Amazon S3 Console to change permissions settings on your bucket. The terminal displays a message such as the following. Note the ImportTaskID within the message. Track the progress of the import by using the describe-import-snapshot-tasks command. Include the ImportTaskID . The returned message shows the current status of the task. When complete, Status shows completed . Within the status, note the snapshot ID. Additional resources Amazon S3 Console Importing a Disk as a Snapshot Using VM Import/Export 3.4.6. Creating an AMI from the uploaded snapshot To launch a RHEL instance in Amazon Elastic Cloud Compute (EC2) service, you require an Amazon Machine Image (AMI). To create an AMI of your system, you can use a RHEL system snapshot that you previously uploaded. Procedure Go to the AWS EC2 Dashboard. Under Elastic Block Store , select Snapshots . Search for your snapshot ID (for example, snap-0e718930bd72bcda0 ). Right-click on the snapshot and select Create image . Name your image. Under Virtualization type , choose Hardware-assisted virtualization . Click Create . In the note regarding image creation, there is a link to your image. Click on the image link. Your image shows up under Images>AMIs . Note Alternatively, you can use the AWS CLI register-image command to create an AMI from a snapshot. See register-image for more information. An example follows. You must specify the root device volume /dev/sda1 as your root-device-name . For conceptual information about device mapping for AWS, see Example block device mapping . 3.4.7. Launching an instance from the AMI To launch and configure an Amazon Elastic Compute Cloud (EC2) instance, use an Amazon Machine Image (AMI). Procedure From the AWS EC2 Dashboard, select Images and then AMIs . Right-click on your image and select Launch . Choose an Instance Type that meets or exceeds the requirements of your workload. See Amazon EC2 Instance Types for information about instance types. Click : Configure Instance Details . Enter the Number of instances you want to create. For Network , select the VPC you created when setting up your AWS environment . Select a subnet for the instance or create a new subnet. Select Enable for Auto-assign Public IP. Note These are the minimum configuration options necessary to create a basic instance. Review additional options based on your application requirements. Click : Add Storage . Verify that the default storage is sufficient. Click : Add Tags . Note Tags can help you manage your AWS resources. See Tagging Your Amazon EC2 Resources for information about tagging. Click : Configure Security Group . Select the security group you created when setting up your AWS environment . Click Review and Launch . Verify your selections. Click Launch . You are prompted to select an existing key pair or create a new key pair. Select the key pair you created when setting up your AWS environment . Note Verify that the permissions for your private key are correct. Use the command options chmod 400 <keyname>.pem to change the permissions, if necessary. Click Launch Instances . Click View Instances . You can name the instance(s). You can now launch an SSH session to your instance(s) by selecting an instance and clicking Connect . Use the example provided for A standalone SSH client . Note Alternatively, you can launch an instance by using the AWS CLI. See Launching, Listing, and Terminating Amazon EC2 Instances in the Amazon documentation for more information. 
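The note above points to the AWS CLI as an alternative to the console. A minimal sketch of launching an instance that way, with every value shown as a placeholder for the AMI, instance type, key pair, security group, and subnet created earlier:
aws ec2 run-instances --image-id <ami-id> --instance-type <instance-type> --key-name <key-pair-name> --security-group-ids <security-group-id> --subnet-id <subnet-id> --associate-public-ip-address --count 1
After the instance reaches the running state, you can look up its public IP address with aws ec2 describe-instances before connecting over SSH.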
Additional resources AWS Management Console Setting Up with Amazon EC2 Amazon EC2 Instances Amazon EC2 Instance Types 3.4.8. Attaching Red Hat subscriptions Using the subscription-manager command, you can register and attach your Red Hat subscription to a RHEL instance. Prerequisites You must have enabled your subscriptions. Procedure Register your system. Attach your subscriptions. You can use an activation key to attach subscriptions. See Creating Red Hat Customer Portal Activation Keys for more information. Alternatively, you can manually attach a subscription by using the ID of the subscription pool (Pool ID). See Attaching a host-based subscription to hypervisors . Optional: To collect various system metrics about the instance in the Red Hat Hybrid Cloud Console , you can register the instance with Red Hat Insights . For information on further configuration of Red Hat Insights, see Client Configuration Guide for Red Hat Insights . Additional resources Creating Red Hat Customer Portal Activation Keys Attaching a host-based subscription to hypervisors Client Configuration Guide for Red Hat Insights 3.4.9. Setting up automatic registration on AWS Gold Images To make deploying RHEL 9 virtual machines on Amazon Web Services (AWS) faster and more comfortable, you can set up Gold Images of RHEL 9 to be automatically registered to the Red Hat Subscription Manager (RHSM). Prerequisites You have downloaded the latest RHEL 9 Gold Image for AWS. For instructions, see Using Gold Images on AWS . Note An AWS account can only be attached to a single Red Hat account at a time. Therefore, ensure no other users require access to the AWS account before attaching it to your Red Hat one. Procedure Upload the Gold Image to AWS. For instructions, see Uploading the Red Hat Enterprise Linux image to AWS . Create VMs by using the uploaded image. They will be automatically subscribed to RHSM. Verification In a RHEL 9 VM created using the above instructions, verify the system is registered to RHSM by executing the subscription-manager identity command. On a successfully registered system, this displays the UUID of the system. For example: Additional resources AWS Management Console Adding cloud integrations to the Hybrid Cloud Console 3.5. Additional resources Red Hat Cloud Access Reference Guide Red Hat in the Public Cloud Red Hat Enterprise Linux on Amazon EC2 - FAQs Setting Up with Amazon EC2 Red Hat on Amazon Web Services | [
"virt-install --name kvmtest --memory 2048 --vcpus 2 --cdrom /home/username/Downloads/rhel9.iso,bus=virtio --os-variant=rhel9.0",
"subscription-manager register --auto-attach",
"dnf install cloud-init systemctl enable --now cloud-init.service",
"dracut -f --add-drivers \"nvme xen-netfront xen-blkfront\"",
"dracut -f --add-drivers \"nvme\"",
"dnf install awscli",
"aws --version aws-cli/1.19.77 Python/3.6.15 Linux/5.14.16-201.fc34.x86_64 botocore/1.20.77",
"aws configure AWS Access Key ID [None]: AWS Secret Access Key [None]: Default region name [None]: Default output format [None]:",
"{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }",
"aws iam create-role --role-name vmimport --assume-role-policy-document file:///home/sample/ImportService/trust-policy.json",
"{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::s3-bucket-name\", \"arn:aws:s3:::s3-bucket-name/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }",
"aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file:///home/sample/ImportService/role-policy.json",
"qemu-img convert -f qcow2 -O raw rhel-9.0-sample.qcow2 rhel-9.0-sample.raw",
"aws s3 cp rhel-9.0-sample.raw s3://s3-bucket-name",
"{ \"Description\": \"rhel-9.0-sample.raw\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"s3-key\" } }",
"aws ec2 import-snapshot --disk-container file://containers.json",
"{ \"SnapshotTaskDetail\": { \"Status\": \"active\", \"Format\": \"RAW\", \"DiskImageSize\": 0.0, \"UserBucket\": { \"S3Bucket\": \"s3-bucket-name\", \"S3Key\": \"rhel-9.0-sample.raw\" }, \"Progress\": \"3\", \"StatusMessage\": \"pending\" }, \"ImportTaskId\": \"import-snap-06cea01fa0f1166a8\" }",
"aws ec2 describe-import-snapshot-tasks --import-task-ids import-snap-06cea01fa0f1166a8",
"aws ec2 register-image --name \"myimagename\" --description \"myimagedescription\" --architecture x86_64 --virtualization-type hvm --root-device-name \"/dev/sda1\" --ena-support --block-device-mappings \"{\\\"DeviceName\\\": \\\"/dev/sda1\\\",\\\"Ebs\\\": {\\\"SnapshotId\\\": \\\"snap-0ce7f009b69ab274d\\\"}}\"",
"subscription-manager register --auto-attach",
"insights-client register --display-name <display-name-value>",
"subscription-manager identity system identity: fdc46662-c536-43fb-a18a-bbcb283102b7 name: 192.168.122.222 org name: 6340056 org ID: 6340056"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/deploying_rhel_9_on_amazon_web_services/assembly_deploying-a-virtual-machine-on-aws_cloud-content-aws |
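Section 3.4.8 above also mentions attaching a subscription manually by its pool ID instead of relying on --auto-attach. A minimal sketch of that alternative, with the pool ID left as a placeholder:
subscription-manager list --available
subscription-manager attach --pool=<pool-id>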
Creating a custom LLM using RHEL AI | Creating a custom LLM using RHEL AI Red Hat Enterprise Linux AI 1.1 Creating files for customizing LLMs and running the end-to-end workflow Red Hat RHEL AI Documentation Team | [
"Phoenix (constellation) **Phoenix** is a minor [constellation](constellation \"wikilink\") in the [southern sky](southern_sky \"wikilink\"). Named after the mythical [phoenix](Phoenix_(mythology) \"wikilink\"), it was first depicted on a celestial atlas by [Johann Bayer](Johann_Bayer \"wikilink\") in his 1603 *[Uranometria](Uranometria \"wikilink\")*. The French explorer and astronomer [Nicolas Louis de Lacaille](Nicolas_Louis_de_Lacaille \"wikilink\") charted the brighter stars and gave their [Bayer designations](Bayer_designation \"wikilink\") in 1756. The constellation stretches from roughly −39 degrees to −57 degrees [declination](declination \"wikilink\"), and from 23.5h to 2.5h of [right ascension](right_ascension \"wikilink\"). The constellations Phoenix, [Grus](Grus_(constellation) \"wikilink\"), [Pavo](Pavo_(constellation) \"wikilink\") and [Tucana](Tucana \"wikilink\"), are known as the Southern Birds. The brightest star, [Alpha Phoenicis](Alpha_Phoenicis \"wikilink\"), is named Ankaa, an [Arabic](Arabic \"wikilink\") word meaning 'the Phoenix'. It is an orange giant of apparent magnitude 2.4. Next is [Beta Phoenicis](Beta_Phoenicis \"wikilink\"), actually a [binary](Binary_star \"wikilink\") system composed of two yellow giants with a combined apparent magnitude of 3.3. [Nu Phoenicis](Nu_Phoenicis \"wikilink\") has a dust disk, while the constellation has ten star systems with known planets and the recently discovered [galaxy clusters](galaxy_cluster \"wikilink\") [El Gordo](El_Gordo_(galaxy_cluster) \"wikilink\") and the [Phoenix Cluster](Phoenix_Cluster \"wikilink\")-located 7.2 and 5.7 billion light years away respectively, two of the largest objects in the [visible universe](visible_universe \"wikilink\"). Phoenix is the [radiant](radiant_(meteor_shower) \"wikilink\") of two annual [meteor showers](meteor_shower \"wikilink\"): the [Phoenicids](Phoenicids \"wikilink\") in December, and the July Phoenicids. ## History Phoenix was the largest of the 12 constellations established by [Petrus Plancius](Petrus_Plancius \"wikilink\") from the observations of [Pieter Dirkszoon Keyser](Pieter_Dirkszoon_Keyser \"wikilink\") and [Frederick de Houtman](Frederick_de_Houtman \"wikilink\"). It first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with [Jodocus Hondius](Jodocus_Hondius \"wikilink\"). The first depiction of this constellation in a celestial atlas was in [Johann Bayer](Johann_Bayer \"wikilink\")'s *[Uranometria](Uranometria \"wikilink\")* of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name *Den voghel Fenicx*, \"The Bird Phoenix\", symbolising the [phoenix](Phoenix_(mythology) \"wikilink\") of classical mythology. One name of the brightest star [Alpha Phoenicis](Alpha_Phoenicis \"wikilink\")-Ankaa-is derived from the Arabic: ?l`nq??, romanized: al-'anqa', lit. 'the phoenix', and was coined sometime after 1800 in relation to the constellation. Celestial historian Richard Allen noted that unlike the other constellations introduced by Plancius and [La Caille](La_Caille \"wikilink\"), Phoenix has actual precedent in ancient astronomy, as the Arabs saw this formation as representing young ostriches, *Al Ri'al*, or as a griffin or eagle. In addition, the same group of stars was sometimes imagined by the Arabs as a boat, *Al Zaurak*, on the nearby river Eridanus. 
He observed, \"the introduction of a Phoenix into modern astronomy was, in a measure, by adoption rather than by invention.\" The Chinese incorporated Phoenix's brightest star, Ankaa (Alpha Phoenicis), and stars from the adjacent constellation [Sculptor](Sculptor_(constellation) \"wikilink\") to depict *Bakui*, a net for catching birds. Phoenix and the neighbouring constellation of [Grus](Grus_(constellation) \"wikilink\") together were seen by [Julius Schiller](Julius_Schiller \"wikilink\") as portraying [Aaron](Aaron \"wikilink\") the High Priest. These two constellations, along with nearby [Pavo](Pavo_(constellation) \"wikilink\") and [Tucana](Tucana \"wikilink\"), are called the Southern Birds. ## Characteristics Phoenix is a small constellation bordered by [Fornax](Fornax \"wikilink\") and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of [Hydrus](Hydrus \"wikilink\") to the south, and [Eridanus](Eridanus_(constellation) \"wikilink\") to the east and southeast. The bright star [Achernar](Achernar \"wikilink\") is nearby. The three-letter abbreviation for the constellation, as adopted by the [International Astronomical Union](International_Astronomical_Union \"wikilink\") in 1922, is \"Phe\". The official constellation boundaries, as set by Belgian astronomer [Eugene Delporte](Eugene_Joseph_Delporte \"wikilink\") in 1930, are defined by a polygon of 10 segments. In the [equatorial coordinate system](equatorial_coordinate_system \"wikilink\"), the [right ascension](right_ascension \"wikilink\") coordinates of these borders lie between 23<sup>h</sup> 26.5<sup>m</sup> and 02<sup>h</sup> 25.0<sup>m</sup>, while the [declination](declination \"wikilink\") coordinates are between −39.31deg and −57.84deg. This means it remains below the horizon to anyone living north of the [40th parallel](40th_parallel_north \"wikilink\") in the [Northern Hemisphere](Northern_Hemisphere \"wikilink\"), and remains low in the sky for anyone living north of the [equator](equator \"wikilink\"). It is most visible from locations such as Australia and South Africa during late [Southern Hemisphere](Southern_Hemisphere \"wikilink\") spring. Most of the constellation lies within, and can be located by, forming a triangle of the bright stars Achernar, [Fomalhaut](Fomalhaut \"wikilink\") and [Beta Ceti](Beta_Ceti \"wikilink\")-Ankaa lies roughly in the centre of this.",
"taxonomy/knowledge/technical_documents/product_customer_cases/qna.yaml",
"ilab taxonomy diff",
"knowledge/technical_documents/product_customer_cases/qna.yaml Taxonomy in /taxonomy/ is valid :)",
"9:15 error syntax error: mapping values are not allowed here (syntax) Reading taxonomy failed with the following error: 1 taxonomy with errors! Exiting.",
"version: 3 1 domain: astronomy 2 created_by: <user-name> 3 seed_examples: - context: | 4 **Phoenix** is a minor [constellation](constellation \"wikilink\") in the [southern sky](southern_sky \"wikilink\"). Named after the mythical [phoenix](Phoenix_(mythology) \"wikilink\"), it was first depicted on a celestial atlas by [Johann Bayer](Johann_Bayer \"wikilink\") in his 1603 *[Uranometria](Uranometria \"wikilink\")*. The French explorer and astronomer [Nicolas Louis de Lacaille](Nicolas_Louis_de_Lacaille \"wikilink\") charted the brighter stars and gave their [Bayer designations](Bayer_designation \"wikilink\") in 1756. The constellation stretches from roughly −39 degrees to −57 degrees [declination](declination \"wikilink\"), and from 23.5h to 2.5h of [right ascension](right_ascension \"wikilink\"). The constellations Phoenix, [Grus](Grus_(constellation) \"wikilink\"), [Pavo](Pavo_(constellation) \"wikilink\") and [Tucana](Tucana \"wikilink\"), are known as the Southern Birds. Birds. questions_and_answers: - question: | 5 What is the Phoenix constellation? answer: | 6 Phoenix is a minor constellation in the southern sky. - question: | Who charted the Phoenix constellation? answer: | The Phoenix constellation was charted by french explorer and astronomer Nicolas Louis de Lacaille. - question: | How far does the Phoenix constellation stretch? answer: | The phoenix constellation stretches from roughly −39deg to −57deg declination, and from 23.5h to 2.5h of right ascension. - context: | Phoenix was the largest of the 12 constellations established by [Petrus Plancius](Petrus_Plancius \"wikilink\") from the observations of [Pieter Dirkszoon Keyser](Pieter_Dirkszoon_Keyser \"wikilink\") and [Frederick de Houtman](Frederick_de_Houtman \"wikilink\"). It first appeared on a 35cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with [Jodocus Hondius](Jodocus_Hondius \"wikilink\"). The first depiction of this constellation in a celestial atlas was in [Johann Bayer](Johann_Bayer \"wikilink\")'s *[Uranometria](Uranometria \"wikilink\")* of 1603. De Houtman included it in his southern star catalog the same year under the Dutch name *Den voghel Fenicx*, \"The Bird Phoenix\", symbolising the [phoenix](Phoenix_(mythology) \"wikilink\") of classical mythology. One name of the brightest star [Alpha Phoenicis](Alpha_Phoenicis \"wikilink\")-Ankaa-is derived from the Arabic: ?l`nq??, romanized: al-'anqa', lit. 'the phoenix', and was coined sometime after 1800 in relation to the constellation. questions_and_answers: - question: | What is the brightest star in the Phoenix constellation called? answer: | Alpha Phoenicis or Ankaa is the brightest star in the Phoenix Constellation. - question: Where did the Phoenix constellation first appear? answer: | The Phoenix constellation first appeared on a 35-cm diameter celestial globe published in 1597 (or 1598) in Amsterdam by Plancius with Jodocus Hondius. - question: | What does \"The Bird Phoenix\" symbolize? answer: | \"The Bird Phoenix\" symbolizes the phoenix of classical mythology. - context: | Phoenix is a small constellation bordered by [Fornax](Fornax \"wikilink\") and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of [Hydrus](Hydrus \"wikilink\") to the south, and [Eridanus](Eridanus_(constellation) \"wikilink\") to the east and southeast. The bright star [Achernar](Achernar \"wikilink\") is nearby. 
The three-letter abbreviation for the constellation, as adopted by the [International Astronomical Union](International_Astronomical_Union \"wikilink\") in 1922, is \"Phe\". The official constellation boundaries, as set by Belgian astronomer [Eugene Delporte](Eugene_Joseph_Delporte \"wikilink\") in 1930, are defined by a polygon of 10 segments. In the [equatorial coordinate system](equatorial_coordinate_system \"wikilink\"), the [right ascension](right_ascension \"wikilink\") coordinates of these borders lie between 23<sup>h</sup> 26.5<sup>m</sup> and 02<sup>h</sup> 25.0<sup>m</sup>, while the [declination](declination \"wikilink\") coordinates are between −39.31deg and −57.84deg. This means it remains below the horizon to anyone living north of the [40th parallel](40th_parallel_north \"wikilink\") in the [Northern Hemisphere](Northern_Hemisphere \"wikilink\"), and remains low in the sky for anyone living north of the [equator](equator \"wikilink\"). It is most visible from locations such as Australia and South Africa during late [Southern Hemisphere](Southern_Hemisphere \"wikilink\") spring. Most of the constellation lies within, and can be located by, forming a triangle of the bright stars Achernar, [Fomalhaut](Fomalhaut \"wikilink\") and [Beta Ceti](Beta_Ceti \"wikilink\")-Ankaa lies roughly in the centre of this. questions_and_answers: - question: What are the characteristics of the Phoenix constellation? answer: | Phoenix is a small constellation bordered by Fornax and Sculptor to the north, Grus to the west, Tucana to the south, touching on the corner of Hydrus to the south, and Eridanus to the east and southeast. The bright star Achernar is nearby. - question: | When is the phoenix constellation most visible? answer: | Phoenix is most visible from locations such as Australia and South Africa during late Southern Hemisphere spring. - question: | What are the Phoenix Constellation boundaries? answer: | The official constellation boundaries for Phoenix, as set by Belgian astronomer Eugene Delporte in 1930, are defined by a polygon of 10 segments. - context: | Ten stars have been found to have planets to date, and four planetary systems have been discovered with the [SuperWASP](SuperWASP \"wikilink\") project. [HD 142](HD_142 \"wikilink\") is a yellow giant that has an apparent magnitude of 5.7, and has a planet ([HD 142b](HD_142_b \"wikilink\")) 1.36 times the mass of Jupiter which orbits every 328 days. [HD 2039](HD_2039 \"wikilink\") is a yellow subgiant with an apparent magnitude of 9.0 around 330 light years away which has a planet ([HD 2039 b](HD_2039_b \"wikilink\")) six times the mass of Jupiter. [WASP-18](WASP-18 \"wikilink\") is a star of magnitude 9.29 which was discovered to have a hot Jupiter-like planet ([WASP-18b](WASP-18b \"wikilink\")) taking less than a day to orbit the star. The planet is suspected to be causing WASP-18 to appear older than it really is. [WASP-4](WASP-4 \"wikilink\") and [WASP-5](WASP-5 \"wikilink\") are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. [WASP-29](WASP-29 \"wikilink\") is an orange dwarf of spectral type K4V and visual magnitude 11.3, which has a planetary companion of similar size and mass to Saturn. The planet completes an orbit every 3.9 days. questions_and_answers: - question: In the Phoenix constellation, how many stars have planets? 
answer: | In the Phoenix constellation, ten stars have been found to have planets to date, and four planetary systems have been discovered with the SuperWASP project. - question: | What is HD 142? answer: | HD 142 is a yellow giant that has an apparent magnitude of 5.7, and has a planet (HD 142 b) 1.36 times the mass of Jupiter which orbits every 328 days. - question: | Are WASP-4 and WASP-5 solar-type yellow stars? answer: | Yes, WASP-4 and WASP-5 are solar-type yellow stars around 1000 light years distant and of 13th magnitude, each with a single planet larger than Jupiter. - context: | The constellation does not lie on the [galactic plane](galactic_plane \"wikilink\") of the Milky Way, and there are no prominent star clusters. [NGC 625](NGC_625 \"wikilink\") is a dwarf [irregular galaxy](irregular_galaxy \"wikilink\") of apparent magnitude 11.0 and lying some 12.7 million light years distant. Only 24000 light years in diameter, it is an outlying member of the [Sculptor Group](Sculptor_Group \"wikilink\"). NGC 625 is thought to have been involved in a collision and is experiencing a burst of [active star formation](Active_galactic_nucleus \"wikilink\"). [NGC 37](NGC_37 \"wikilink\") is a [lenticular galaxy](lenticular_galaxy \"wikilink\") of apparent magnitude 14.66. It is approximately 42 [kiloparsecs](kiloparsecs \"wikilink\") (137,000 [light-years](light-years \"wikilink\")) in diameter and about 12.9 billion years old. [Robert's Quartet](Robert's_Quartet \"wikilink\") (composed of the irregular galaxy [NGC 87](NGC_87 \"wikilink\"), and three spiral galaxies [NGC 88](NGC_88 \"wikilink\"), [NGC 89](NGC_89 \"wikilink\") and [NGC 92](NGC_92 \"wikilink\")) is a group of four galaxies located around 160 million light-years away which are in the process of colliding and merging. They are within a circle of radius of 1.6 arcmin, corresponding to about 75,000 light-years. Located in the galaxy ESO 243-49 is [HLX-1](HLX-1 \"wikilink\"), an [intermediate-mass black hole](intermediate-mass_black_hole \"wikilink\")-the first one of its kind identified. It is thought to be a remnant of a dwarf galaxy that was absorbed in a [collision](Interacting_galaxy \"wikilink\") with ESO 243-49. Before its discovery, this class of black hole was only hypothesized. questions_and_answers: - question: | Is the Phoenix Constellation part of the Milky Way? answer: | The Phoenix constellation does not lie on the galactic plane of the Milky Way, and there are no prominent star clusters. - question: | How many light years away is NGC 625? answer: | NGC 625 is 24000 light years in diameter and is an outlying member of the Sculptor Group. - question: | What is Robert's Quartet composed of? answer: | Robert's Quartet is composed of the irregular galaxy NGC 87, and three spiral galaxies NGC 88, NGC 89 and NGC 92. document_outline: | 7 Information about the Phoenix Constellation including the history, characteristics, and features of the stars in the constellation. document: repo: https://github.com/<profile>/<repo-name> / 8 commit: <commit hash> 9 patterns: - phoenix_constellation.md 10",
"Title of work: Phoenix (constellation) Link to work: https://en.wikipedia.org/wiki/Phoenix_(constellation) Revision: https://en.wikipedia.org/w/index.php?title=Phoenix_(constellation)&oldid=1237187773 License of the work: CC-BY-SA-4.0 Creator names: Wikipedia Authors",
"ilab data generate",
"Starting a temporary vLLM server at http://127.0.0.1:47825/v1 INFO 2024-08-22 17:01:09,461 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 1/120 INFO 2024-08-22 17:01:14,213 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 2/120 INFO 2024-08-22 17:01:19,142 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:47825/v1, this might take a moment... Attempt: 3/120",
"INFO 2024-08-22 15:16:38,933 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 73/120 INFO 2024-08-22 15:16:43,497 instructlab.model.backends.backends:480: Waiting for the vLLM server to start at http://127.0.0.1:49311/v1, this might take a moment... Attempt: 74/120 INFO 2024-08-22 15:16:45,949 instructlab.model.backends.backends:487: vLLM engine successfully started at http://127.0.0.1:49311/v1 Generating synthetic data using '/usr/share/instructlab/sdg/pipelines/agentic' pipeline, '/var/home/cloud-user/.cache/instructlab/models/mixtral-8x7b-instruct-v0-1' model, '/var/home/cloud-user/.local/share/instructlab/taxonomy' taxonomy, against http://127.0.0.1:49311/v1 server INFO 2024-08-22 15:16:46,594 instructlab.sdg:375: Synthesizing new instructions. If you aren't satisfied with the generated instructions, interrupt training (Ctrl-C) and try adjusting your YAML files. Adding more examples may help.",
"INFO 2024-08-16 17:12:46,548 instructlab.sdg.datamixing:200: Mixed Dataset saved to /home/example-user/.local/share/instructlab/datasets/skills_train_msgs_2024-08-16T16_50_11.jsonl INFO 2024-08-16 17:12:46,549 instructlab.sdg:438: Generation took 1355.74s",
"ls ~/.local/share/instructlab/datasets/",
"knowledge_recipe_2024-08-13T20_54_21.yaml skills_recipe_2024-08-13T20_54_21.yaml knowledge_train_msgs_2024-08-13T20_54_21.jsonl skills_train_msgs_2024-08-13T20_54_21.jsonl messages_granite-7b-lab-Q4_K_M_2024-08-13T20_54_21.jsonl node_datasets_2024-08-13T15_12_12/",
"cat ~/.local/share/datasets/<jsonl-dataset>",
"{\"messages\":[{\"content\":\"I am, Red Hat\\u00ae Instruct Model based on Granite 7B, an AI language model developed by Red Hat and IBM Research, based on the Granite-7b-base language model. My primary function is to be a chat assistant.\",\"role\":\"system\"},{\"content\":\"<|user|>\\n### Deep-sky objects\\n\\nThe constellation does not lie on the [galactic\\nplane](galactic_plane \\\"wikilink\\\") of the Milky Way, and there are no\\nprominent star clusters. [NGC 625](NGC_625 \\\"wikilink\\\") is a dwarf\\n[irregular galaxy](irregular_galaxy \\\"wikilink\\\") of apparent magnitude\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\nyears in diameter, it is an outlying member of the [Sculptor\\nGroup](Sculptor_Group \\\"wikilink\\\"). NGC 625 is thought to have been\\ninvolved in a collision and is experiencing a burst of [active star\\nformation](Active_galactic_nucleus \\\"wikilink\\\"). [NGC\\n37](NGC_37 \\\"wikilink\\\") is a [lenticular\\ngalaxy](lenticular_galaxy \\\"wikilink\\\") of apparent magnitude 14.66. It is\\napproximately 42 [kiloparsecs](kiloparsecs \\\"wikilink\\\") (137,000\\n[light-years](light-years \\\"wikilink\\\")) in diameter and about 12.9\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\"wikilink\\\")\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\"wikilink\\\"), and three\\nspiral galaxies [NGC 88](NGC_88 \\\"wikilink\\\"), [NGC 89](NGC_89 \\\"wikilink\\\")\\nand [NGC 92](NGC_92 \\\"wikilink\\\")) is a group of four galaxies located\\naround 160 million light-years away which are in the process of\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\n243-49 is [HLX-1](HLX-1 \\\"wikilink\\\"), an [intermediate-mass black\\nhole](intermediate-mass_black_hole \\\"wikilink\\\")the first one of its kind\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\nabsorbed in a [collision](Interacting_galaxy \\\"wikilink\\\") with ESO\\n243-49. Before its discovery, this class of black hole was only\\nhypothesized.\\n\\nLying within the bounds of the constellation is the gigantic [Phoenix\\ncluster](Phoenix_cluster \\\"wikilink\\\"), which is around 7.3 million light\\nyears wide and 5.7 billion light years away, making it one of the most\\nmassive [galaxy clusters](galaxy_cluster \\\"wikilink\\\"). It was first\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\nnew stars a year. Larger still is [El\\nGordo](El_Gordo_(galaxy_cluster) \\\"wikilink\\\"), or officially ACT-CL\\nJ0102-4915, whose discovery was announced in 2012. Located around\\n7.2 billion light years away, it is composed of two subclusters in the\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\nX-rays and infrared images.\\n\\n### Meteor showers\\n\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\"wikilink\\\") of two\\nannual [meteor showers](meteor_shower \\\"wikilink\\\"). The\\n[Phoenicids](Phoenicids \\\"wikilink\\\"), also known as the December\\nPhoenicids, were first observed on 3 December 1887. The shower was\\nparticularly intense in December 1956, and is thought related to the\\nbreakup of the [short-period comet](short-period_comet \\\"wikilink\\\")\\n[289P\\/Blanpain](289P\\/Blanpain \\\"wikilink\\\"). It peaks around 45 December,\\nthough is not seen every year. 
A very minor meteor shower peaks\\naround July 14 with around one meteor an hour, though meteors can be\\nseen anytime from July 3 to 18; this shower is referred to as the July\\nPhoenicids.\\n\\nHow many light years wide is the Phoenix cluster?\\n<|assistant|>\\n' 'The Phoenix cluster is around 7.3 million light years wide.'\",\"role\":\"pretraining\"}],\"metadata\":\"{\\\"sdg_document\\\": \\\"### Deep-sky objects\\\\n\\\\nThe constellation does not lie on the [galactic\\\\nplane](galactic_plane \\\\\\\"wikilink\\\\\\\") of the Milky Way, and there are no\\\\nprominent star clusters. [NGC 625](NGC_625 \\\\\\\"wikilink\\\\\\\") is a dwarf\\\\n[irregular galaxy](irregular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude\\\\n11.0 and lying some 12.7 million light years distant. Only 24000 light\\\\nyears in diameter, it is an outlying member of the [Sculptor\\\\nGroup](Sculptor_Group \\\\\\\"wikilink\\\\\\\"). NGC 625 is thought to have been\\\\ninvolved in a collision and is experiencing a burst of [active star\\\\nformation](Active_galactic_nucleus \\\\\\\"wikilink\\\\\\\"). [NGC\\\\n37](NGC_37 \\\\\\\"wikilink\\\\\\\") is a [lenticular\\\\ngalaxy](lenticular_galaxy \\\\\\\"wikilink\\\\\\\") of apparent magnitude 14.66. It is\\\\napproximately 42 [kiloparsecs](kiloparsecs \\\\\\\"wikilink\\\\\\\") (137,000\\\\n[light-years](light-years \\\\\\\"wikilink\\\\\\\")) in diameter and about 12.9\\\\nbillion years old. [Robert's Quartet](Robert's_Quartet \\\\\\\"wikilink\\\\\\\")\\\\n(composed of the irregular galaxy [NGC 87](NGC_87 \\\\\\\"wikilink\\\\\\\"), and three\\\\nspiral galaxies [NGC 88](NGC_88 \\\\\\\"wikilink\\\\\\\"), [NGC 89](NGC_89 \\\\\\\"wikilink\\\\\\\")\\\\nand [NGC 92](NGC_92 \\\\\\\"wikilink\\\\\\\")) is a group of four galaxies located\\\\naround 160 million light-years away which are in the process of\\\\ncolliding and merging. They are within a circle of radius of 1.6 arcmin,\\\\ncorresponding to about 75,000 light-years. Located in the galaxy ESO\\\\n243-49 is [HLX-1](HLX-1 \\\\\\\"wikilink\\\\\\\"), an [intermediate-mass black\\\\nhole](intermediate-mass_black_hole \\\\\\\"wikilink\\\\\\\")\\the first one of its kind\\\\nidentified. It is thought to be a remnant of a dwarf galaxy that was\\\\nabsorbed in a [collision](Interacting_galaxy \\\\\\\"wikilink\\\\\\\") with ESO\\\\n243-49. Before its discovery, this class of black hole was only\\\\nhypothesized.\\\\n\\\\nLying within the bounds of the constellation is the gigantic [Phoenix\\\\ncluster](Phoenix_cluster \\\\\\\"wikilink\\\\\\\"), which is around 7.3 million light\\\\nyears wide and 5.7 billion light years away, making it one of the most\\\\nmassive [galaxy clusters](galaxy_cluster \\\\\\\"wikilink\\\\\\\"). It was first\\\\ndiscovered in 2010, and the central galaxy is producing an estimated 740\\\\nnew stars a year. Larger still is [El\\\\nGordo](El_Gordo_(galaxy_cluster) \\\\\\\"wikilink\\\\\\\"), or officially ACT-CL\\\\nJ0102-4915, whose discovery was announced in 2012. Located around\\\\n7.2 billion light years away, it is composed of two subclusters in the\\\\nprocess of colliding, resulting in the spewing out of hot gas, seen in\\\\nX-rays and infrared images.\\\\n\\\\n### Meteor showers\\\\n\\\\nPhoenix is the [radiant](radiant_(meteor_shower) \\\\\\\"wikilink\\\\\\\") of two\\\\nannual [meteor showers](meteor_shower \\\\\\\"wikilink\\\\\\\"). The\\\\n[Phoenicids](Phoenicids \\\\\\\"wikilink\\\\\\\"), also known as the December\\\\nPhoenicids, were first observed on 3 December 1887. 
The shower was\\\\nparticularly intense in December 1956, and is thought related to the\\\\nbreakup of the [short-period comet](short-period_comet \\\\\\\"wikilink\\\\\\\")\\\\n[289P\\/Blanpain](289P\\/Blanpain \\\\\\\"wikilink\\\\\\\"). It peaks around 4\\5 December,\\\\nthough is not seen every year. A very minor meteor shower peaks\\\\naround July 14 with around one meteor an hour, though meteors can be\\\\nseen anytime from July 3 to 18; this shower is referred to as the July\\\\nPhoenicids.\\\", \\\"domain\\\": \\\"astronomy\\\", \\\"dataset\\\": \\\"document_knowledge_qa\\\"}\",\"id\":\"1df7c219-a062-4511-8bae-f55c88927dc1\"}",
"ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>",
"Training Phase 1/2 TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))",
"MMLU evaluation for Phase 1 INFO 2024-08-15 01:23:40,975 lm-eval:152: Setting random seed to 0 | Setting numpy seed to 1234 | Setting torch manual seed to 1234 INFO 2024-08-15 01:23:40,976 lm-eval:189: Initializing hf model, with arguments: {'pretrained': '/tmp/e2e/phase1/checkpoints/hf_format/samples_26112', 'dtype': 'bfloat16'} Loading checkpoint shards: 0%| | 0/3 [00:00<?, ?it/s] Loading checkpoint shards: 33%|███▎ | 1/3 [00:01<00:02, 1.28s/it] Loading checkpoint shards: 67%|██████▋ | 2/3 [00:02<00:01, 1.15s/it] Loading checkpoint shards: 100%|██████████| 3/3 [00:02<00:00, 1.36it/s] Loading checkpoint shards: 100%|██████████| 3/3 [00:02<00:00, 1.16it/s]",
"Training Phase 2/2 TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))",
"MT-Bench evaluation for Phase 2 Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1",
"Training finished! Best final checkpoint: samples_1945 with score: 6.813759384",
"ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/",
"samples_1711 samples_1945 samples_1456 samples_1462 samples_1903",
"ilab model evaluate --benchmark mmlu_branch --model ~/.local/share/instructlab/phased/<phase1-or-phase2>/checkpoints/hf_format/<checkpoint> --tasks-dir ~/.local/share/instructlab/datasets/<node-dataset> --base-model ~/.cache/instructlab/models/granite-7b-starter",
"KNOWLEDGE EVALUATION REPORT ## BASE MODEL /home/<example-user>/.local/share/instructlab/models/instructlab/granite-7b-starter ## MODEL /home/<example-user>/.local/share/instructlab/models/instructlab/granite-7b-starter ### AVERAGE: +1.0(across 1)",
"ilab model evaluate --benchmark mt_bench_branch --model ~/.local/share/checkpoints/hf_format/<checkpoint> --judge-model ~/.cache/instructlab/models/prometheus-8x7b-v2-0 --branch <worker-branch> --base-branch main --gpus <num-gpus> --enable-serving-output",
"SKILL EVALUATION REPORT ## BASE MODEL /home/example/.local/share/instructlab/models/instructlab/granite-7b-lab ## MODEL /home/example/.local/share/instructlab/models/instructlab/granite-7b-lab ### IMPROVEMENTS: 1. compositional_skills/extraction/receipt/markdown/qna.yaml (+4.0) ### REGRESSIONS: 1. compositional_skills/extraction/abstractive/title/qna.yaml (-5.0) ### NO CHANGE: 2. compositional_skills/extraction/commercial_lease_agreement/csv/qna.yaml ### ERROR RATE: 0.32",
"ilab model evaluate --benchmark mmlu --model ~/.local/share/checkpoints/hf_format/<checkpoint>",
"KNOWLEDGE EVALUATION REPORT ## MODEL /home/<example-user>/.local/share/instructlab/models/instructlab/granite-7b-lab ### AVERAGE: 0.45 (across 3) ### SCORES: mmlu_abstract_algebra - 0.35 mmlu_anatomy - 0.44 mmlu_astronomy - 0.55",
"ilab model evaluate --benchmark mt_bench --model ~/.local/share/instructlab/checkpoints/hf_format/<checkpoint> --enable-serving-output",
"SKILL EVALUATION REPORT ## MODEL /home/<example-user>/.local/share/instructlab/models/instructlab/granite-7b-lab ### AVERAGE: 8.07 (across 91) ### TURN ONE: 8.64 ### TURN TWO: 7.19 ### ERROR RATE: 0.43",
"ilab model serve --model-path <path-to-best-performed-checkpoint>",
"ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945/",
"ilab model serve --model-path ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/<checkpoint> INFO 2024-03-02 02:21:11,352 lab.py:201 Using model /home/example-user/.local/share/instructlab/checkpoints/hf_format/checkpoint_1945 with -1 gpu-layers and 4096 max context size. Starting server process After application startup complete see http://127.0.0.1:8000/docs for API. Press CTRL+C to shut down the server.",
"ilab model chat --model <path-to-best-performed-checkpoint-file>",
"ilab model chat --model ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_1945",
"ilab model chat ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────── system ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ Welcome to InstructLab Chat w/ CHECKPOINT_1945 (type /h for help) │ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ >>> [S][default]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.1/html-single/creating_a_custom_llm_using_rhel_ai/index |
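If you want to compare several phase 2 checkpoints rather than evaluating a single one, the MT-Bench command shown in the list above can be run in a simple loop. This is a minimal sketch and not part of the official workflow; the checkpoint path is the default phased-training location and the results file name is an arbitrary choice.
# Run MT-Bench against every phase 2 checkpoint and append the reports to one file (illustrative loop)
for ckpt in ~/.local/share/instructlab/phased/phase2/checkpoints/hf_format/samples_*; do
    echo "=== ${ckpt} ===" >> mt_bench_results.txt
    ilab model evaluate --benchmark mt_bench --model "${ckpt}" --enable-serving-output >> mt_bench_results.txt
done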
Chapter 3. Managing access to repositories | Chapter 3. Managing access to repositories As a Red Hat Quay user, you can create your own repositories and make them accessible to other users on your Red Hat Quay instance. As an alternative, you can create organizations to allow access to repositories based on teams. In both user and organization repositories, you can allow access to those repositories by creating credentials associated with robot accounts. Robot accounts make it easy for a variety of container clients (such as docker or podman) to access your repos, without requiring that the client have a Red Hat Quay user account. 3.1. Allowing access to user repositories When you create a repository in a user namespace, you can add access to that repository to user accounts or through robot accounts. 3.1.1. Allowing user access to a user repository To allow access to a repository associated with a user account, do the following: Log into your Red Hat Quay user account. Select a repository under your user namespace to which you want to share access. Select the Settings icon from the left column. Type the name of the user to which you want to grant access to your repository. The user name should appear as you type, as shown in the following figure: In the permissions box, select one of the following: Read - Allows the user to view the repository and pull from it. Write - Allows the user to view the repository, as well as pull images from or push images to the repository. Admin - Allows all administrative settings to the repository, as well as all Read and Write permissions. Select the Add Permission button. The user now has the assigned permission. To remove the user permissions to the repository, select the Options icon to the right of the user entry, then select Delete Permission. 3.2. Allowing robot access to a user repository Robot accounts are used to set up automated access to the repositories in your Red Hat Quay registry. They are similar to OpenShift service accounts. When you set up a robot account, you: Generate credentials that are associated with the robot account Identify repositories and images that the robot can push images to or pull images from Copy and paste generated credentials to use with different container clients (such as Docker, podman, Kubernetes, Mesos and others) to access each defined repository Keep in mind that each robot account is limited to a single user namespace or organization. So, for example, the robot could provide access to all repositories accessible to a user jsmith, but not to any that are not in the user's list of repositories. The following procedure steps you through setting up a robot account to allow access to your repositories. Select Robot icon: From the Repositories view, select the Robot icon from the left column. Create Robot account: Select the Create Robot Account button. Set Robot name: Enter the name and description, then select the Create robot account button. 
The robot name becomes a combination of your user name, plus the robot name you set (for example, jsmith+myrobot) Add permission to the robot account: From the Add permissions screen for the robot account, define the repositories you want the robot to access as follows: Put a check mark to each repository the robot can access For each repository, select one of the following, and click Add permissions: None - Robot has no permission to the repository Read - Robot can view and pull from the repository Write - Robot can read (pull) from and write (push) to the repository Admin - Full access to pull from and push to the repository, plus the ability to do administrative tasks associated with the repository Select the Add permissions button to apply the settings Get credentials to access repositories via the robot: Back on the Robot Accounts page, select the Robot account name to see credential information for that robot. Get the token: Select Robot Token, as shown in the following figure, to see the token that was generated for the robot. If you want to reset the token, select Regenerate Token. Note It is important to understand that regenerating a token makes any tokens for this robot invalid. Get credentials: Once you are satisfied with the generated token, get the resulting credentials in the following ways: Kubernetes Secret: Select this to download credentials in the form of a Kubernetes pull secret yaml file. rkt Configuration: Select this to download credentials for the rkt container runtime in the form of a json file. Docker Login: Select this to copy a full docker login command line that includes the credentials. Docker Configuration: Select this to download a file to use as a Docker config.json file, to permanently store the credentials on your client system. Mesos Credentials: Select this to download a tarball that provides the credentials that can be identified in the uris field of a Mesos configuration file. 3.3. Allowing access to organization repositories Once you have created an organization, you can associate a set of repositories directly to that organization. To add access to the repositories in that organization, you can add Teams (sets of users with the same permissions) and individual users. Essentially, an organization has the same ability to create repositories and robot accounts as a user does, but an organization is intended to set up shared repositories through groups of users (in teams or individually). Other things to know about organizations: You cannot have an organization in another organization. To subdivide an organization, you use teams. Organizations can't contain users directly. You must first add a team, then add one or more users to each team. Teams can be set up in organizations as just members who use the repos and associated images or as administrators with special privileges for managing the organization 3.3.1. Adding a Team to an organization When you create a team for your organization you can select the team name, choose which repositories to make available to the team, and decide the level of access to the team. From the Organization view, select the Teams and Membership icon from the left column. You will see that an owners Team exists with Admin privilege for the user who created the Organization. Select Create New Team. You are prompted for the new team name to be associated with the organization. 
Type the team name, which must start with a lowercase letter, with the rest of the team name as any combination of lowercase letters and numbers (no capitals or special characters allowed). Select the Create team button. The Add permissions window appears, displaying a list of repositories in the organization. Check each repository you want the team to be able to access. Then select one of the following permissions for each: Read - Team members are able to view and pull images Write - Team members can view, pull, and push images Admin - Team members have full read/write privilege, plus the ability to do administrative tasks related to the repository Select Add permissions to save the repository permissions for the team. 3.3.2. Setting a Team role After you have added a team, you can set the role of that team within the organization. From the Teams and Membership screen within the organization, select the TEAM ROLE drop-down menu, as shown in the following figure: For the selected team, choose one of the following roles: Member - Inherits all permissions set for the team Creator - All member permissions, plus the ability to create new repositories Admin - Full administrative access to the organization, including the ability to create teams, add members, and set permissions. 3.3.3. Adding users to a Team As someone with Admin privilege to an organization, you can add users and robots to a team. When you add a user, it sends an email to that user. The user remains pending until that user accepts the invitation. To add users or robots to a team, start from the organization's screen and do the following: Select the team you want to add users or robots to. In the Team Members box, type one of the following: A username from an account on the Red Hat Quay registry The email address for a user account on the registry The name of a robot account. The name must be in the form of orgname+robotname In the case of the robot account, it is immediately added to the team. For a user account, an invitation to join is mailed to the user. Until the user accepts that invitation, the user remains in the INVITED TO JOIN state. Next, the user accepts the email invitation to join the team. The next time the user logs in to the Red Hat Quay instance, the user moves from the INVITED TO JOIN list to the MEMBERS list for the organization. | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/use_red_hat_quay/use-quay-manage-repo |
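To illustrate how the robot account credentials described above are typically used from a container client, the following podman commands log in with a robot token and pull an image. This is a minimal sketch: the registry host, organization, robot name, token, and repository are placeholder values, not values taken from this guide.
# Log in to the registry with the robot account credentials (placeholder values)
podman login --username 'myorg+myrobot' --password '<robot_token>' quay.example.com
# Pull an image from a repository the robot has at least Read permission on
podman pull quay.example.com/myorg/myimage:latest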
F.6. About Runnable Interfaces | F.6. About Runnable Interfaces A Runnable Interface (also known as a Runnable) declares a single run() method, which executes the active part of the class' code. The Runnable object can be executed in its own thread after it is passed to a thread constructor. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/runnable |
4.222. perl-Net-DNS | 4.222. perl-Net-DNS 4.222.1. RHBA-2011:1271 - perl-Net-DNS bug fix update An updated perl-Net-DNS package that fixes one bug is now available for Red Hat Enterprise Linux 6. The perl-Net-DNS package contains a collection of Perl modules that act as a Domain Name System (DNS) resolver. It allows the programmer to perform DNS queries that are beyond the capabilities of the gethostbyname and gethostbyaddr routines. Bug Fix BZ# 688211 Prior to this update, perl-Net-DNS lacked complete IPv6 functionality. This update adds the dependencies related to IPv6 and, in addition, prevents the possibility of an interactive (re)build. All users of perl-Net-DNS should upgrade to this updated package, which fixes this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/perl-net-dns |
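As a minimal sketch of how such an advisory is typically applied on a Red Hat Enterprise Linux 6 host, the following generic package-management commands check the installed version and pull in the updated package; they are standard rpm and yum usage, not commands taken from the advisory itself.
# Check which version of the package is currently installed
rpm -q perl-Net-DNS
# Apply the updated package from the enabled repositories
yum update perl-Net-DNS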
probe::vm.kmalloc_node | probe::vm.kmalloc_node Name probe::vm.kmalloc_node - Fires when kmalloc_node is requested Synopsis vm.kmalloc_node Values caller_function name of the caller function gfp_flag_name type of kmemory to allocate (in string format) call_site address of the function calling this kmemory function gfp_flags type of kmemory to allocate bytes_req requested Bytes name name of the probe point ptr pointer to the kmemory allocated bytes_alloc allocated Bytes | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-vm-kmalloc-node |
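A minimal sketch of how this probe point can be used from the command line: the stap one-liner below prints the values listed above each time vm.kmalloc_node fires. The output format is an illustrative choice, not part of the tapset reference.
# Print the caller, requested and allocated bytes, and GFP flag name for each kmalloc_node allocation
stap -e 'probe vm.kmalloc_node { printf("%s req=%d alloc=%d flags=%s\n", caller_function, bytes_req, bytes_alloc, gfp_flag_name) }'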
Chapter 1. Red Hat 3scale API Management 2.15.3 - Patch release | Chapter 1. Red Hat 3scale API Management 2.15.3 - Patch release 1.1. New features Red Hat 3scale API Management 2.15.3 introduces the following new feature and enhancement: Added compatibility with OpenShift version 4.18. | null | https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/release_notes_for_red_hat_3scale_api_management_2.15_on-premises/red_hat_3scale_api_management_2_15_3_patch_release |
Chapter 10. Advanced migration options | Chapter 10. Advanced migration options You can automate your migrations and modify the MigPlan and MigrationController custom resources in order to perform large-scale migrations and to improve performance. 10.1. Terminology Table 10.1. MTC terminology Term Definition Source cluster Cluster from which the applications are migrated. Destination cluster [1] Cluster to which the applications are migrated. Replication repository Object storage used for copying images, volumes, and Kubernetes objects during indirect migration or for Kubernetes objects during direct volume migration or direct image migration. The replication repository must be accessible to all clusters. Host cluster Cluster on which the migration-controller pod and the web console are running. The host cluster is usually the destination cluster but this is not required. The host cluster does not require an exposed registry route for direct image migration. Remote cluster A remote cluster is usually the source cluster but this is not required. A remote cluster requires a Secret custom resource that contains the migration-controller service account token. A remote cluster requires an exposed secure registry route for direct image migration. Indirect migration Images, volumes, and Kubernetes objects are copied from the source cluster to the replication repository and then from the replication repository to the destination cluster. Direct volume migration Persistent volumes are copied directly from the source cluster to the destination cluster. Direct image migration Images are copied directly from the source cluster to the destination cluster. Stage migration Data is copied to the destination cluster without stopping the application. Running a stage migration multiple times reduces the duration of the cutover migration. Cutover migration The application is stopped on the source cluster and its resources are migrated to the destination cluster. State migration Application state is migrated by copying specific persistent volume claims to the destination cluster. Rollback migration Rollback migration rolls back a completed migration. 1 Called the target cluster in the MTC web console. 10.2. Migrating applications by using the command line You can migrate applications with the MTC API by using the command line interface (CLI) in order to automate the migration. 10.2.1. Migration prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Direct image migration You must ensure that the secure OpenShift image registry of the source cluster is exposed. You must create a route to the exposed registry. Direct volume migration If your clusters use proxies, you must configure an Stunnel TCP proxy. Clusters The source cluster must be upgraded to the latest MTC z-stream release. The MTC version must be the same on all clusters. Network The clusters have unrestricted network access to each other and to the replication repository. If you copy the persistent volumes with move , the clusters must have unrestricted network access to the remote volumes. You must enable the following ports on an OpenShift Container Platform 4 cluster: 6443 (API server) 443 (routes) 53 (DNS) You must enable port 443 on the replication repository if you are using TLS. Persistent volumes (PVs) The PVs must be valid. The PVs must be bound to persistent volume claims. If you use snapshots to copy the PVs, the following additional prerequisites apply: The cloud provider must support snapshots. 
The PVs must have the same cloud provider. The PVs must be located in the same geographic region. The PVs must have the same storage class. 10.2.2. Creating a registry route for direct image migration For direct image migration, you must create a route to the exposed OpenShift image registry on all remote clusters. Prerequisites The OpenShift image registry must be exposed to external traffic on all remote clusters. The OpenShift Container Platform 4 registry is exposed by default. Procedure To create a route to an OpenShift Container Platform 4 registry, run the following command: USD oc create route passthrough --service=image-registry -n openshift-image-registry 10.2.3. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.15, the MTC inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 10.2.3.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 10.2.3.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 10.2.3.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 10.2.3.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. 
If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 10.2.3.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 10.2.3.2.1. NetworkPolicy configuration 10.2.3.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 10.2.3.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 10.2.3.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 10.2.3.2.3. Choosing alternate endpoints for data transfer By default, DVM uses an OpenShift Container Platform route as an endpoint to transfer PV data to destination clusters. You can choose another type of supported endpoint, if cluster topologies allow. 
For each cluster, you can configure an endpoint by setting the rsync_endpoint_type variable on the appropriate destination cluster in your MigrationController CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route] 10.2.3.2.4. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 10.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 10.2.3.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration 10.2.4. Migrating an application by using the MTC API You can migrate an application from the command line by using the Migration Toolkit for Containers (MTC) API. Procedure Create a MigCluster CR manifest for the host cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF Create a Secret object manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF 1 Specify the base64-encoded migration-controller service account (SA) token of the remote cluster. 
You can obtain the token by running the following command: USD oc sa get-token migration-controller -n openshift-migration | base64 -w 0 Create a MigCluster CR manifest for each remote cluster: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF 1 Specify the Cluster CR of the remote cluster. 2 Optional: For direct image migration, specify the exposed registry route. 3 SSL verification is enabled if false . CA certificates are not required or checked if true . 4 Specify the Secret object of the remote cluster. 5 Specify the URL of the remote cluster. Verify that all clusters are in a Ready state: USD oc describe MigCluster <cluster> Create a Secret object manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF 1 Specify the key ID in base64 format. 2 Specify the secret key in base64 format. AWS credentials are base64-encoded by default. For other storage providers, you must encode your credentials by running the following command with each key: USD echo -n "<key>" | base64 -w 0 1 1 Specify the key ID or the secret key. Both keys must be base64-encoded. Create a MigStorage CR manifest for the replication repository: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF 1 Specify the bucket name. 2 Specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 3 Specify the storage provider. 4 Optional: If you are copying data by using snapshots, specify the Secrets CR of the object storage. You must ensure that the credentials stored in the Secrets CR of the object storage are correct. 5 Optional: If you are copying data by using snapshots, specify the storage provider. Verify that the MigStorage CR is in a Ready state: USD oc describe migstorage <migstorage> Create a MigPlan CR manifest: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF 1 Direct image migration is enabled if false . 2 Direct volume migration is enabled if false . 3 Specify the name of the MigStorage CR instance. 4 Specify one or more source namespaces. By default, the destination namespace has the same name. 
5 Specify a destination namespace if it is different from the source namespace. 6 Specify the name of the source cluster MigCluster instance. Verify that the MigPlan instance is in a Ready state: USD oc describe migplan <migplan> -n openshift-migration Create a MigMigration CR manifest to start the migration defined in the MigPlan instance: USD cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF 1 Specify the MigPlan CR name. 2 The pods on the source cluster are stopped before migration if true . 3 A stage migration, which copies most of the data without stopping the application, is performed if true . 4 A completed migration is rolled back if true . Verify the migration by watching the MigMigration CR progress: USD oc watch migmigration <migmigration> -n openshift-migration The output resembles the following: Example output Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration ... Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47 10.2.5. State migration You can perform repeatable, state-only migrations by using Migration Toolkit for Containers (MTC) to migrate persistent volume claims (PVCs) that constitute an application's state. 
You migrate specified PVCs by excluding other PVCs from the migration plan. You can map the PVCs to ensure that the source and the target PVCs are synchronized. Persistent volume (PV) data is copied to the target cluster. The PV references are not moved, and the application pods continue to run on the source cluster. State migration is specifically designed to be used in conjunction with external CD mechanisms, such as OpenShift Gitops. You can migrate application manifests using GitOps while migrating the state using MTC. If you have a CI/CD pipeline, you can migrate stateless components by deploying them on the target cluster. Then you can migrate stateful components by using MTC. You can perform a state migration between clusters or within the same cluster. Important State migration migrates only the components that constitute an application's state. If you want to migrate an entire namespace, use stage or cutover migration. Prerequisites The state of the application on the source cluster is persisted in PersistentVolumes provisioned through PersistentVolumeClaims . The manifests of the application are available in a central repository that is accessible from both the source and the target clusters. Procedure Migrate persistent volume data from the source to the target cluster. You can perform this step as many times as needed. The source application continues running. Quiesce the source application. You can do this by setting the replicas of workload resources to 0 , either directly on the source cluster or by updating the manifests in GitHub and re-syncing the Argo CD application. Clone application manifests to the target cluster. You can use Argo CD to clone the application manifests to the target cluster. Migrate the remaining volume data from the source to the target cluster. Migrate any new data created by the application during the state migration process by performing a final data migration. If the cloned application is in a quiesced state, unquiesce it. Switch the DNS record to the target cluster to re-direct user traffic to the migrated application. Note MTC 1.6 cannot quiesce applications automatically when performing state migration. It can only migrate PV data. Therefore, you must use your CD mechanisms for quiescing or unquiescing applications. MTC 1.7 introduces explicit Stage and Cutover flows. You can use staging to perform initial data transfers as many times as needed. Then you can perform a cutover, in which the source applications are quiesced automatically. Additional resources See Excluding PVCs from migration to select PVCs for state migration. See Mapping PVCs to migrate source PV data to provisioned PVCs on the destination cluster. See Migrating Kubernetes objects to migrate the Kubernetes objects that constitute an application's state. 10.3. Migration hooks You can add up to four migration hooks to a single migration plan, with each hook running at a different phase of the migration. Migration hooks perform tasks such as customizing application quiescence, manually migrating unsupported data types, and updating applications after migration. A migration hook runs on a source or a target cluster at one of the following migration steps: PreBackup : Before resources are backed up on the source cluster. PostBackup : After resources are backed up on the source cluster. PreRestore : Before resources are restored on the target cluster. PostRestore : After resources are restored on the target cluster. 
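A minimal sketch of the quiescing step described in the state migration procedure above, assuming you quiesce directly on the source cluster with oc rather than through a CD mechanism; the deployment and namespace names are placeholders.
# Quiesce the application on the source cluster by scaling its workload resource to zero replicas
oc scale deployment <deployment_name> --replicas=0 -n <source_namespace>
# Unquiesce the cloned application on the target cluster by restoring its replica count (1 is used here as an example)
oc scale deployment <deployment_name> --replicas=1 -n <destination_namespace>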
You can create a hook by creating an Ansible playbook that runs with the default Ansible image or with a custom hook container. Ansible playbook The Ansible playbook is mounted on a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan custom resource. The job continues to run until it reaches the default limit of 6 retries or a successful completion. This continues even if the initial pod is evicted or killed. The default Ansible runtime image is registry.redhat.io/rhmtc/openshift-migration-hook-runner-rhel7:1.8 . This image is based on the Ansible Runner image and includes python-openshift for Ansible Kubernetes resources and an updated oc binary. Custom hook container You can use a custom hook container instead of the default Ansible image. 10.3.1. Writing an Ansible playbook for a migration hook You can write an Ansible playbook to use as a migration hook. The hook is added to a migration plan by using the MTC web console or by specifying values for the spec.hooks parameters in the MigPlan custom resource (CR) manifest. The Ansible playbook is mounted onto a hook container as a config map. The hook container runs as a job, using the cluster, service account, and namespace specified in the MigPlan CR. The hook container uses a specified service account token so that the tasks do not require authentication before they run in the cluster. 10.3.1.1. Ansible modules You can use the Ansible shell module to run oc commands. Example shell module - hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces You can use kubernetes.core modules, such as k8s_info , to interact with Kubernetes resources. Example k8s_facts module - hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: "{{ lookup( 'env', 'HOSTNAME') }}" register: pods - name: Print pod name debug: msg: "{{ pods.resources[0].metadata.name }}" You can use the fail module to produce a non-zero exit status in cases where a non-zero exit status would not normally be produced, ensuring that the success or failure of a hook is detected. Hooks run as jobs and the success or failure status of a hook is based on the exit status of the job container. Example fail module - hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: "fail" fail: msg: "Cause a failure" when: do_fail 10.3.1.2. Environment variables The MigPlan CR name and migration namespaces are passed as environment variables to the hook container. These variables are accessed by using the lookup plugin. Example environment variables - hosts: localhost gather_facts: false tasks: - set_fact: namespaces: "{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}" - debug: msg: "{{ item }}" with_items: "{{ namespaces }}" - debug: msg: "{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}" 10.4. Migration plan options You can exclude, edit, and map components in the MigPlan custom resource (CR). 10.4.1. Excluding resources You can exclude resources, for example, image streams, persistent volumes (PVs), or subscriptions, from a Migration Toolkit for Containers (MTC) migration plan to reduce the resource load for migration or to migrate images or PVs with a different tool. By default, the MTC excludes service catalog resources and Operator Lifecycle Manager (OLM) resources from migration. 
These resources are parts of the service catalog API group and the OLM API group, neither of which is supported for migration at this time. Procedure Edit the MigrationController custom resource manifest: USD oc edit migrationcontroller <migration_controller> -n openshift-migration Update the spec section by adding parameters to exclude specific resources. For those resources that do not have their own exclusion parameters, add the additional_excluded_resources parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2 ... 1 Add disable_image_migration: true to exclude image streams from the migration. imagestreams is added to the excluded_resources list in main.yml when the MigrationController pod restarts. 2 Add disable_pv_migration: true to exclude PVs from the migration plan. persistentvolumes and persistentvolumeclaims are added to the excluded_resources list in main.yml when the MigrationController pod restarts. Disabling PV migration also disables PV discovery when you create the migration plan. 3 You can add OpenShift Container Platform resources that you want to exclude to the additional_excluded_resources list. Wait two minutes for the MigrationController pod to restart so that the changes are applied. Verify that the resource is excluded: USD oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1 The output contains the excluded resources: Example output name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims 10.4.2. Mapping namespaces If you map namespaces in the MigPlan custom resource (CR), you must ensure that the namespaces are not duplicated on the source or the destination clusters because the UID and GID ranges of the namespaces are copied during migration. Two source namespaces mapped to the same destination namespace spec: namespaces: - namespace_2 - namespace_1:namespace_2 If you want the source namespace to be mapped to a namespace of the same name, you do not need to create a mapping. By default, a source namespace and a target namespace have the same name. Incorrect namespace mapping spec: namespaces: - namespace_1:namespace_1 Correct namespace reference spec: namespaces: - namespace_1 10.4.3. Excluding persistent volume claims You select persistent volume claims (PVCs) for state migration by excluding the PVCs that you do not want to migrate. You exclude PVCs by setting the spec.persistentVolumes.pvc.selection.action parameter of the MigPlan custom resource (CR) after the persistent volumes (PVs) have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Add the spec.persistentVolumes.pvc.selection.action parameter to the MigPlan CR and set it to skip : apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: ... selection: action: skip 10.4.4. 
Mapping persistent volume claims You can migrate persistent volume (PV) data from the source cluster to persistent volume claims (PVCs) that are already provisioned in the destination cluster in the MigPlan CR by mapping the PVCs. This mapping ensures that the destination PVCs of migrated applications are synchronized with the source PVCs. You map PVCs by updating the spec.persistentVolumes.pvc.name parameter in the MigPlan custom resource (CR) after the PVs have been discovered. Prerequisites MigPlan CR is in a Ready state. Procedure Update the spec.persistentVolumes.pvc.name parameter in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: ... persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1 1 Specify the PVC on the source cluster and the PVC on the destination cluster. If the destination PVC does not exist, it will be created. You can use this mapping to change the PVC name during migration. 10.4.5. Editing persistent volume attributes After you create a MigPlan custom resource (CR), the MigrationController CR discovers the persistent volumes (PVs). The spec.persistentVolumes block and the status.destStorageClasses block are added to the MigPlan CR. You can edit the values in the spec.persistentVolumes.selection block. If you change values outside the spec.persistentVolumes.selection block, the values are overwritten when the MigPlan CR is reconciled by the MigrationController CR. Note The default value for the spec.persistentVolumes.selection.storageClass parameter is determined by the following logic: If the source cluster PV is Gluster or NFS, the default is either cephfs , for accessMode: ReadWriteMany , or cephrbd , for accessMode: ReadWriteOnce . If the PV is neither Gluster nor NFS or if cephfs or cephrbd are not available, the default is a storage class for the same provisioner. If a storage class for the same provisioner is not available, the default is the default storage class of the destination cluster. You can change the storageClass value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. If the storageClass value is empty, the PV will have no storage class after migration. This option is appropriate if, for example, you want to move the PV to an NFS volume on the destination cluster. Prerequisites MigPlan CR is in a Ready state. Procedure Edit the spec.persistentVolumes.selection values in the MigPlan CR: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs 1 Allowed values are move , copy , and skip . If only one action is supported, the default value is the supported action. If multiple actions are supported, the default value is copy . 2 Allowed values are snapshot and filesystem . Default value is filesystem . 3 The verify parameter is displayed if you select the verification option for file system copy in the MTC web console. You can set it to false . 4 You can change the default value to the value of any name parameter in the status.destStorageClasses block of the MigPlan CR. 
If no value is specified, the PV will have no storage class after migration. 5 Allowed values are ReadWriteOnce and ReadWriteMany . If this value is not specified, the default is the access mode of the source cluster PVC. You can only edit the access mode in the MigPlan CR. You cannot edit it by using the MTC web console. 10.4.6. Converting storage classes in the MTC web console You can convert the storage class of a persistent volume (PV) by migrating it within the same cluster. To do so, you must create and run a migration plan in the Migration Toolkit for Containers (MTC) web console. Prerequisites You must be logged in as a user with cluster-admin privileges on the cluster on which MTC is running. You must add the cluster to the MTC web console. Procedure In the left-side navigation pane of the OpenShift Container Platform web console, click Projects . In the list of projects, click your project. The Project details page opens. Click the DeploymentConfig name. Note the name of its running pod. Open the YAML tab of the project. Find the PVs and note the names of their corresponding persistent volume claims (PVCs). In the MTC web console, click Migration plans . Click Add migration plan . Enter the Plan name . The migration plan name must contain 3 to 63 lower-case alphanumeric characters ( a-z, 0-9 ) and must not contain spaces or underscores ( _ ). From the Migration type menu, select Storage class conversion . From the Source cluster list, select the desired cluster for storage class conversion. Click . The Namespaces page opens. Select the required project. Click . The Persistent volumes page opens. The page displays the PVs in the project, all selected by default. For each PV, select the desired target storage class. Click . The wizard validates the new migration plan and shows that it is ready. Click Close . The new plan appears on the Migration plans page. To start the conversion, click the options menu of the new plan. Under Migrations , two options are displayed, Stage and Cutover . Note Cutover migration updates PVC references in the applications. Stage migration does not update PVC references in the applications. Select the desired option. Depending on which option you selected, the Stage migration or Cutover migration notification appears. Click Migrate . Depending on which option you selected, the Stage started or Cutover started message appears. To see the status of the current migration, click the number in the Migrations column. The Migrations page opens. To see more details on the current migration and monitor its progress, select the migration from the Type column. The Migration details page opens. When the migration progresses to the DirectVolume step and the status of the step becomes Running Rsync Pods to migrate Persistent Volume data , you can click View details and see the detailed status of the copies. In the breadcrumb bar, click Stage or Cutover and wait for all steps to complete. Open the PersistentVolumeClaims tab of the OpenShift Container Platform web console. You can see new PVCs with the names of the initial PVCs but ending in new , which are using the target storage class. In the left-side navigation pane, click Pods . See that the pod of your project is running again. Additional resources For details about the move and copy actions, see MTC workflow . For details about the skip action, see Excluding PVCs from migration . For details about the file system and snapshot copy methods, see About data copy methods . 10.4.7. 
Performing a state migration of Kubernetes objects by using the MTC API After you migrate all the PV data, you can use the Migration Toolkit for Containers (MTC) API to perform a one-time state migration of Kubernetes objects that constitute an application. You do this by configuring MigPlan custom resource (CR) fields to provide a list of Kubernetes resources with an additional label selector to further filter those resources, and then performing a migration by creating a MigMigration CR. The MigPlan resource is closed after the migration. Note Selecting Kubernetes resources is an API-only feature. You must update the MigPlan CR and create a MigMigration CR for it by using the CLI. The MTC web console does not support migrating Kubernetes objects. Note After migration, the closed parameter of the MigPlan CR is set to true . You cannot create another MigMigration CR for this MigPlan CR. You add Kubernetes objects to the MigPlan CR by using one of the following options: Adding the Kubernetes objects to the includedResources section. When the includedResources field is specified in the MigPlan CR, the plan takes a list of group-kind as input. Only resources present in the list are included in the migration. Adding the optional labelSelector parameter to filter the includedResources in the MigPlan . When this field is specified, only resources matching the label selector are included in the migration. For example, you can filter a list of Secret and ConfigMap resources by using the label app: frontend as a filter. Procedure Update the MigPlan CR to include Kubernetes resources and, optionally, to filter the included resources by adding the labelSelector parameter: To update the MigPlan CR to include Kubernetes resources: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" 1 Specify the Kubernetes object, for example, Secret or ConfigMap . Optional: To filter the included resources by adding the labelSelector parameter: apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: "" - kind: <kind> group: "" ... labelSelector: matchLabels: <label> 2 1 Specify the Kubernetes object, for example, Secret or ConfigMap . 2 Specify the label of the resources to migrate, for example, app: frontend . Create a MigMigration CR to migrate the selected Kubernetes resources. Verify that the correct MigPlan is referenced in migPlanRef : apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false 10.5. Migration controller options You can edit migration plan limits, enable persistent volume resizing, or enable cached Kubernetes clients in the MigrationController custom resource (CR) for large migrations and improved performance. 10.5.1. Increasing limits for large migrations You can increase the limits on migration objects and container resources for large migrations with the Migration Toolkit for Containers (MTC). Important You must test these changes before you perform a migration in a production environment. Procedure Edit the MigrationController custom resource (CR) manifest: USD oc edit migrationcontroller -n openshift-migration Update the following parameters: ... 
mig_controller_limits_cpu: "1" 1 mig_controller_limits_memory: "10Gi" 2 ... mig_controller_requests_cpu: "100m" 3 mig_controller_requests_memory: "350Mi" 4 ... mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7 ... 1 Specifies the number of CPUs available to the MigrationController CR. 2 Specifies the amount of memory available to the MigrationController CR. 3 Specifies the number of CPU units available for MigrationController CR requests. 100m represents 0.1 CPU units (100 * 1e-3). 4 Specifies the amount of memory available for MigrationController CR requests. 5 Specifies the number of persistent volumes that can be migrated. 6 Specifies the number of pods that can be migrated. 7 Specifies the number of namespaces that can be migrated. Create a migration plan that uses the updated parameters to verify the changes. If your migration plan exceeds the MigrationController CR limits, the MTC console displays a warning message when you save the migration plan. 10.5.2. Enabling persistent volume resizing for direct volume migration You can enable persistent volume (PV) resizing for direct volume migration to avoid running out of disk space on the destination cluster. When the disk usage of a PV reaches a configured level, the MigrationController custom resource (CR) compares the requested storage capacity of a persistent volume claim (PVC) to its actual provisioned capacity. Then, it calculates the space required on the destination cluster. A pv_resizing_threshold parameter determines when PV resizing is used. The default threshold is 3% . This means that PV resizing occurs when the disk usage of a PV is more than 97% . You can increase this threshold so that PV resizing occurs at a lower disk usage level. PVC capacity is calculated according to the following criteria: If the requested storage capacity ( spec.resources.requests.storage ) of the PVC is not equal to its actual provisioned capacity ( status.capacity.storage ), the greater value is used. If a PV is provisioned through a PVC and then subsequently changed so that its PV and PVC capacities no longer match, the greater value is used. Prerequisites The PVCs must be attached to one or more running pods so that the MigrationController CR can execute commands. Procedure Log in to the host cluster. Enable PV resizing by patching the MigrationController CR: USD oc patch migrationcontroller migration-controller -p '{"spec":{"enable_dvm_pv_resizing":true}}' \ 1 --type='merge' -n openshift-migration 1 Set the value to false to disable PV resizing. Optional: Update the pv_resizing_threshold parameter to increase the threshold: USD oc patch migrationcontroller migration-controller -p '{"spec":{"pv_resizing_threshold":41}}' \ 1 --type='merge' -n openshift-migration 1 The default value is 3 . When the threshold is exceeded, the following status message is displayed in the MigPlan CR status: status: conditions: ... - category: Warn durable: true lastTransitionTime: "2021-06-17T08:57:01Z" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: "False" type: PvCapacityAdjustmentRequired Note For AWS gp2 storage, this message does not appear unless the pv_resizing_threshold is 42% or greater because of the way gp2 calculates volume usage and size. ( BZ#1973148 ) 10.5.3. 
Enabling cached Kubernetes clients You can enable cached Kubernetes clients in the MigrationController custom resource (CR) for improved performance during migration. The greatest performance benefit is displayed when migrating between clusters in different regions or with significant network latency. Note Delegated tasks, for example, Rsync backup for direct volume migration or Velero backup and restore, however, do not show improved performance with cached clients. Cached clients require extra memory because the MigrationController CR caches all API resources that are required for interacting with MigCluster CRs. Requests that are normally sent to the API server are directed to the cache instead. The cache watches the API server for updates. You can increase the memory limits and requests of the MigrationController CR if OOMKilled errors occur after you enable cached clients. Procedure Enable cached clients by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_enable_cache", "value": true}]' Optional: Increase the MigrationController CR memory limits by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_limits_memory", "value": <10Gi>}]' Optional: Increase the MigrationController CR memory requests by running the following command: USD oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch \ '[{ "op": "replace", "path": "/spec/mig_controller_requests_memory", "value": <350Mi>}]' | [
"oc create route passthrough --service=image-registry -n openshift-image-registry",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_endpoint_type: [NodePort|ClusterIP|Route]",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF",
"oc sa get-token migration-controller -n openshift-migration | base64 -w 0",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF",
"oc describe MigCluster <cluster>",
"cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF",
"echo -n \"<key>\" | base64 -w 0 1",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF",
"oc describe migstorage <migstorage>",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <source_namespace_1> 4 - <source_namespace_2> - <source_namespace_3>:<destination_namespace> 5 srcMigClusterRef: name: <remote_cluster> 6 namespace: openshift-migration EOF",
"oc describe migplan <migplan> -n openshift-migration",
"cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF",
"oc watch migmigration <migmigration> -n openshift-migration",
"Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47",
"- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces",
"- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"",
"- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail",
"- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"",
"oc edit migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 additional_excluded_resources: 3 - resource1 - resource2",
"oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1",
"name: EXCLUDED_RESOURCES value: resource1,resource2,imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims",
"spec: namespaces: - namespace_2 - namespace_1:namespace_2",
"spec: namespaces: - namespace_1:namespace_1",
"spec: namespaces: - namespace_1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false",
"oc edit migrationcontroller -n openshift-migration",
"mig_controller_limits_cpu: \"1\" 1 mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration",
"oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration",
"status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'",
"oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'"
]
| https://docs.redhat.com/en/documentation/migration_toolkit_for_containers/1.8/html/migration_toolkit_for_containers/advanced-migration-options-mtc |
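A quick way to follow the state migration described in this chapter from the command line, after the MigMigration CR has been created, is to list the generated CR and inspect its status. This is only a sketch; the CR name below must be replaced with the name that generateName produced on your cluster.

oc get migmigration -n openshift-migration
oc describe migmigration <migmigration_name> -n openshift-migration

The Status section of the describe output reports the current phase and pipeline steps, in the same form as the example output shown earlier in this chapter.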
Chapter 1. Installing Ansible plug-ins for Red Hat Developer Hub | Chapter 1. Installing Ansible plug-ins for Red Hat Developer Hub Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources. Important The Ansible plug-ins are a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope . Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page. To install and configure the Ansible plugins, see Installing Ansible plug-ins for Red Hat Developer Hub . | null | https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring_dynamic_plugins/installing-ansible-plug-ins-for-red-hat-developer-hub |
Chapter 8. Viewing the status of the QuayRegistry object | Chapter 8. Viewing the status of the QuayRegistry object Lifecycle observability for a given Red Hat Quay deployment is reported in the status section of the corresponding QuayRegistry object. The Red Hat Quay Operator constantly updates this section, and this should be the first place to look for any problems or state changes in Red Hat Quay or its managed dependencies. 8.1. Viewing the registry endpoint Once Red Hat Quay is ready to be used, the status.registryEndpoint field will be populated with the publicly available hostname of the registry. 8.2. Viewing the version of Red Hat Quay in use The current version of Red Hat Quay that is running will be reported in status.currentVersion . 8.3. Viewing the conditions of your Red Hat Quay deployment Certain conditions will be reported in status.conditions . | null | https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/operator-quayregistry-status |
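The same status fields can be read directly with the OpenShift CLI instead of browsing the object. The following is a minimal sketch; example-registry and quay-enterprise are placeholder values for the QuayRegistry name and its namespace, so substitute your own.

oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.registryEndpoint}'
oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.currentVersion}'
oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.conditions}'

Alternatively, oc get quayregistry example-registry -n quay-enterprise -o yaml prints the whole status section at once.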
Chapter 24. system | Chapter 24. system 24.1. system:framework 24.1.1. Description OSGi Framework options. 24.1.2. Syntax system:framework [options] [framework] 24.1.3. Arguments Name Description framework Name of the OSGi framework to use 24.1.4. Options Name Description -nodebug, --disable-debug Disable debug for the OSGi framework --help Display this help message -debug, --enable-debug Enable debug for the OSGi framework 24.2. system:name 24.2.1. Description Show or change Karaf instance name. 24.2.2. Syntax system:name [options] [name] 24.2.3. Arguments Name Description name New name for the instance 24.2.4. Options Name Description --help Display this help message 24.3. system:property 24.3.1. Description Get or set a system property. 24.3.2. Syntax system:property [options] [key] [value] 24.3.3. Arguments Name Description key The system property name value New value for the system property 24.3.4. Options Name Description --help Display this help message -f, --file-dump Dump all system properties in a file (in data folder) -p, --persistent Persist the new value to the etc/system.properties file -u, --unset Show unset know properties with value unset 24.4. system:shutdown 24.4.1. Description Shutdown the Karaf container. 24.4.2. Syntax system:shutdown [options] [time] 24.4.3. Arguments Name Description time Shutdown after a specified delay. The time argument can have different formats. First, it can be an absolute time in the format hh:mm, in which hh is the hour (1 or 2 digits) and mm is the minute of the hour (in two digits). Second, it can be in the format m (or +m), in which m is the number of minutes to wait. The word now is an alias for 0 (or +0). 24.4.4. Options Name Description -c, --clean, --clean-all, -ca Force a clean restart by deleting the data directory --help Display this help message -h, --halt Halt the Karaf container. -cc, --clean-cache, -cc Force a clean restart by deleting the cache directory -f, --force Force the shutdown without confirmation message. -r, --reboot Reboot the Karaf container. 24.5. system:start-level 24.5.1. Description Gets or sets the system start level. 24.5.2. Syntax system:start-level [options] [level] 24.5.3. Arguments Name Description level The new system start level to set 24.5.4. Options Name Description --help Display this help message 24.6. system:version 24.6.1. Description Display the instance version 24.6.2. Syntax system:version [options] 24.6.3. Options Name Description --help Display this help message | null | https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_karaf_console_reference/system |
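A few illustrative invocations of the commands above, run from the Karaf console. The instance name, property key, delay, and start level are arbitrary example values based on the syntax documented here, not required settings.

system:version
system:name my-karaf-instance
system:property java.version
system:property -p my.custom.property someValue
system:shutdown +10
system:start-level 100

The shutdown example waits ten minutes before stopping the container; pass now instead of +10 to shut down immediately, as described in the time argument format above.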
Chapter 4. Installing a cluster on vSphere using the Assisted Installer | Chapter 4. Installing a cluster on vSphere using the Assisted Installer You can install OpenShift Container Platform on on-premise hardware or on-premise VMs by using the Assisted Installer. Installing OpenShift Container Platform by using the Assisted Installer supports x86_64 , AArch64 , ppc64le , and s390x CPU architectures. The Assisted Installer is a user-friendly installation solution offered on the Red Hat Hybrid Cloud Console. The Assisted Installer supports the various deployment platforms with a focus on the following infrastructures: Bare metal Nutanix vSphere 4.1. Additional resources Installing OpenShift Container Platform with the Assisted Installer | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_vsphere/installing-vsphere-assisted-installer |
Chapter 3. Monitoring a Ceph storage cluster | Chapter 3. Monitoring a Ceph storage cluster As a storage administrator, you can monitor the overall health of the Red Hat Ceph Storage cluster, along with monitoring the health of the individual components of Ceph. Once you have a running Red Hat Ceph Storage cluster, you might begin monitoring the storage cluster to ensure that the Ceph Monitor and Ceph OSD daemons are running, at a high-level. Ceph storage cluster clients connect to a Ceph Monitor and receive the latest version of the storage cluster map before they can read and write data to the Ceph pools within the storage cluster. So the monitor cluster must have agreement on the state of the cluster before Ceph clients can read and write data. Ceph OSDs must peer the placement groups on the primary OSD with the copies of the placement groups on secondary OSDs. If faults arise, peering will reflect something other than the active + clean state. 3.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.2. High-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of the Ceph daemons to ensure that they are up and running. High level monitoring also involves checking the storage cluster capacity to ensure that the storage cluster does not exceed its full ratio . The Red Hat Ceph Storage Dashboard is the most common way to conduct high-level monitoring. However, you can also use the command-line interface, the Ceph admin socket or the Ceph API to monitor the storage cluster. 3.2.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.2.2. Using the Ceph command interface interactively You can interactively interface with the Ceph storage cluster by using the ceph command-line utility. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To run the ceph utility in interactive mode. Bare-metal deployments: Example Container deployments: Red Hat Enterprise Linux 7 Red Hat Enterprise Linux 8 Replace MONITOR_NAME with the name of the Ceph Monitor container, found by running the docker ps or podman ps command respectively. Example This example opens an interactive terminal session on mon01 , where you can start the Ceph interactive shell. 3.2.3. Checking the storage cluster health After you start the Ceph storage cluster, and before you start reading or writing data, check the storage cluster's health first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure You can check on the health of the Ceph storage cluster with the following: If you specified non-default locations for the configuration or keyring, you can specify their locations: Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale . Wait a few moments and check it again. When the storage cluster is ready, ceph health should return a message such as HEALTH_OK . At that point, it is okay to begin using the cluster. 3.2.4. Watching storage cluster events You can watch events that are happening with the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To watch the cluster's ongoing events on the command line, open a new terminal, and then enter: Ceph will print each event. 
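If the live event stream is too noisy, the ceph command also accepts log-level watch flags on recent releases; treat the flag names below as a sketch and confirm them with ceph --help on your own version.

ceph --watch-warn
ceph --watch-error

The first restricts the stream to warning-level cluster log messages, the second to errors.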
For example, a tiny Ceph cluster consisting of one monitor and two OSDs may print the following: The output provides: Cluster ID Cluster health status The monitor map epoch and the status of the monitor quorum The OSD map epoch and the status of OSDs The placement group map version The number of placement groups and pools The notional amount of data stored and the number of objects stored The total amount of data stored 3.2.5. How Ceph calculates data usage The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available (the lesser of the two numbers) out of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned or snapshotted. Therefore, the amount of data actually stored typically exceeds the notional amount stored, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting. 3.2.6. Understanding the storage clusters usage stats To check a cluster's data usage and data distribution among pools, use the df option. It is similar to the Linux df command. You can run either the ceph df command or ceph df detail command. Example The ceph df detail command gives more details about other pool statistics such as quota objects, quota bytes, used compression, and under compression. Example The RAW STORAGE section of the output provides an overview of the amount of storage the storage cluster uses for data. CLASS: The type of devices used. SIZE: The overall storage capacity managed by the storage cluster. In the above example, if the SIZE is 90 GiB, it is the total size without the replication factor, which is three by default. The total available capacity with the replication factor is 90 GiB/3 = 30 GiB. Based on the full ratio, which is 0.85 (85%) by default, the maximum usable space is 30 GiB * 0.85 = 25.5 GiB. AVAIL: The amount of free space available in the storage cluster. In the above example, if the SIZE is 90 GiB and the USED space is 6 GiB, then the AVAIL space is 84 GiB. The total available space with the replication factor, which is three by default, is 84 GiB/3 = 28 GiB. USED: The amount of used space in the storage cluster consumed by user data, internal overhead, or reserved capacity. In the above example, 100 MiB is the total space used after the replication factor is taken into account; the actual stored data is approximately 33 MiB. RAW USED: The sum of USED space and the space allocated to the db and wal BlueStore partitions. % RAW USED: The percentage of RAW USED. Use this number in conjunction with the full ratio and near full ratio to ensure that you are not reaching the storage cluster's capacity. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 3 MB or more depending on the number of replicas (for example, size = 3), clones, and snapshots. POOL: The name of the pool. ID: The pool ID. STORED: The actual amount of data stored by the user in the pool. OBJECTS: The notional number of objects stored per pool. USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes. It is STORED size * replication factor. %USED: The notional percentage of storage used per pool.
MAX AVAIL: An estimate of the notional amount of data that can be written to this pool. It is the amount of data that can be used before the first OSD becomes full. It considers the projected distribution of data across disks from the CRUSH map and uses the first OSD to fill up as the target. In the above example, MAX AVAIL is 153.85 without considering the replication factor, which is three by default. See the KnowledgeBase article ceph df MAX AVAIL is incorrect for simple replicated pool to calculate the value of MAX AVAIL . QUOTA OBJECTS: The number of quota objects. QUOTA BYTES: The number of bytes in the quota objects. USED COMPR: The amount of space allocated for compressed data. This includes the compressed data plus the allocation, replication, and erasure coding overhead. UNDER COMPR: The amount of data passed through compression and beneficial enough to be stored in a compressed form. Note The numbers in the POOLS section are notional. They are not inclusive of the number of replicas, snapshots or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output. Note The MAX AVAIL value is a complicated function of the replication or erasure code used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio . Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. 3.2.7. Understanding the OSD usage stats Use the ceph osd df command to view OSD utilization stats. ID: The name of the OSD. CLASS: The type of devices the OSD uses. WEIGHT: The weight of the OSD in the CRUSH map. REWEIGHT: The default reweight value. SIZE: The overall storage capacity of the OSD. USE: The OSD capacity. DATA: The amount of OSD capacity that is used by user data. OMAP: An estimate of the bluefs storage that is being used to store object map ( omap ) data (key value pairs stored in rocksdb ). META: The bluefs space allocated, or the value set in the bluestore_bluefs_min parameter, whichever is larger, for internal metadata, which is calculated as the total space allocated in bluefs minus the estimated omap data size. AVAIL: The amount of free space available on the OSD. %USE: The notional percentage of storage used by the OSD. VAR: The variation above or below average utilization. PGS: The number of placement groups in the OSD. MIN/MAX VAR: The minimum and maximum variation across all OSDs. Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. See CRUSH Weights in Red Hat Ceph Storage Storage Strategies Guide for details. 3.2.8. Checking the Red Hat Ceph Storage cluster status You can check the status of the Red Hat Ceph Storage cluster from the command-line interface. The status sub command or the -s argument will display the current status of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To check a storage cluster's status, execute the following: Or: In interactive mode, type status and press Enter : For example, a tiny Ceph cluster consisting of one monitor and two OSDs can print the following: 3.2.9.
Checking the Ceph Monitor status If the storage cluster has multiple Ceph Monitors, which is a requirement for a production Red Hat Ceph Storage cluster, then check the Ceph Monitor quorum status after starting the storage cluster, and before doing any reading or writing of data. A quorum must be present when multiple monitors are running. Check Ceph Monitor status periodically to ensure that they are running. If there is a problem with the Ceph Monitor, that prevents an agreement on the state of the storage cluster, the fault may prevent Ceph clients from reading and writing data. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To display the monitor map, execute the following: or To check the quorum status for the storage cluster, execute the following: Ceph will return the quorum status. A Red Hat Ceph Storage cluster consisting of three monitors may return the following: Example 3.2.10. Using the Ceph administration socket Use the administration socket to interact with a given daemon directly by using a UNIX socket file. For example, the socket enables you to: List the Ceph configuration at runtime Set configuration values at runtime directly without relying on Monitors. This is useful when Monitors are down . Dump historic operations Dump the operation priority queue state Dump operations without rebooting Dump performance counters In addition, using the socket is helpful when troubleshooting problems related to Monitors or OSDs. Important The administration socket is only available while a daemon is running. When you shut down the daemon properly, the administration socket is removed. However, if the daemon terminates unexpectedly, the administration socket might persist. Regardless, if the daemon is not running, a following error is returned when attempting to use the administration socket: Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To use the socket: Syntax Replace: TYPE with the type of the Ceph daemon ( mon , osd , mds ). ID with the daemon ID COMMAND with the command to run. Use help to list the available commands for a given daemon. Example To view a Monitor status of a Ceph Monitor named mon.0 : Alternatively, specify the Ceph daemon by using its socket file: To view the status of an Ceph OSD named osd.2 : To list all socket files for the Ceph processes: Additional Resources See the Red Hat Ceph Storage Troubleshooting Guide for more information. 3.2.11. Understanding the Ceph OSD status An OSD's status is either in the cluster, in , or out of the cluster, out . It is either up and running, up , or it is down and not running, or down . If an OSD is up , it may be either in the storage cluster, where data can be read and written, or it is out of the storage cluster. If it was in the cluster and recently moved out of the cluster, Ceph will migrate placement groups to other OSDs. If an OSD is out of the cluster, CRUSH will not assign placement groups to the OSD. If an OSD is down , it should also be out . Note If an OSD is down and in , there is a problem and the cluster will not be in a healthy state. If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . Don't panic. With respect to OSDs, you should expect that the cluster will NOT echo HEALTH OK in a few expected circumstances: You haven't started the cluster yet, it won't respond. 
You have just started or restarted the cluster and it's not ready yet, because the placement groups are getting created and the OSDs are in the process of peering. You just added or removed an OSD. You just have modified the cluster map. An important aspect of monitoring OSDs is to ensure that when the cluster is up and running that all OSDs that are in the cluster are up and running, too. To see if all OSDs are running, execute: or The result should tell you the map epoch, eNNNN , the total number of OSDs, x , how many, y , are up , and how many, z , are in : If the number of OSDs that are in the cluster is more than the number of OSDs that are up . Execute the following command to identify the ceph-osd daemons that aren't running: Example Tip The ability to search through a well-designed CRUSH hierarchy may help you troubleshoot the storage cluster by identifying the physical locations faster. If an OSD is down , connect to the node and start it. You can use Red Hat Storage Console to restart the OSD node, or you can use the command line. Example 3.2.12. Additional Resources Red Hat Ceph Storage Dashboard Guide . 3.3. Low-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of a Red Hat Ceph Storage cluster from a low-level perspective. Low-level monitoring typically involves ensuring that Ceph OSDs are peering properly. When peering faults occur, placement groups operate in a degraded state. This degraded state can be the result of many different things, such as hardware failure, a hung or crashed Ceph daemon, network latency, or a complete site outage. 3.3.1. Prerequisites A running Red Hat Ceph Storage cluster. 3.3.2. Monitoring Placement Group Sets When CRUSH assigns placement groups to OSDs, it looks at the number of replicas for the pool and assigns the placement group to OSDs such that each replica of the placement group gets assigned to a different OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1 , osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in the CRUSH map, so you will rarely see placement groups assigned to nearest neighbor OSDs in a large cluster. We refer to the set of OSDs that should contain the replicas of a particular placement group as the Acting Set . In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, don't panic. Common examples include: You added or removed an OSD. Then, CRUSH reassigned the placement group to other OSDs- thereby changing the composition of the Acting Set and spawning the migration of data with a "backfill" process. An OSD was down , was restarted and is now recovering . An OSD in the Acting Set is down or unable to service requests, and another OSD has temporarily assumed its duties. Ceph processes a client request using the Up Set , which is the set of OSDs that will actually handle the requests. In most cases, the Up Set and the Acting Set are virtually identical. When they are not, it may indicate that Ceph is migrating data, an OSD is recovering, or that there is a problem, that is, Ceph usually echoes a HEALTH WARN state with a "stuck stale" message in such scenarios. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
Procedure To retrieve a list of placement groups: To view which OSDs are in the Acting Set or in the Up Set for a given placement group: The result should tell you the osdmap epoch, eNNN , the placement group number, PG_NUM , the OSDs in the Up Set up[] , and the OSDs in the acting set, acting[] : Note If the Up Set and Acting Set do not match, this may be an indicator that the cluster rebalancing itself or of a potential problem with the cluster. 3.3.3. Ceph OSD peering Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group that is, the first OSD in the acting set, peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group. Assuming a pool with 3 replicas of the PG. 3.3.4. Placement Group States If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances: You have just created a pool and placement groups haven't peered yet. The placement groups are recovering. You have just added an OSD to or removed an OSD from the cluster. You have just modified the CRUSH map and the placement groups are migrating. There is inconsistent data in different replicas of a placement group. Ceph is scrubbing a placement group's replicas. Ceph doesn't have enough storage capacity to complete backfilling operations. If one of the foregoing circumstances causes Ceph to echo HEALTH WARN , don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running that all placement groups are active , and preferably in the clean state. To see the status of all placement groups, execute: The result should tell you the placement group map version, vNNNNNN , the total number of placement groups, x , and how many placement groups, y , are in a particular state such as active+clean : Note It is common for Ceph to report multiple states for placement groups. Snapshot Trimming PG States When snapshots exist, two additional PG states will be reported. snaptrim : The PGs are currently being trimmed snaptrim_wait : The PGs are waiting to be trimmed Example Output: In addition to the placement group states, Ceph will also echo back the amount of data used, aa , the amount of storage capacity remaining, bb , and the total storage capacity for the placement group. These numbers can be important in a few cases: You are reaching the near full ratio or full ratio . Your data isn't getting distributed across the cluster due to an error in the CRUSH configuration. Placement Group IDs Placement group IDs consist of the pool number, and not the pool name, followed by a period (.) and the placement group ID- a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools . The default pool names data , metadata and rbd correspond to pool numbers 0 , 1 and 2 respectively. 
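To confirm the pool number to name mapping on your own cluster before constructing placement group IDs, list the pools; the IDs and names in the output depend on your deployment.

ceph osd lspools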
A fully qualified placement group ID has the following form: Example output: To retrieve a list of placement groups: To format the output in JSON format and save it to a file: To query a particular placement group: Example output in JSON format: Additional Resources See the chapter Object Storage Daemon (OSD) configuration options in the Red Hat Ceph Storage 4 Configuration Guide for more details on the snapshot trimming settings. 3.3.5. Placement Group creating state When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group's Acting Set will peer. Once peering is complete, the placement group status should be active+clean , which means a Ceph client can begin writing to the placement group. 3.3.6. Placement group peering state When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents. Authoritative History Ceph will NOT acknowledge a write operation to a client, until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation. With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group. A complete, and fully ordered set of operations that, if performed, would bring an OSD's copy of a placement group up to date. 3.3.7. Placement group active state Once Ceph completes the peering process, a placement group may become active . The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations. 3.3.8. Placement Group clean state When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times. 3.3.9. Placement Group degraded state When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully. The reason a placement group can be active+degraded is that an OSD may be active even though it doesn't hold all of the objects yet. If an OSD goes down , Ceph marks each placement group assigned to the OSD as degraded . The OSDs must peer again when the OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active . If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD. The time between being marked down and being marked out is controlled by mon_osd_down_out_interval , which is set to 600 seconds by default. 
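To check or adjust this interval at runtime, a sketch such as the following can be used on releases that support the ceph config commands; the 900-second value is only an illustration.

ceph config get mon mon_osd_down_out_interval
ceph config set mon mon_osd_down_out_interval 900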
A placement group can also be degraded , because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group. Let's say there are 9 OSDs in a three way replica pool. If OSD number 9 goes down, the PGs assigned to OSD 9 go in a degraded state. If OSD 9 doesn't recover, it goes out of the cluster and the cluster rebalances. In that scenario, the PGs are degraded and then recover to an active state. 3.3.10. Placement Group recovering state Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down , its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up , the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state. Recovery isn't always trivial, because a hardware failure might cause a cascading failure of multiple OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the cluster. Each one of the OSDs must recover once the fault is resolved. Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery threads setting limits the number of threads for the recovery process, by default one thread. The osd recovery thread timeout sets a thread timeout, because multiple OSDs may fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests an OSD will entertain simultaneously to prevent the OSD from failing to serve . The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion. 3.3.11. Back fill state When a new OSD joins the cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new OSD. Backfilling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready. During the backfill operations, you may see one of several states: * backfill_wait indicates that a backfill operation is pending, but isn't underway yet * backfill indicates that a backfill operation is underway * backfill_too_full indicates that a backfill operation was requested, but couldn't be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it may be considered incomplete . Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to an OSD, especially a new OSD. By default, osd_max_backfills sets the maximum number of concurrent backfills to or from an OSD to 10. The osd backfill full ratio enables an OSD to refuse a backfill request if the OSD is approaching its full ratio, by default 85%. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request, by default after 10 seconds. 
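To see which values a running OSD is actually using for these backfill settings, you can query its administration socket from the node that hosts the daemon; osd.0 is only an example daemon name.

ceph daemon osd.0 config show | grep backfill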
OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals, by default 64 and 512. For some workloads, it is beneficial to avoid regular recovery entirely and use backfill instead. Since backfilling occurs in the background, this allows I/O to proceed on the objects in the OSD. To force backfill rather than recovery, set osd_min_pg_log_entries to 1 , and set osd_max_pg_log_entries to 2 . Contact your Red Hat Support account team for details on when this situation is appropriate for your workload. 3.3.12. Changing the priority of recovery or backfill operations You might encounter a situation where some placement groups (PGs) require recovery and/or backfill, and some of those placement groups contain more important data than do others. Use the pg force-recovery or pg force-backfill command to ensure that the PGs with the higher-priority data undergo recovery or backfill first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Issue the pg force-recovery or pg force-backfill command and specify the order of priority for the PGs with the higher-priority data: Syntax Example This command causes Red Hat Ceph Storage to perform recovery or backfill on specified placement groups (PGs) first, before processing other placement groups. Issuing the command does not interrupt backfill or recovery operations that are currently executing. After the currently running operations have finished, recovery or backfill takes place as soon as possible for the specified PGs. 3.3.13. Changing or canceling a recovery or backfill operation on specified placement groups If you cancel a high-priority force-recovery or force-backfill operation on certain placement groups (PGs) in a storage cluster, operations for those PGs revert to the default recovery or backfill settings. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To change or cancel a recovery or backfill operation on specified placement groups: Syntax Example This cancels the force flag and processes the PGs in the default order. After recovery or backfill operations for the specified PGs have completed, processing order reverts to the default. Additional Resources For more information about the order of priority of recovery and backfill operations in RADOS, see Priority of placement group recovery and backfill in RADOS . 3.3.14. Forcing high-priority recovery or backfill operations for pools If all of the placement groups in a pool require high-priority recovery or backfill, use the force-recovery or force-backfill options to initiate the operation. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To force the high-priority recovery or backfill on all placement groups in a specified pool: Syntax Example Note Use the force-recovery and force-backfill commands with caution. Changing the priority of these operations might break the ordering of Ceph's internal priority computations. 3.3.15. Canceling high-priority recovery or backfill operations for pools If you cancel a high-priority force-recovery or force-backfill operation on all placement groups in a pool, operations for the PGs in that pool revert to the default recovery or backfill settings. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To cancel a high-priority recovery or backfill operation on all placement groups in a specified pool: Syntax Example 3.3.16. 
Rearranging the priority of recovery or backfill operations for pools If you have multiple pools that currently use the same underlying OSDs and some of the pools contain high-priority data, you can rearrange the order in which the operations execute. Use the recovery_priority option to assign a higher priority value to the pools with the higher-priority data. Those pools will execute before pools with lower priority values, or pools that are set to default priority. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To rearrange the recovery/backfill priority for the pools: Syntax Example VALUE sets the order of priority. For example, if you have 10 pools, the pool with a priority value of 10 gets processed first, followed by the pool with priority 9, and so on. If only some pools have high priority, you can set priority values for just those pools. The pools without set priority values are processed in the default order. 3.3.17. Priority of placement group recovery in RADOS This section describes the relative priority values for the recovery and backfilling of placement groups (PGs) in RADOS. Higher values are processed first. Inactive PGs receive higher priority values than active or degraded PGs. Operation Value Description OSD_RECOVERY_PRIORITY_MIN 0 Minimum recovery value OSD_BACKFILL_PRIORITY_BASE 100 Base backfill priority for MBackfillReserve OSD_BACKFILL_DEGRADED_PRIORITY_BASE 140 Base backfill priority for MBackfillReserve (degraded PG) OSD_RECOVERY_PRIORITY_BASE 180 Base recovery priority for MBackfillReserve OSD_BACKFILL_INACTIVE_PRIORITY_BASE 220 Base backfill priority for MBackfillReserve (inactive PG) OSD_RECOVERY_INACTIVE_PRIORITY_BASE 220 Base recovery priority for MRecoveryReserve (inactive PG) OSD_RECOVERY_PRIORITY_MAX 253 Max manually/automatically set recovery priority for MBackfillReserve OSD_BACKFILL_PRIORITY_FORCED 254 Backfill priority for MBackfillReserve, when forced manually OSD_RECOVERY_PRIORITY_FORCED 255 Recovery priority for MRecoveryReserve, when forced manually OSD_DELETE_PRIORITY_NORMAL 179 Priority for PG deletion when the OSD is not fullish OSD_DELETE_PRIORITY_FULLISH 219 Priority for PG deletion when the OSD is approaching full OSD_DELETE_PRIORITY_FULL 255 Priority for deletion when the OSD is full 3.3.18. Placement Group remapped state When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests. So it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set. 3.3.19. Placement Group stale state While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they aren't reporting statistics in a timely manner. For example, a temporary network fault. By default, OSD daemons report their placement group, up thru, boot and failure statistics every half second, that is, 0.5 , which is more frequent than the heartbeat thresholds. If the Primary OSD of a placement group's acting set fails to report to the monitor or if other OSDs have reported the primary OSD down , the monitors will mark the placement group stale . When you start the storage cluster, it is common to see the stale state until the peering process completes. 
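To list the placement groups that are currently flagged as stale, rather than scanning the full cluster status output, run the following; it returns nothing when no placement groups are stale.

ceph pg dump_stuck stale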
After the storage cluster has been running for awhile, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor. 3.3.20. Placement Group misplaced state There are some temporary backfilling scenarios where a PG gets mapped temporarily to an OSD. When that temporary situation should no longer be the case, the PGs might still reside in the temporary location and not in the proper location. In which case, they are said to be misplaced . That's because the correct number of extra copies actually exist, but one or more copies is in the wrong place. For example, there are 3 OSDs: 0,1,2 and all PGs map to some permutation of those three. If you add another OSD (OSD 3), some PGs will now map to OSD 3 instead of one of the others. However, until OSD 3 is backfilled, the PG will have a temporary mapping allowing it to continue to serve I/O from the old mapping. During that time, the PG is misplaced , because it has a temporary mapping, but not degraded , since there are 3 copies. Example [0,1,2] is a temporary mapping, so the up set is not equal to the acting set and the PG is misplaced but not degraded since [0,1,2] is still three copies. Example OSD 3 is now backfilled and the temporary mapping is removed, not degraded and not misplaced. 3.3.21. Placement Group incomplete state A PG goes into a incomplete state when there is incomplete content and peering fails, that is, when there are no complete OSDs which are current enough to perform recovery. Lets say OSD 1, 2, and 3 are the acting OSD set and it switches to OSD 1, 4, and 3, then osd.1 will request a temporary acting set of OSD 1, 2, and 3 while backfilling 4. During this time, if OSD 1, 2, and 3 all go down, osd.4 will be the only one left which might not have fully backfilled all the data. At this time, the PG will go incomplete indicating that there are no complete OSDs which are current enough to perform recovery. Alternately, if osd.4 is not involved and the acting set is simply OSD 1, 2, and 3 when OSD 1, 2, and 3 go down, the PG would likely go stale indicating that the mons have not heard anything on that PG since the acting set changed. The reason being there are no OSDs left to notify the new OSDs. 3.3.22. Identifying stuck Placement Groups As previously noted, a placement group isn't necessarily problematic just because its state isn't active+clean . Generally, Ceph's ability to self repair may not be working when placement groups get stuck. The stuck states include: Unclean : Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Inactive : Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up . Stale : Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while, and can be configured with the mon osd report timeout setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To identify stuck placement groups, execute the following: 3.3.23. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
3.3.23. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To find the object location, all you need is the object name and the pool name: | [
"ceph ceph> health ceph> status ceph> quorum_status ceph> mon_status",
"docker exec -it ceph-mon- MONITOR_NAME /bin/bash",
"exec -it ceph-mon- MONITOR_NAME /bin/bash",
"podman exec -it ceph-mon-mon01 /bin/bash",
"ceph health",
"ceph -c /path/to/conf -k /path/to/keyring health",
"ceph -w",
"cluster b370a29d-9287-4ca3-ab57-3d824f65e339 health HEALTH_OK monmap e1: 1 mons at {ceph1=10.0.0.8:6789/0}, election epoch 2, quorum 0 ceph1 osdmap e63: 2 osds: 2 up, 2 in pgmap v41338: 952 pgs, 20 pools, 17130 MB data, 2199 objects 115 GB used, 167 GB / 297 GB avail 952 active+clean 2014-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2014-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2014-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2014-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2014-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2014-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2014-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2014-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2014-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2014-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2014-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail",
"ceph df RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 90 GiB 84 GiB 100 MiB 6.1 GiB 6.78 TOTAL 90 GiB 84 GiB 100 MiB 6.1 GiB 6.78 POOLS: POOL ID STORED OBJECTS USED %USED MAX AVAIL .rgw.root 1 1.3 KiB 4 768 KiB 0 26 GiB default.rgw.control 2 0 B 8 0 B 0 26 GiB default.rgw.meta 3 2.5 KiB 12 2.1 MiB 0 26 GiB default.rgw.log 4 3.5 KiB 208 6.2 MiB 0 26 GiB default.rgw.buckets.index 5 2.4 KiB 33 2.4 KiB 0 26 GiB default.rgw.buckets.data 6 9.6 KiB 15 1.7 MiB 0 26 GiB testpool 10 231 B 5 384 KiB 0 40 GiB",
"ceph df detail RAW STORAGE: CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 90 GiB 84 GiB 100 MiB 6.1 GiB 6.78 TOTAL 90 GiB 84 GiB 100 MiB 6.1 GiB 6.78 POOLS: POOL ID STORED OBJECTS USED %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR .rgw.root 1 1.3 KiB 4 768 KiB 0 26 GiB N/A N/A 4 0 B 0 B default.rgw.control 2 0 B 8 0 B 0 26 GiB N/A N/A 8 0 B 0 B default.rgw.meta 3 2.5 KiB 12 2.1 MiB 0 26 GiB N/A N/A 12 0 B 0 B default.rgw.log 4 3.5 KiB 208 6.2 MiB 0 26 GiB N/A N/A 208 0 B 0 B default.rgw.buckets.index 5 2.4 KiB 33 2.4 KiB 0 26 GiB N/A N/A 33 0 B 0 B default.rgw.buckets.data 6 9.6 KiB 15 1.7 MiB 0 26 GiB N/A N/A 15 0 B 0 B testpool 10 231 B 5 384 KiB 0 40 GiB N/A N/A 5 0 B 0 B",
"ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91",
"ceph status",
"ceph -s",
"ceph> status",
"cluster b370a29d-9287-4ca3-ab57-3d824f65e339 health HEALTH_OK monmap e1: 1 mons at {ceph1=10.0.0.8:6789/0}, election epoch 2, quorum 0 ceph1 osdmap e63: 2 osds: 2 up, 2 in pgmap v41332: 952 pgs, 20 pools, 17130 MB data, 2199 objects 115 GB used, 167 GB / 297 GB avail 1 active+clean+scrubbing+deep 951 active+clean",
"ceph mon stat",
"ceph mon dump",
"ceph quorum_status -f json-pretty",
"{ \"election_epoch\": 10, \"quorum\": [ 0, 1, 2], \"monmap\": { \"epoch\": 1, \"fsid\": \"444b489c-4f16-4b75-83f0-cb8097468898\", \"modified\": \"2011-12-12 13:28:27.505520\", \"created\": \"2011-12-12 13:28:27.505520\", \"mons\": [ { \"rank\": 0, \"name\": \"a\", \"addr\": \"127.0.0.1:6789\\/0\"}, { \"rank\": 1, \"name\": \"b\", \"addr\": \"127.0.0.1:6790\\/0\"}, { \"rank\": 2, \"name\": \"c\", \"addr\": \"127.0.0.1:6791\\/0\"} ] } }",
"Error 111: Connection Refused",
"ceph daemon TYPE . ID COMMAND",
"ceph daemon mon.0 mon_status",
"ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND",
"ceph daemon /var/run/ceph/ceph-osd.2.asok status",
"ls /var/run/ceph",
"ceph osd stat",
"ceph osd dump",
"eNNNN: x osds: y up, z in",
"ceph osd tree",
"id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1",
"systemctl start ceph-osd@ OSD_ID",
"ceph pg dump",
"ceph pg map PG_NUM",
"ceph osdmap eNNN pg PG_NUM -> up [0,1,2] acting [0,1,2]",
"ceph pg stat",
"vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail",
"244 active+clean+snaptrim_wait 32 active+clean+snaptrim",
"POOL_NUM . PG_ID",
"0.1f",
"ceph pg dump",
"ceph pg dump -o FILE_NAME --format=json",
"ceph pg POOL_NUM . PG_ID query",
"{ \"state\": \"active+clean\", \"up\": [ 1, 0 ], \"acting\": [ 1, 0 ], \"info\": { \"pgid\": \"1.e\", \"last_update\": \"4'1\", \"last_complete\": \"4'1\", \"log_tail\": \"0'0\", \"last_backfill\": \"MAX\", \"purged_snaps\": \"[]\", \"history\": { \"epoch_created\": 1, \"last_epoch_started\": 537, \"last_epoch_clean\": 537, \"last_epoch_split\": 534, \"same_up_since\": 536, \"same_interval_since\": 536, \"same_primary_since\": 536, \"last_scrub\": \"4'1\", \"last_scrub_stamp\": \"2013-01-25 10:12:23.828174\" }, \"stats\": { \"version\": \"4'1\", \"reported\": \"536'782\", \"state\": \"active+clean\", \"last_fresh\": \"2013-01-25 10:12:23.828271\", \"last_change\": \"2013-01-25 10:12:23.828271\", \"last_active\": \"2013-01-25 10:12:23.828271\", \"last_clean\": \"2013-01-25 10:12:23.828271\", \"last_unstale\": \"2013-01-25 10:12:23.828271\", \"mapping_epoch\": 535, \"log_start\": \"0'0\", \"ondisk_log_start\": \"0'0\", \"created\": 1, \"last_epoch_clean\": 1, \"parent\": \"0.0\", \"parent_split_bits\": 0, \"last_scrub\": \"4'1\", \"last_scrub_stamp\": \"2013-01-25 10:12:23.828174\", \"log_size\": 128, \"ondisk_log_size\": 128, \"stat_sum\": { \"num_bytes\": 205, \"num_objects\": 1, \"num_object_clones\": 0, \"num_object_copies\": 0, \"num_objects_missing_on_primary\": 0, \"num_objects_degraded\": 0, \"num_objects_unfound\": 0, \"num_read\": 1, \"num_read_kb\": 0, \"num_write\": 3, \"num_write_kb\": 1 }, \"stat_cat_sum\": { }, \"up\": [ 1, 0 ], \"acting\": [ 1, 0 ] }, \"empty\": 0, \"dne\": 0, \"incomplete\": 0 }, \"recovery_state\": [ { \"name\": \"Started\\/Primary\\/Active\", \"enter_time\": \"2013-01-23 09:35:37.594691\", \"might_have_unfound\": [ ], \"scrub\": { \"scrub_epoch_start\": \"536\", \"scrub_active\": 0, \"scrub_block_writes\": 0, \"finalizing_scrub\": 0, \"scrub_waiting_on\": 0, \"scrub_waiting_on_whom\": [ ] } }, { \"name\": \"Started\", \"enter_time\": \"2013-01-23 09:35:31.581160\" } ] }",
"ceph pg force-recovery PG1 [ PG2 ] [ PG3 ...] ceph pg force-backfill PG1 [ PG2 ] [ PG3 ...]",
"ceph pg force-recovery group1 group2 ceph pg force-backfill group1 group2",
"ceph pg cancel-force-recovery PG1 [ PG2 ] [ PG3 ...] ceph pg cancel-force-backfill PG1 [ PG2 ] [ PG3 ...]",
"ceph pg cancel-force-recovery group1 group2 ceph pg cancel-force-backfill group1 group2",
"ceph osd pool force-recovery POOL_NAME ceph osd pool force-backfill POOL_NAME",
"ceph osd pool force-recovery pool1 ceph osd pool force-backfill pool1",
"ceph osd pool cancel-force-recovery POOL_NAME ceph osd pool cancel-force-backfill POOL_NAME",
"ceph osd pool cancel-force-recovery pool1 ceph osd pool cancel-force-backfill pool1",
"ceph osd pool set POOL_NAME recovery_priority VALUE",
"ceph osd pool set pool1 recovery_priority 10",
"pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]",
"pg 1.5: up=acting: [0,3,1]",
"ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}",
"ceph osd map POOL_NAME OBJECT_NAME"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/4/html/administration_guide/monitoring-a-ceph-storage-cluster |
3.2. Unconfined Processes | 3.2. Unconfined Processes Unconfined processes run in unconfined domains, for example, unconfined services executed by init end up running in the unconfined_service_t domain, unconfined services executed by kernel end up running in the kernel_t domain, and unconfined services executed by unconfined Linux users end up running in the unconfined_t domain. For unconfined processes, SELinux policy rules are applied, but policy rules exist that allow processes running in unconfined domains almost all access. Processes running in unconfined domains fall back to using DAC rules exclusively. If an unconfined process is compromised, SELinux does not prevent an attacker from gaining access to system resources and data, but of course, DAC rules are still used. SELinux is a security enhancement on top of DAC rules - it does not replace them. To ensure that SELinux is enabled and the system is prepared to perform the following example, complete the Procedure 3.1, "How to Verify SELinux Status" described in Section 3.1, "Confined Processes" . The following example demonstrates how the Apache HTTP Server ( httpd ) can access data intended for use by Samba, when running unconfined. Note that in Red Hat Enterprise Linux, the httpd process runs in the confined httpd_t domain by default. This is an example, and should not be used in production. It assumes that the httpd , wget , dbus and audit packages are installed, that the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 3.3. An Example of Unconfined Process The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage utility, which is discussed later. As the root user, enter the following command to change the type to a type used by Samba: View the changes: Enter the following command to confirm that the httpd process is not running: If the output differs, enter the following command as root to stop the httpd process: To make the httpd process run unconfined, enter the following command as root to change the type of the /usr/sbin/httpd file, to a type that does not transition to a confined domain: Confirm that /usr/sbin/httpd is labeled with the bin_t type: As root, start the httpd process and confirm, that it started successfully: Enter the following command to view httpd running in the unconfined_service_t domain: Change into a directory where your Linux user has write access to, and enter the following command. Unless there are changes to the default configuration, this command succeeds: Although the httpd process does not have access to files labeled with the samba_share_t type, httpd is running in the unconfined unconfined_service_t domain, and falls back to using DAC rules, and as such, the wget command succeeds. Had httpd been running in the confined httpd_t domain, the wget command would have failed. The restorecon utility restores the default SELinux context for files. As root, enter the following command to restore the default SELinux context for /usr/sbin/httpd : Confirm that /usr/sbin/httpd is labeled with the httpd_exec_t type: As root, enter the following command to restart httpd . 
After restarting, confirm that httpd is running in the confined httpd_t domain: As root, remove testfile : If you do not require httpd to be running, as root, enter the following command to stop httpd : The examples in these sections demonstrate how data can be protected from a compromised confined-process (protected by SELinux), as well as how data is more accessible to an attacker from a compromised unconfined-process (not protected by SELinux). | [
"~]# chcon -t samba_share_t /var/www/html/testfile",
"~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile",
"~]USD systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: inactive (dead)",
"~]# systemctl stop httpd.service",
"~]# chcon -t bin_t /usr/sbin/httpd",
"~]USD ls -Z /usr/sbin/httpd -rwxr-xr-x. root root system_u:object_r:bin_t:s0 /usr/sbin/httpd",
"~]# systemctl start httpd.service",
"~]# systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Thu 2013-08-15 11:17:01 CEST; 5s ago",
"~]USD ps -eZ | grep httpd system_u:system_r:unconfined_service_t:s0 11884 ? 00:00:00 httpd system_u:system_r:unconfined_service_t:s0 11885 ? 00:00:00 httpd system_u:system_r:unconfined_service_t:s0 11886 ? 00:00:00 httpd system_u:system_r:unconfined_service_t:s0 11887 ? 00:00:00 httpd system_u:system_r:unconfined_service_t:s0 11888 ? 00:00:00 httpd system_u:system_r:unconfined_service_t:s0 11889 ? 00:00:00 httpd",
"~]USD wget http://localhost/testfile --2009-05-07 01:41:10-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ]--.-K/s in 0s 2009-05-07 01:41:10 (0.00 B/s) - `testfile' saved [0/0]",
"~]# restorecon -v /usr/sbin/httpd restorecon reset /usr/sbin/httpd context system_u:object_r:unconfined_exec_t:s0->system_u:object_r:httpd_exec_t:s0",
"~]USD ls -Z /usr/sbin/httpd -rwxr-xr-x root root system_u:object_r:httpd_exec_t:s0 /usr/sbin/httpd",
"~]# systemctl restart httpd.service",
"~]USD ps -eZ | grep httpd system_u:system_r:httpd_t:s0 8883 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8884 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8885 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8886 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8887 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8888 ? 00:00:00 httpd system_u:system_r:httpd_t:s0 8889 ? 00:00:00 httpd",
"~]# rm -i /var/www/html/testfile rm: remove regular empty file `/var/www/html/testfile'? y",
"~]# systemctl stop httpd.service"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-Security-Enhanced_Linux-Targeted_Policy-Unconfined_Processes |
Pipelines CLI (tkn) reference | Pipelines CLI (tkn) reference Red Hat OpenShift Pipelines 1.15 The tkn CLI reference for OpenShift Pipelines Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.15/html/pipelines_cli_tkn_reference/index |
Chapter 3. Logging information for Red Hat Quay | Chapter 3. Logging information for Red Hat Quay Obtaining log information can be beneficial in various ways for managing, monitoring, and troubleshooting applications running in containers or pods. Some of the reasons why obtaining log information is valuable include the following: Debugging and Troubleshooting : Logs provide insights into what's happening inside the application, allowing developers and system administrators to identify and resolve issues. By analyzing log messages, one can identify errors, exceptions, warnings, or unexpected behavior that might occur during the application's execution. Performance Monitoring : Monitoring logs helps to track the performance of the application and its components. Monitoring metrics like response times, request rates, and resource utilization can help in optimizing and scaling the application to meet the demand. Security Analysis : Logs can be essential in auditing and detecting potential security breaches. By analyzing logs, suspicious activities, unauthorized access attempts, or any abnormal behavior can be identified, helping in detecting and responding to security threats. Tracking User Behavior : In some cases, logs can be used to track user activities and behavior. This is particularly important for applications that handle sensitive data, where tracking user actions can be useful for auditing and compliance purposes. Capacity Planning : Log data can be used to understand resource utilization patterns, which can aid in capacity planning. By analyzing logs, one can identify peak usage periods, anticipate resource needs, and optimize infrastructure accordingly. Error Analysis : When errors occur, logs can provide valuable context about what happened leading up to the error. This can help in understanding the root cause of the issue and facilitating the debugging process. Verification of Deployment : Logging during the deployment process can help verify if the application is starting correctly and if all components are functioning as expected. Continuous Integration/Continuous Deployment (CI/CD) : In CI/CD pipelines, logging is essential to capture build and deployment statuses, allowing teams to monitor the success or failure of each stage. 3.1. Obtaining log information for Red Hat Quay Log information can be obtained for all types of Red Hat Quay deployments, including geo-replication deployments, standalone deployments, and Operator deployments. Log information can also be obtained for mirrored repositories. It can help you troubleshoot authentication and authorization issues, and object storage issues. After you have obtained the necessary log information, you can search the Red Hat Knowledgebase for a solution, or file a support ticket with the Red Hat Support team. Use the following procedure to obtain logs for your Red Hat Quay deployment. Procedure If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following command to view the logs: USD oc logs <quay_pod_name> If you are on a standalone Red Hat Quay deployment, enter the following command: USD podman logs <quay_container_name> Example output ... gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'} ...
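For an Operator-based deployment, a minimal sketch of locating the pod and capturing its logs; the namespace and pod name below are hypothetical and will differ in your environment:
oc get pods -n quay-enterprise                                        # find the Quay application pod name
oc logs quay-app-7d9f6b5c8-x2x9z -n quay-enterprise                   # view the pod logs
oc logs quay-app-7d9f6b5c8-x2x9z -n quay-enterprise > quay-app.log    # save the logs for a support ticket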
3.2. Examining verbose logs Red Hat Quay does not have verbose logs; however, with the following procedures, you can obtain a detailed status check of your database pod or container. Note Additional debugging information can be returned if you have deployed Red Hat Quay in one of the following ways: You have deployed Red Hat Quay by passing in the DEBUGLOG=true variable. You have deployed Red Hat Quay with LDAP authentication enabled by passing in the DEBUGLOG=true and USERS_DEBUG=1 variables. You have configured Red Hat Quay on OpenShift Container Platform by updating the QuayRegistry resource to include DEBUGLOG=true . For more information, see "Running Red Hat Quay in debug mode". Procedure Enter the following commands to examine verbose database logs. If you are using the Red Hat Quay Operator on OpenShift Container Platform, enter the following commands: USD oc logs <quay_pod_name> --previous USD oc logs <quay_pod_name> --previous -c <container_name> USD oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host If you are using a standalone deployment of Red Hat Quay, enter the following commands: USD podman logs <quay_container_id> --previous USD podman logs <quay_container_id> --previous -c <container_name> USD podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host | [
"oc logs <quay_pod_name>",
"podman logs <quay_container_name>",
"gunicorn-web stdout | 2023-01-20 15:41:52,071 [205] [DEBUG] [app] Starting request: urn:request:0d88de25-03b0-4cf9-b8bc-87f1ac099429 (/oauth2/azure/callback) {'X-Forwarded-For': '174.91.79.124'}",
"oc logs <quay_pod_name> --previous",
"oc logs <quay_pod_name> --previous -c <container_name>",
"oc cp <quay_pod_name>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host",
"podman logs <quay_container_id> --previous",
"podman logs <quay_container_id> --previous -c <container_name>",
"podman cp <quay_container_id>:/var/lib/pgsql/data/userdata/log/* /path/to/desired_directory_on_host"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/troubleshooting_red_hat_quay/obtaining-quay-logs |
Chapter 5. Triggers | Chapter 5. Triggers 5.1. Triggers overview Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. If you are using a Knative broker for Apache Kafka, you can configure the delivery order of events from triggers to event sinks. See Configuring event delivery ordering for triggers . 5.1.1. Configuring event delivery ordering for triggers If you are using a Kafka broker, you can configure the delivery order of events from triggers to event sinks. Prerequisites The OpenShift Serverless Operator, Knative Eventing, and Knative Kafka are installed on your OpenShift Container Platform cluster. Kafka broker is enabled for use on your cluster, and you have created a Kafka broker. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have installed the OpenShift ( oc ) CLI. Procedure Create or modify a Trigger object and set the kafka.eventing.knative.dev/delivery.order annotation: apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered # ... The supported consumer delivery guarantees are: unordered An unordered consumer is a non-blocking consumer that delivers messages unordered, while preserving proper offset management. ordered An ordered consumer is a per-partition blocking consumer that waits for a successful response from the CloudEvent subscriber before it delivers the message of the partition. The default ordering guarantee is unordered . Apply the Trigger object: USD oc apply -f <filename> 5.1.2. steps Configure event delivery parameters that are applied in cases where an event fails to be delivered to an event sink. 5.2. Creating triggers Brokers can be used in combination with triggers to deliver events from an event source to an event sink. Events are sent from an event source to a broker as an HTTP POST request. After events have entered the broker, they can be filtered by CloudEvent attributes using triggers, and sent as an HTTP POST request to an event sink. 5.2.1. Creating a trigger by using the Administrator perspective Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console and are in the Administrator perspective. You have cluster administrator permissions on OpenShift Container Platform, or you have cluster or dedicated administrator permissions on Red Hat OpenShift Service on AWS or OpenShift Dedicated. You have created a Knative broker. You have created a Knative service to use as a subscriber. Procedure In the Administrator perspective of the OpenShift Container Platform web console, navigate to Serverless Eventing . In the Broker tab, select the Options menu for the broker that you want to add a trigger to. Click Add Trigger in the list. In the Add Trigger dialogue box, select a Subscriber for the trigger. 
The subscriber is the Knative service that will receive events from the broker. Click Add . 5.2.2. Creating a trigger by using the Developer perspective Using the OpenShift Container Platform web console provides a streamlined and intuitive user interface to create a trigger. After Knative Eventing is installed on your cluster and you have created a broker, you can create a trigger by using the web console. Prerequisites The OpenShift Serverless Operator, Knative Serving, and Knative Eventing are installed on your OpenShift Container Platform cluster. You have logged in to the web console. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You have created a broker and a Knative service or other event sink to connect to the trigger. Procedure In the Developer perspective, navigate to the Topology page. Hover over the broker that you want to create a trigger for, and drag the arrow. The Add Trigger option is displayed. Click Add Trigger . Select your sink in the Subscriber list. Click Add . Verification After the subscription has been created, you can view it in the Topology page, where it is represented as a line that connects the broker to the event sink. Deleting a trigger In the Developer perspective, navigate to the Topology page. Click on the trigger that you want to delete. In the Actions context menu, select Delete Trigger . 5.2.3. Creating a trigger by using the Knative CLI You can use the kn trigger create command to create a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Create a trigger: USD kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name> Alternatively, you can create a trigger and simultaneously create the default broker using broker injection: USD kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name> By default, triggers forward all events sent to a broker to sinks that are subscribed to that broker. Using the --filter attribute for triggers allows you to filter events from a broker, so that subscribers will only receive a subset of events based on your defined criteria. 5.3. List triggers from the command line Using the Knative ( kn ) CLI to list triggers provides a streamlined and intuitive user interface. 5.3.1. Listing triggers by using the Knative CLI You can use the kn trigger list command to list existing triggers in your cluster. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. Procedure Print a list of available triggers: USD kn trigger list Example output NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True Optional: Print a list of triggers in JSON format: USD kn trigger list -o json 5.4. Describe triggers from the command line Using the Knative ( kn ) CLI to describe triggers provides a streamlined and intuitive user interface. 5.4.1. 
Describing a trigger by using the Knative CLI You can use the kn trigger describe command to print information about existing triggers in your cluster by using the Knative CLI. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a trigger. Procedure Enter the command: USD kn trigger describe <trigger_name> Example output Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m 5.5. Connecting a trigger to a sink You can connect a trigger to a sink, so that events from a broker are filtered before they are sent to the sink. A sink that is connected to a trigger is configured as a subscriber in the Trigger object's resource spec. Example of a Trigger object connected to an Apache Kafka sink apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: ... subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2 1 The name of the trigger being connected to the sink. 2 The name of a KafkaSink object. 5.6. Filtering triggers from the command line Using the Knative ( kn ) CLI to filter events by using triggers provides a streamlined and intuitive user interface. You can use the kn trigger create command, along with the appropriate flags, to filter events by using triggers. 5.6.1. Filtering events with triggers by using the Knative CLI In the following trigger example, only events with the attribute type: dev.knative.samples.helloworld are sent to the event sink: USD kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name> You can also filter events by using multiple attributes. The following example shows how to filter events using the type, source, and extension attributes: USD kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> \ --filter type=dev.knative.samples.helloworld \ --filter source=dev.knative.samples/helloworldsource \ --filter myextension=my-extension-value 5.7. Updating triggers from the command line Using the Knative ( kn ) CLI to update triggers provides a streamlined and intuitive user interface. 5.7.1. Updating a trigger by using the Knative CLI You can use the kn trigger update command with certain flags to update attributes for a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Update a trigger: USD kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags] You can update a trigger to filter exact event attributes that match incoming events. For example, using the type attribute: USD kn trigger update <trigger_name> --filter type=knative.dev.event You can remove a filter attribute from a trigger. 
For example, you can remove the filter attribute with key type : USD kn trigger update <trigger_name> --filter type- You can use the --sink parameter to change the event sink of a trigger: USD kn trigger update <trigger_name> --sink ksvc:my-event-sink 5.8. Deleting triggers from the command line Using the Knative ( kn ) CLI to delete a trigger provides a streamlined and intuitive user interface. 5.8.1. Deleting a trigger by using the Knative CLI You can use the kn trigger delete command to delete a trigger. Prerequisites The OpenShift Serverless Operator and Knative Eventing are installed on your OpenShift Container Platform cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. Procedure Delete a trigger: USD kn trigger delete <trigger_name> Verification List existing triggers: USD kn trigger list Verify that the trigger no longer exists: Example output No triggers found. | [
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> annotations: kafka.eventing.knative.dev/delivery.order: ordered",
"oc apply -f <filename>",
"kn trigger create <trigger_name> --broker <broker_name> --filter <key=value> --sink <sink_name>",
"kn trigger create <trigger_name> --inject-broker --filter <key=value> --sink <sink_name>",
"kn trigger list",
"NAME BROKER SINK AGE CONDITIONS READY REASON email default ksvc:edisplay 4s 5 OK / 5 True ping default ksvc:edisplay 32s 5 OK / 5 True",
"kn trigger list -o json",
"kn trigger describe <trigger_name>",
"Name: ping Namespace: default Labels: eventing.knative.dev/broker=default Annotations: eventing.knative.dev/creator=kube:admin, eventing.knative.dev/lastModifier=kube:admin Age: 2m Broker: default Filter: type: dev.knative.event Sink: Name: edisplay Namespace: default Resource: Service (serving.knative.dev/v1) Conditions: OK TYPE AGE REASON ++ Ready 2m ++ BrokerReady 2m ++ DependencyReady 2m ++ Subscribed 2m ++ SubscriberResolved 2m",
"apiVersion: eventing.knative.dev/v1 kind: Trigger metadata: name: <trigger_name> 1 spec: subscriber: ref: apiVersion: eventing.knative.dev/v1alpha1 kind: KafkaSink name: <kafka_sink_name> 2",
"kn trigger create <trigger_name> --broker <broker_name> --filter type=dev.knative.samples.helloworld --sink ksvc:<service_name>",
"kn trigger create <trigger_name> --broker <broker_name> --sink ksvc:<service_name> --filter type=dev.knative.samples.helloworld --filter source=dev.knative.samples/helloworldsource --filter myextension=my-extension-value",
"kn trigger update <trigger_name> --filter <key=value> --sink <sink_name> [flags]",
"kn trigger update <trigger_name> --filter type=knative.dev.event",
"kn trigger update <trigger_name> --filter type-",
"kn trigger update <trigger_name> --sink ksvc:my-event-sink",
"kn trigger delete <trigger_name>",
"kn trigger list",
"No triggers found."
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/eventing/triggers |
19.7.2. Useful Websites | 19.7.2. Useful Websites http://web.mit.edu/kerberos/www/ - Kerberos: The Network Authentication Protocol webpage from MIT. http://www.nrl.navy.mil/CCS/people/kenh/kerberos-faq.html - The Kerberos Frequently Asked Questions (FAQ). ftp://athena-dist.mit.edu/pub/kerberos/doc/usenix.PS - The PostScript version of Kerberos: An Authentication Service for Open Network Systems by Jennifer G. Steiner, Clifford Neuman, and Jeffrey I. Schiller. This document is the original paper describing Kerberos. http://web.mit.edu/kerberos/www/dialogue.html - Designing an Authentication System: a Dialogue in Four Scenes originally by Bill Bryant in 1988, modified by Theodore Ts'o in 1997. This document is a conversation between two developers who are thinking through the creation of a Kerberos-style authentication system. The conversational style of the discussion makes this a good starting place for people who are completely unfamiliar with Kerberos. http://www.ornl.gov/~jar/HowToKerb.html - How to Kerberize your site is a good reference for kerberizing a network. http://www.networkcomputing.com/netdesign/kerb1.html - Kerberos Network Design Manual is a thorough overview of the Kerberos system. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-kerberos-useful-websites |
Installing on IBM Z and IBM LinuxONE | Installing on IBM Z and IBM LinuxONE OpenShift Container Platform 4.14 Installing OpenShift Container Platform on IBM Z and IBM LinuxONE Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_z_and_ibm_linuxone/index |
5.2.3. Define a target profile in virt-v2v.conf | 5.2.3. Define a target profile in virt-v2v.conf Now that you are able to connect to the conversion server as root, it must be pre-configured with details about what to do with the virtual machine it creates. These details are given as a target profile in the /etc/virt-v2v.conf file on the conversion server. Define a target profile in virt-v2v.conf : As root, edit /etc/virt-v2v.conf : Scroll to the end of the file. Before the final </virt-v2v> , add the following: Where: Profile Name is an arbitrary, descriptive target profile name. Method is the destination hypervisor type (rhev or libvirt). Storage Format is the output storage format, either raw or qcow2. Allocation is the output allocation policy, either preallocated or sparse. Network type specifies the network to which a network interface should be connected when imported into Red Hat Enterprise Virtualization. The first network type entry contains details about network configuration before conversion, the second network type entry maps to an after conversion configuration. In the given example, any detected network card is to be mapped to the managed network called rhevm. Important The value associated with the <storage format> tag (in the above example "nfs.share.com:/export1") must match the value associated with the <method> tag. In this example, since the output method is "rhev", the value associated with storage must be an initialized NFS share. For the libvirt method, the storage format value must be an initialized storage domain that exists locally on the conversion server, for example "default". You have created a target profile that defines what will happen to the virtual machine that results from this P2V conversion. | [
"nano /etc/virt-v2v.conf",
"<profile name=\"myrhev\"> <method>rhev</method> <storage format=\"raw\" allocation=\"preallocated\"> nfs.share.com:/export1 </storage> <network type=\"default\"> <network type=\"network\" name=\"rhevm\"/> </network> </profile>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/v2v_guide/preparation_before_the_p2v_migration-define_a_host_profile_in_virt_v2v-conf |
Chapter 142. KafkaRebalanceSpec schema reference | Chapter 142. KafkaRebalanceSpec schema reference Used in: KafkaRebalance
Property Description
mode Mode to run the rebalancing. The supported modes are full , add-brokers , remove-brokers . If not specified, the full mode is used by default. full mode runs the rebalancing across all the brokers in the cluster. add-brokers mode can be used after scaling up the cluster to move some replicas to the newly added brokers. remove-brokers mode can be used before scaling down the cluster to move replicas out of the brokers to be removed. string (one of [remove-brokers, full, add-brokers])
brokers The list of newly added brokers in case of scaling up or the ones to be removed in case of scaling down to use for rebalancing. This list can be used only with the rebalancing modes add-brokers and remove-brokers . It is ignored with full mode. integer array
goals A list of goals, ordered by decreasing priority, to use for generating and executing the rebalance proposal. The supported goals are available at https://github.com/linkedin/cruise-control#goals . If an empty goals list is provided, the goals declared in the default.goals Cruise Control configuration parameter are used. string array
skipHardGoalCheck Whether to allow the hard goals specified in the Kafka CR to be skipped in optimization proposal generation. This can be useful when some of those hard goals are preventing a balance solution from being found. Default is false. boolean
rebalanceDisk Enables intra-broker disk balancing, which balances disk space utilization between disks on the same broker. Only applies to Kafka deployments that use JBOD storage with multiple disks. When enabled, inter-broker balancing is disabled. Default is false. boolean
excludedTopics A regular expression where any matching topics will be excluded from the calculation of optimization proposals. This expression will be parsed by the java.util.regex.Pattern class; for more information on the supported format consult the documentation for that class. string
concurrentPartitionMovementsPerBroker The upper bound of ongoing partition replica movements going into/out of each broker. Default is 5. integer
concurrentIntraBrokerPartitionMovements The upper bound of ongoing partition replica movements between disks within each broker. Default is 2. integer
concurrentLeaderMovements The upper bound of ongoing partition leadership movements. Default is 1000. integer
replicationThrottle The upper bound, in bytes per second, on the bandwidth used to move replicas. There is no limit by default. integer
replicaMovementStrategies A list of strategy class names used to determine the execution order for the replica movements in the generated optimization proposal. By default BaseReplicaMovementStrategy is used, which will execute the replica movements in the order that they were generated. string array | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaRebalanceSpec-reference |
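For reference, the fields above are set in the spec of a KafkaRebalance custom resource; a minimal sketch, assuming the kafka.strimzi.io/v1beta2 API and a Kafka cluster named my-cluster (the resource name, broker IDs, and values are hypothetical):
oc apply -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaRebalance
metadata:
  name: my-rebalance
  labels:
    strimzi.io/cluster: my-cluster    # ties the rebalance to the target Kafka cluster
spec:
  mode: remove-brokers                # move replicas off the listed brokers before scaling down
  brokers: [3, 4]
  concurrentPartitionMovementsPerBroker: 5
  replicationThrottle: 52428800       # 50 MiB/s upper bound on replica movement bandwidth
EOF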
Chapter 3. Installing a cluster quickly on GCP | Chapter 3. Installing a cluster quickly on GCP In OpenShift Container Platform version 4.18, you can install a cluster on Google Cloud Platform (GCP) that uses the default configuration options. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. 
Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 3.5. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. 
Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. 2 To view different installation details, specify warn , debug , or error instead of info . When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Provide values at the prompts: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your host, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. If you provide a name that is longer than 6 characters, only the first 6 characters will be used in the infrastructure ID that is generated from the cluster name. Paste the pull secret from Red Hat OpenShift Cluster Manager . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. 
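The first step of this procedure lists the credential locations to clear but does not show commands; a minimal sketch on a Linux host, assuming default paths and an installed gcloud CLI (adjust or skip any line that does not apply to your environment):
unset GOOGLE_CREDENTIALS GOOGLE_CLOUD_KEYFILE_JSON GCLOUD_KEYFILE_JSON   # clear credential environment variables in this shell
rm -f ~/.gcp/osServiceAccount.json                                       # remove a previously saved service account key
gcloud auth application-default revoke                                   # revoke application default credentials, if any were set
gcloud auth revoke                                                       # revoke the active gcloud account credentials, if they should not be used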
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.6. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 3.7. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 3.8. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.9. steps Customize your cluster . If necessary, you can opt out of remote health reporting . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_gcp/installing-gcp-default |
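A minimal shell sketch that ties together the verification and login steps above: it exports the kubeconfig, confirms access, and approves any pending node-bootstrapper certificate signing requests mentioned in the certificate note. The installation directory path and the CSR name are placeholders rather than values from the original output.
export KUBECONFIG=<installation_directory>/auth/kubeconfig   # same path that the installation program created
oc whoami                                                    # expect: system:admin
oc get nodes                                                 # every node should eventually report Ready
oc get csr                                                   # look for requests in Pending state
oc adm certificate approve <csr_name>                        # approve a pending node-bootstrapper CSR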
9.2.3. Examples of Three-Node and Two-Node Configurations | 9.2.3. Examples of Three-Node and Two-Node Configurations Refer to the examples that follow for comparison between a three-node and a two-node configuration. Example 9.1. Three-node Cluster Configuration Example 9.2. Two-node Cluster Configuration | [
"<cluster name=\"mycluster\" config_version=\"3\"> <cman/> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"2\"/> </method> </fence> </clusternode> <clusternode name=\"node-03.example.com\" nodeid=\"3\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"3\"/> </method> </fence> </clusternode> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"apc\" passwd=\"password_example\"/> </fencedevices> <rm> <failoverdomains> <failoverdomain name=\"example_pri\" nofailback=\"0\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"node-01.example.com\" priority=\"1\"/> <failoverdomainnode name=\"node-02.example.com\" priority=\"2\"/> <failoverdomainnode name=\"node-03.example.com\" priority=\"3\"/> </failoverdomain> </failoverdomains> <resources> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </fs> </ip> </resources> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm> </cluster>",
"<cluster name=\"mycluster\" config_version=\"3\"> <cman two_node=\"1\" expected_votes=\"1\"/> <clusternodes> <clusternode name=\"node-01.example.com\" nodeid=\"1\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"1\"/> </method> </fence> </clusternode> <clusternode name=\"node-02.example.com\" nodeid=\"2\"> <fence> <method name=\"APC\"> <device name=\"apc\" port=\"2\"/> </method> </fence> </clusternodes> <fencedevices> <fencedevice agent=\"fence_apc\" ipaddr=\"apc_ip_example\" login=\"login_example\" name=\"apc\" passwd=\"password_example\"/> </fencedevices> <rm> <failoverdomains> <failoverdomain name=\"example_pri\" nofailback=\"0\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"node-01.example.com\" priority=\"1\"/> <failoverdomainnode name=\"node-02.example.com\" priority=\"2\"/> </failoverdomain> </failoverdomains> <resources> <ip address=\"127.143.131.100\" monitor_link=\"yes\" sleeptime=\"10\"> <fs name=\"web_fs\" device=\"/dev/sdd2\" mountpoint=\"/var/www\" fstype=\"ext3\"> <apache config_file=\"conf/httpd.conf\" name=\"example_server\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </fs> </ip> </resources> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache\" recovery=\"relocate\"> <fs ref=\"web_fs\"/> <ip ref=\"127.143.131.100\"/> <apache ref=\"example_server\"/> </service> <service autostart=\"0\" domain=\"example_pri\" exclusive=\"0\" name=\"example_apache2\" recovery=\"relocate\"> <fs name=\"web_fs2\" device=\"/dev/sdd3\" mountpoint=\"/var/www\" fstype=\"ext3\"/> <ip address=\"127.143.131.101\" monitor_link=\"yes\" sleeptime=\"10\"/> <apache config_file=\"conf/httpd.conf\" name=\"example_server2\" server_root=\"/etc/httpd\" shutdown_wait=\"0\"/> </service> </rm> </cluster>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-clusterconf-two-three-node-examples-CA |
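Before distributing an edited /etc/cluster/cluster.conf such as the examples above, it is worth validating the file; a well-formedness check would also flag that the two-node example is missing a closing </clusternode> tag for node-02. This sketch assumes the RHEL 6 High Availability tools are installed: xmllint only checks XML syntax, while ccs_config_validate checks the file against the cluster schema.
xmllint --noout /etc/cluster/cluster.conf   # confirm the file is well-formed XML
ccs_config_validate                         # validate against the cluster configuration schema
cman_tool version -r                        # propagate the updated configuration after incrementing config_version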
Providing feedback on Red Hat documentation | Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug . | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_on_any_platform/providing-feedback-on-red-hat-documentation_rhodf |
Chapter 3. Understanding persistent storage | Chapter 3. Understanding persistent storage 3.1. Persistent storage overview Managing storage is a distinct problem from managing compute resources. OpenShift Container Platform uses the Kubernetes persistent volume (PV) framework to allow cluster administrators to provision persistent storage for a cluster. Developers can use persistent volume claims (PVCs) to request PV resources without having specific knowledge of the underlying storage infrastructure. PVCs are specific to a project, and are created and used by developers as a means to use a PV. PV resources on their own are not scoped to any single project; they can be shared across the entire OpenShift Container Platform cluster and claimed from any project. After a PV is bound to a PVC, that PV can not then be bound to additional PVCs. This has the effect of scoping a bound PV to a single namespace, that of the binding project. PVs are defined by a PersistentVolume API object, which represents a piece of existing storage in the cluster that was either statically provisioned by the cluster administrator or dynamically provisioned using a StorageClass object. It is a resource in the cluster just like a node is a cluster resource. PVs are volume plugins like Volumes but have a lifecycle that is independent of any individual pod that uses the PV. PV objects capture the details of the implementation of the storage, be that NFS, iSCSI, or a cloud-provider-specific storage system. Important High availability of storage in the infrastructure is left to the underlying storage provider. PVCs are defined by a PersistentVolumeClaim API object, which represents a request for storage by a developer. It is similar to a pod in that pods consume node resources and PVCs consume PV resources. For example, pods can request specific levels of resources, such as CPU and memory, while PVCs can request specific storage capacity and access modes. For example, they can be mounted once read-write or many times read-only. 3.2. Lifecycle of a volume and claim PVs are resources in the cluster. PVCs are requests for those resources and also act as claim checks to the resource. The interaction between PVs and PVCs have the following lifecycle. 3.2.1. Provision storage In response to requests from a developer defined in a PVC, a cluster administrator configures one or more dynamic provisioners that provision storage and a matching PV. Alternatively, a cluster administrator can create a number of PVs in advance that carry the details of the real storage that is available for use. PVs exist in the API and are available for use. 3.2.2. Bind claims When you create a PVC, you request a specific amount of storage, specify the required access mode, and create a storage class to describe and classify the storage. The control loop in the master watches for new PVCs and binds the new PVC to an appropriate PV. If an appropriate PV does not exist, a provisioner for the storage class creates one. The size of all PVs might exceed your PVC size. This is especially true with manually provisioned PVs. To minimize the excess, OpenShift Container Platform binds to the smallest PV that matches all other criteria. Claims remain unbound indefinitely if a matching volume does not exist or can not be created with any available provisioner servicing a storage class. Claims are bound as matching volumes become available. For example, a cluster with many manually provisioned 50Gi volumes would not match a PVC requesting 100Gi. 
The PVC can be bound when a 100Gi PV is added to the cluster. 3.2.3. Use pods and claimed PVs Pods use claims as volumes. The cluster inspects the claim to find the bound volume and mounts that volume for a pod. For those volumes that support multiple access modes, you must specify which mode applies when you use the claim as a volume in a pod. Once you have a claim and that claim is bound, the bound PV belongs to you for as long as you need it. You can schedule pods and access claimed PVs by including persistentVolumeClaim in the pod's volumes block. Note If you attach persistent volumes that have high file counts to pods, those pods can fail or can take a long time to start. For more information, see When using Persistent Volumes with high file counts in OpenShift, why do pods fail to start or take an excessive amount of time to achieve "Ready" state? . 3.2.4. Storage Object in Use Protection The Storage Object in Use Protection feature ensures that PVCs in active use by a pod and PVs that are bound to PVCs are not removed from the system, as this can result in data loss. Storage Object in Use Protection is enabled by default. Note A PVC is in active use by a pod when a Pod object exists that uses the PVC. If a user deletes a PVC that is in active use by a pod, the PVC is not removed immediately. PVC removal is postponed until the PVC is no longer actively used by any pods. Also, if a cluster admin deletes a PV that is bound to a PVC, the PV is not removed immediately. PV removal is postponed until the PV is no longer bound to a PVC. 3.2.5. Release a persistent volume When you are finished with a volume, you can delete the PVC object from the API, which allows reclamation of the resource. The volume is considered released when the claim is deleted, but it is not yet available for another claim. The claimant's data remains on the volume and must be handled according to policy. 3.2.6. Reclaim policy for persistent volumes The reclaim policy of a persistent volume tells the cluster what to do with the volume after it is released. A volume's reclaim policy can be Retain , Recycle , or Delete . Retain reclaim policy allows manual reclamation of the resource for those volume plugins that support it. Recycle reclaim policy recycles the volume back into the pool of unbound persistent volumes once it is released from its claim. Important The Recycle reclaim policy is deprecated in OpenShift Container Platform 4. Dynamic provisioning is recommended for equivalent and better functionality. Delete reclaim policy deletes both the PersistentVolume object from OpenShift Container Platform and the associated storage asset in external infrastructure, such as AWS EBS or VMware vSphere. Note Dynamically provisioned volumes are always deleted. 3.2.7. Reclaiming a persistent volume manually When a persistent volume claim (PVC) is deleted, the persistent volume (PV) still exists and is considered "released". However, the PV is not yet available for another claim because the data of the claimant remains on the volume. Procedure To manually reclaim the PV as a cluster administrator: Delete the PV. USD oc delete pv <pv-name> The associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume, still exists after the PV is deleted. Clean up the data on the associated storage asset. Delete the associated storage asset. Alternately, to reuse the same storage asset, create a new PV with the storage asset definition. 
The reclaimed PV is now available for use by another PVC. 3.2.8. Changing the reclaim policy of a persistent volume To change the reclaim policy of a persistent volume: List the persistent volumes in your cluster: USD oc get pv Example output Choose one of your persistent volumes and change its reclaim policy: USD oc patch pv <your-pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}' Verify that your chosen persistent volume has the right policy: USD oc get pv Example output In the preceding output, the volume bound to claim default/claim3 now has a Retain reclaim policy. The volume will not be automatically deleted when a user deletes claim default/claim3 . 3.3. Persistent volumes Each PV contains a spec and status , which is the specification and status of the volume, for example: PersistentVolume object definition example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 ... status: ... 1 Name of the persistent volume. 2 The amount of storage available to the volume. 3 The access mode, defining the read-write and mount permissions. 4 The reclaim policy, indicating how the resource should be handled once it is released. 3.3.1. Types of PVs OpenShift Container Platform supports the following persistent volume plugins: AliCloud Disk AWS Elastic Block Store (EBS) AWS Elastic File Store (EFS) Azure Disk Azure File Cinder Fibre Channel GCE Persistent Disk IBM VPC Block HostPath iSCSI Local volume NFS OpenStack Manila Red Hat OpenShift Data Foundation VMware vSphere 3.3.2. Capacity Generally, a persistent volume (PV) has a specific storage capacity. This is set by using the capacity attribute of the PV. Currently, storage capacity is the only resource that can be set or requested. Future attributes may include IOPS, throughput, and so on. 3.3.3. Access modes A persistent volume can be mounted on a host in any way supported by the resource provider. Providers have different capabilities and each PV's access modes are set to the specific modes supported by that particular volume. For example, NFS can support multiple read-write clients, but a specific NFS PV might be exported on the server as read-only. Each PV gets its own set of access modes describing that specific PV's capabilities. Claims are matched to volumes with similar access modes. The only two matching criteria are access modes and size. A claim's access modes represent a request. Therefore, you might be granted more, but never less. For example, if a claim requests RWO, but the only volume available is an NFS PV (RWO+ROX+RWX), the claim would then match NFS because it supports RWO. Direct matches are always attempted first. The volume's modes must match or contain more modes than you requested. The size must be greater than or equal to what is expected. If two types of volumes, such as NFS and iSCSI, have the same set of access modes, either of them can match a claim with those modes. There is no ordering between types of volumes and no way to choose one type over another. All volumes with the same modes are grouped, and then sorted by size, smallest to largest. The binder gets the group with matching modes and iterates over each, in size order, until one size matches. The following table lists the access modes: Table 3.1. Access modes Access Mode CLI abbreviation Description ReadWriteOnce RWO The volume can be mounted as read-write by a single node. 
ReadOnlyMany ROX The volume can be mounted as read-only by many nodes. ReadWriteMany RWX The volume can be mounted as read-write by many nodes. Important Volume access modes are descriptors of volume capabilities. They are not enforced constraints. The storage provider is responsible for runtime errors resulting from invalid use of the resource. For example, NFS offers ReadWriteOnce access mode. You must mark the claims as read-only if you want to use the volume's ROX capability. Errors in the provider show up at runtime as mount errors. iSCSI and Fibre Channel volumes do not currently have any fencing mechanisms. You must ensure the volumes are only used by one node at a time. In certain situations, such as draining a node, the volumes can be used simultaneously by two nodes. Before draining the node, first ensure the pods that use these volumes are deleted. Table 3.2. Supported access modes for PVs Volume plugin ReadWriteOnce [1] ReadOnlyMany ReadWriteMany AliCloud Disk ✅ - - AWS EBS [2] ✅ - - AWS EFS ✅ ✅ ✅ Azure File ✅ ✅ ✅ Azure Disk ✅ - - Cinder ✅ - - Fibre Channel ✅ ✅ - GCE Persistent Disk ✅ - - HostPath ✅ - - IBM VPC Disk ✅ - - iSCSI ✅ ✅ - Local volume ✅ - - NFS ✅ ✅ ✅ OpenStack Manila - - ✅ Red Hat OpenShift Data Foundation ✅ - ✅ VMware vSphere ✅ - ✅ [3] ReadWriteOnce (RWO) volumes cannot be mounted on multiple nodes. If a node fails, the system does not allow the attached RWO volume to be mounted on a new node because it is already assigned to the failed node. If you encounter a multi-attach error message as a result, force delete the pod on a shutdown or crashed node to avoid data loss in critical workloads, such as when dynamic persistent volumes are attached. Use a recreate deployment strategy for pods that rely on AWS EBS. If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If you do not have vSAN file service configured, and you request RWX, the volume fails to get created and an error is logged. For more information, see "Using Container Storage Interface" "VMware vSphere CSI Driver Operator". 3.3.4. Phase Volumes can be found in one of the following phases: Table 3.3. Volume phases Phase Description Available A free resource not yet bound to a claim. Bound The volume is bound to a claim. Released The claim was deleted, but the resource is not yet reclaimed by the cluster. Failed The volume has failed its automatic reclamation. You can view the name of the PVC bound to the PV by running: USD oc get pv <pv-claim> 3.3.4.1. Mount options You can specify mount options while mounting a PV by using the attribute mountOptions . For example: Mount options example apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default 1 Specified mount options are used while mounting the PV to the disk. The following PV types support mount options: AWS Elastic Block Store (EBS) Azure Disk Azure File Cinder GCE Persistent Disk iSCSI Local volume NFS Red Hat OpenShift Data Foundation (Ceph RBD only) VMware vSphere Note Fibre Channel and HostPath PVs do not support mount options. Additional resources ReadWriteMany vSphere volume support 3.4. 
Persistent volume claims Each PersistentVolumeClaim object contains a spec and status , which is the specification and status of the persistent volume claim (PVC), for example: PersistentVolumeClaim object definition example kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status: ... 1 Name of the PVC 2 The access mode, defining the read-write and mount permissions 3 The amount of storage available to the PVC 4 Name of the StorageClass required by the claim 3.4.1. Storage classes Claims can optionally request a specific storage class by specifying the storage class's name in the storageClassName attribute. Only PVs of the requested class, ones with the same storageClassName as the PVC, can be bound to the PVC. The cluster administrator can configure dynamic provisioners to service one or more storage classes. The cluster administrator can create a PV on demand that matches the specifications in the PVC. Important The Cluster Storage Operator might install a default storage class depending on the platform in use. This storage class is owned and controlled by the operator. It cannot be deleted or modified beyond defining annotations and labels. If different behavior is desired, you must define a custom storage class. The cluster administrator can also set a default storage class for all PVCs. When a default storage class is configured, the PVC must explicitly ask for StorageClass or storageClassName annotations set to "" to be bound to a PV without a storage class. Note If more than one storage class is marked as default, a PVC can only be created if the storageClassName is explicitly specified. Therefore, only one storage class should be set as the default. 3.4.2. Access modes Claims use the same conventions as volumes when requesting storage with specific access modes. 3.4.3. Resources Claims, such as pods, can request specific quantities of a resource. In this case, the request is for storage. The same resource model applies to volumes and claims. 3.4.4. Claims as volumes Pods access storage by using the claim as a volume. Claims must exist in the same namespace as the pod using the claim. The cluster finds the claim in the pod's namespace and uses it to get the PersistentVolume backing the claim. The volume is mounted to the host and into the pod, for example: Mount volume to the host and into the pod example kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: "/var/www/html" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3 1 Path to mount the volume inside the pod. 2 Name of the volume to mount. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . 3 Name of the PVC, that exists in the same namespace, to use. 3.5. Block volume support OpenShift Container Platform can statically provision raw block volumes. These volumes do not have a file system, and can provide performance benefits for applications that either write to the disk directly or implement their own storage service. Raw block volumes are provisioned by specifying volumeMode: Block in the PV and PVC specification. 
Important Pods using raw block volumes must be configured to allow privileged containers. The following table displays which volume plugins support block volumes. Table 3.4. Block volume support Volume Plugin Manually provisioned Dynamically provisioned Fully supported AliCloud Disk ✅ ✅ ✅ AWS EBS ✅ ✅ ✅ AWS EFS Azure Disk ✅ ✅ ✅ Azure File Cinder ✅ ✅ ✅ Fibre Channel ✅ ✅ GCP ✅ ✅ ✅ HostPath IBM VPC Disk ✅ ✅ ✅ iSCSI ✅ ✅ Local volume ✅ ✅ NFS Red Hat OpenShift Data Foundation ✅ ✅ ✅ VMware vSphere ✅ ✅ ✅ Important Using any of the block volumes that can be provisioned manually, but are not provided as fully supported, is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 3.5.1. Block volume examples PV example apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: ["50060e801049cfd1"] lun: 0 readOnly: false 1 volumeMode must be set to Block to indicate that this PV is a raw block volume. PVC example apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi 1 volumeMode must be set to Block to indicate that a raw block PVC is requested. Pod specification example apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: ["/bin/sh", "-c"] args: [ "tail -f /dev/null" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3 1 volumeDevices , instead of volumeMounts , is used for block devices. Only PersistentVolumeClaim sources can be used with raw block volumes. 2 devicePath , instead of mountPath , represents the path to the physical device where the raw block is mapped to the system. 3 The volume source must be of type persistentVolumeClaim and must match the name of the PVC as expected. Table 3.5. Accepted values for volumeMode Value Default Filesystem Yes Block No Table 3.6. Binding scenarios for block volumes PV volumeMode PVC volumeMode Binding result Filesystem Filesystem Bind Unspecified Unspecified Bind Filesystem Unspecified Bind Unspecified Filesystem Bind Block Block Bind Unspecified Block No Bind Block Unspecified No Bind Filesystem Block No Bind Block Filesystem No Bind Important Unspecified values result in the default value of Filesystem . 3.6. Using fsGroup to reduce pod timeouts If a storage volume contains many files (~1,000,000 or greater), you may experience pod timeouts. This can occur because, by default, OpenShift Container Platform recursively changes ownership and permissions for the contents of each volume to match the fsGroup specified in a pod's securityContext when that volume is mounted. For large volumes, checking and changing ownership and permissions can be time consuming, slowing pod startup. 
You can use the fsGroupChangePolicy field inside a securityContext to control the way that OpenShift Container Platform checks and manages ownership and permissions for a volume. fsGroupChangePolicy defines behavior for changing ownership and permission of the volume before being exposed inside a pod. This field only applies to volume types that support fsGroup -controlled ownership and permissions. This field has two possible values: OnRootMismatch : Only change permissions and ownership if the permissions and ownership of the root directory do not match the expected permissions of the volume. This can help shorten the time it takes to change ownership and permission of a volume to reduce pod timeouts. Always : Always change permission and ownership of the volume when a volume is mounted. fsGroupChangePolicy example securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: "OnRootMismatch" 1 ... 1 OnRootMismatch specifies skipping recursive permission change, thus helping to avoid pod timeout problems. Note The fsGroupChangePolicy field has no effect on ephemeral volume types, such as secret, configMap, and emptyDir. | [
"oc delete pv <pv-name>",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim3 manual 3s",
"oc patch pv <your-pv-name> -p '{\"spec\":{\"persistentVolumeReclaimPolicy\":\"Retain\"}}'",
"oc get pv",
"NAME CAPACITY ACCESSMODES RECLAIMPOLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b6efd8da-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim1 manual 10s pvc-b95650f8-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Delete Bound default/claim2 manual 6s pvc-bb3ca71d-b7b5-11e6-9d58-0ed433a7dd94 4Gi RWO Retain Bound default/claim3 manual 3s",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 1 spec: capacity: storage: 5Gi 2 accessModes: - ReadWriteOnce 3 persistentVolumeReclaimPolicy: Retain 4 status:",
"oc get pv <pv-claim>",
"apiVersion: v1 kind: PersistentVolume metadata: name: pv0001 spec: capacity: storage: 1Gi accessModes: - ReadWriteOnce mountOptions: 1 - nfsvers=4.1 nfs: path: /tmp server: 172.17.0.2 persistentVolumeReclaimPolicy: Retain claimRef: name: claim1 namespace: default",
"kind: PersistentVolumeClaim apiVersion: v1 metadata: name: myclaim 1 spec: accessModes: - ReadWriteOnce 2 resources: requests: storage: 8Gi 3 storageClassName: gold 4 status:",
"kind: Pod apiVersion: v1 metadata: name: mypod spec: containers: - name: myfrontend image: dockerfile/nginx volumeMounts: - mountPath: \"/var/www/html\" 1 name: mypd 2 volumes: - name: mypd persistentVolumeClaim: claimName: myclaim 3",
"apiVersion: v1 kind: PersistentVolume metadata: name: block-pv spec: capacity: storage: 10Gi accessModes: - ReadWriteOnce volumeMode: Block 1 persistentVolumeReclaimPolicy: Retain fc: targetWWNs: [\"50060e801049cfd1\"] lun: 0 readOnly: false",
"apiVersion: v1 kind: PersistentVolumeClaim metadata: name: block-pvc spec: accessModes: - ReadWriteOnce volumeMode: Block 1 resources: requests: storage: 10Gi",
"apiVersion: v1 kind: Pod metadata: name: pod-with-block-volume spec: containers: - name: fc-container image: fedora:26 command: [\"/bin/sh\", \"-c\"] args: [ \"tail -f /dev/null\" ] volumeDevices: 1 - name: data devicePath: /dev/xvda 2 volumes: - name: data persistentVolumeClaim: claimName: block-pvc 3",
"securityContext: runAsUser: 1000 runAsGroup: 3000 fsGroup: 2000 fsGroupChangePolicy: \"OnRootMismatch\" 1"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/storage/understanding-persistent-storage |
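As a compact illustration of the claim lifecycle described above, the following shell sketch creates a claim and watches it bind; the claim name is arbitrary and no storageClassName is set, so binding relies on whichever default storage class the cluster provides.
cat <<'EOF' | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF
oc get pvc myclaim   # STATUS moves from Pending to Bound when a matching PV exists
oc get pv            # the bound PV lists its claim and reclaim policy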
Chapter 1. OpenShift image registry overview | Chapter 1. OpenShift image registry overview OpenShift Container Platform can build images from your source code, deploy them, and manage their lifecycle. It provides an internal, integrated container image registry that can be deployed in your OpenShift Container Platform environment to locally manage images. This overview contains reference information and links for registries commonly used with OpenShift Container Platform, with a focus on the OpenShift image registry. 1.1. Glossary of common terms for OpenShift image registry This glossary defines the common terms that are used in the registry content. container Lightweight and executable images that consist of software and all of its dependencies. Because containers virtualize the operating system, you can run containers in a data center, a public or private cloud, or your local host. Image Registry Operator The Image Registry Operator runs in the openshift-image-registry namespace, and manages the registry instance in that location. image repository An image repository is a collection of related container images and tags identifying images. mirror registry The mirror registry is a registry that holds the mirror of OpenShift Container Platform images. namespace A namespace isolates groups of resources within a single cluster. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers that run on a worker node. private registry A registry is a server that implements the container image registry API. A private registry is a registry that requires authentication to allow users to access its contents. public registry A registry is a server that implements the container image registry API. A public registry is a registry that serves its content publicly. Quay.io A public Red Hat Quay Container Registry instance provided and maintained by Red Hat, which serves most of the container images and Operators to OpenShift Container Platform clusters. OpenShift image registry OpenShift image registry is the registry provided by OpenShift Container Platform to manage images. registry authentication To push and pull images to and from private image repositories, the registry needs to authenticate its users with credentials. route Exposes a service to allow for network access to pods from users and applications outside the OpenShift Container Platform instance. scale down To decrease the number of replicas. scale up To increase the number of replicas. service A service exposes a running application on a set of pods. 1.2. Integrated OpenShift image registry OpenShift Container Platform provides a built-in container image registry that runs as a standard workload on the cluster. The registry is configured and managed by an infrastructure Operator. It provides an out-of-the-box solution for users to manage the images that run their workloads, and runs on top of the existing cluster infrastructure. This registry can be scaled up or down like any other cluster workload and does not require specific infrastructure provisioning. In addition, it is integrated into the cluster user authentication and authorization system, which means that access to create and retrieve images is controlled by defining user permissions on the image resources. The registry is typically used as a publication target for images built on the cluster, as well as being a source of images for workloads running on the cluster.
When a new image is pushed to the registry, the cluster is notified of the new image and other components can react to and consume the updated image. Image data is stored in two locations. The actual image data is stored in a configurable storage location, such as cloud storage or a filesystem volume. The image metadata, which is exposed by the standard cluster APIs and is used to perform access control, is stored as standard API resources, specifically images and imagestreams. Additional resources Image Registry Operator in OpenShift Container Platform 1.3. Third-party registries OpenShift Container Platform can create containers using images from third-party registries, but it is unlikely that these registries offer the same image notification support as the integrated OpenShift image registry. In this situation, OpenShift Container Platform will fetch tags from the remote registry upon imagestream creation. To refresh the fetched tags, run oc import-image <stream> . When new images are detected, the previously described build and deployment reactions occur. 1.3.1. Authentication OpenShift Container Platform can communicate with registries to access private image repositories using credentials supplied by the user. This allows OpenShift Container Platform to push and pull images to and from private repositories. 1.3.1.1. Registry authentication with Podman Some container image registries require access authorization. Podman is an open source tool for managing containers and container images and interacting with image registries. You can use Podman to authenticate your credentials, pull the registry image, and store local images in a local file system. The following is a generic example of authenticating the registry with Podman. Procedure Use the Red Hat Ecosystem Catalog to search for specific container images from the Red Hat Repository and select the required image. Click Get this image to find the command for your container image. Log in by running the following command and entering your username and password to authenticate: USD podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password> Download the image and save it locally by running the following command: USD podman pull registry.redhat.io/<repository_name> 1.4. Red Hat Quay registries If you need an enterprise-quality container image registry, Red Hat Quay is available both as a hosted service and as software you can install in your own data center or cloud environment. Advanced features in Red Hat Quay include geo-replication, image scanning, and the ability to roll back images. Visit the Quay.io site to set up your own hosted Quay registry account. After that, follow the Quay Tutorial to log in to the Quay registry and start managing your images. You can access your Red Hat Quay registry from OpenShift Container Platform like any remote container image registry. Additional resources Red Hat Quay product documentation 1.5. Authentication enabled Red Hat registry All container images available through the Container images section of the Red Hat Ecosystem Catalog are hosted on an image registry, registry.redhat.io . The registry, registry.redhat.io , requires authentication for access to images and hosted content on OpenShift Container Platform. Following the move to the new registry, the existing registry will be available for a period of time. Note OpenShift Container Platform pulls images from registry.redhat.io , so you must configure your cluster to use it. 
The new registry uses standard OAuth mechanisms for authentication, with the following methods: Authentication token. Tokens, which are generated by administrators, are service accounts that give systems the ability to authenticate against the container image registry. Service accounts are not affected by changes in user accounts, so the token authentication method is reliable and resilient. This is the only supported authentication option for production clusters. Web username and password. This is the standard set of credentials you use to log in to resources such as access.redhat.com . While it is possible to use this authentication method with OpenShift Container Platform, it is not supported for production deployments. Restrict this authentication method to stand-alone projects outside OpenShift Container Platform. You can use podman login with your credentials, either username and password or authentication token, to access content on the new registry. All imagestreams point to the new registry, which uses the installation pull secret to authenticate. You must place your credentials in either of the following places: openshift namespace . Your credentials must exist in the openshift namespace so that the imagestreams in the openshift namespace can import. Your host . Your credentials must exist on your host because Kubernetes uses the credentials from your host when it goes to pull images. Additional resources Registry service accounts | [
"podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>",
"podman pull registry.redhat.io/<repository_name>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/registry/registry-overview-1 |
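Building on the Podman authentication steps above, a hedged sketch of pulling an image and refreshing an image stream from a remote registry; the image stream name, project, and repository shown here are illustrative placeholders rather than values from the original text.
podman login registry.redhat.io                 # authenticate with your Customer Portal user name and password or a service-account token
podman pull registry.redhat.io/ubi9/ubi         # example repository; substitute the image you need
oc import-image mystream --from=registry.redhat.io/ubi9/ubi --confirm -n myproject   # create or refresh the tags of an image stream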
8.65. gdb | 8.65. gdb 8.65.1. RHBA-2014:1534 - gdb bug fix and enhancement update Updated gdb packages that fix several bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The gdb packages provide the GNU Debugger (GDB) to debug programs written in C, C++, Java, and other languages by executing them in a controlled fashion and then printing out their data. Bug Fixes BZ# 1104587 Previously, when the users tried to debug certain core dump files generated from multi-threaded applications, GDB was unable to handle correctly specific situations, for example, when a referenced DWARF Compilation Unit was aged out. As a consequence, performing the "thread apply all bt" command to display a backtrace of all threads could cause GDB to terminate unexpectedly. A patch has been provided to fix this bug, and GDB no longer crashes in this scenario. BZ# 913146 Previously, when executing the signal handling code, GDB was calling certain non-reentrant functions, such as the calloc() function. This could sometimes result in a deadlock situation. To avoid deadlocks in this scenario, the relevant GDB code has been modified to handle non-reentrant functions correctly. BZ# 1007614 Previously, due to a bug in a specific function in the support for Python, if a Python script read a memory region from the program that was being debugged, and the reference to the memory region became out of scope, GDB did not deallocate the memory. As a consequence, this led to a memory leak, which was particularly significant in memory-intensive scenarios. A patch has been applied, and GDB now frees the acquired memory correctly. BZ# 903734 Prior to this update, GDB did not add the necessary offsets when dealing with bit fields inside nested instances of the struct data type. Consequently, when the user tried to set the value of a bit field that was declared inside such a data structure, GDB was unable to calculate it correctly. With this update, GDB calculates the values of bit fields inside nested data structures correctly. BZ# 1080656 Previously, GDB was unable to correctly access Thread Local Storage (TLS) data on statically linked binaries. Consequently, the user could not inspect TLS data on the program being debugged if the program was linked statically. This bug has been fixed, and users can now inspect TLS data on statically linked binaries as expected. BZ# 981154 Prior to this update, GDB incorrectly handled symbolic links related to build-id files. As a consequence, when the user tried to debug core dump files generated from programs that were not installed on the system, GDB printed misleading error messages instructing the user to run incorrect commands to install the binary files. Subsequently, the suggested commands did not fully work and the program package was not correctly installed. This bug has been fixed, and GDB now issues a message containing correct commands to install the necessary binary files. In addition, this update adds the following Enhancement BZ# 971849 This update adds the "USD_exitsignal" internal variable to GDB. Now, when debugging a core dump file of a program that was killed by a signal, "USD_exitsignal" provides the signal number to the user. Users of gdb are advised to upgrade to these updated packages, which fix these bugs and add this enhancement. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/gdb |
Preface | Preface When you select Tekton as your CI provider while creating an application, you must add webhooks to your source code repository, for example, GitLab or Bitbucket. These webhooks automatically trigger pipeline runs in RHDH when code is updated. This integration ensures that your pipeline is always in sync with your code changes. | null | https://docs.redhat.com/en/documentation/red_hat_trusted_application_pipeline/1.4/html/configuring_tekton_webhooks_in_gitlab_and_bitbucket/pr01 |
6.3. Recovering from LVM Mirror Failure | 6.3. Recovering from LVM Mirror Failure This section provides an example of recovering from a situation where one leg of an LVM mirrored volume fails because the underlying device for a physical volume goes down. When a mirror leg fails, LVM converts the mirrored volume into a linear volume, which continues to operate as before but without the mirrored redundancy. At that point, you can add a new disk device to the system to use as a replacement physical device and rebuild the mirror. The following command creates the physical volumes which will be used for the mirror. The following commands create the volume group vg and the mirrored volume groupfs . You can use the lvs command to verify the layout of the mirrored volume and the underlying devices for the mirror leg and the mirror log. Note that in the first example the mirror is not yet completely synced; you should wait until the Copy% field displays 100.00 before continuing. In this example, the primary leg of the mirror /dev/sda1 fails. Any write activity to the mirrored volume causes LVM to detect the failed mirror. When this occurs, LVM converts the mirror into a single linear volume. In this case, to trigger the conversion, we execute a dd command. You can use the lvs command to verify that the device is now a linear device. Because of the failed disk, I/O errors occur. At this point you should still be able to use the logical volume, but there will be no mirror redundancy. To rebuild the mirrored volume, you replace the broken drive and recreate the physical volume. If you use the same disk rather than replacing it with a new one, you will see "inconsistent" warnings when you run the pvcreate command. Next, you extend the original volume group with the new physical volume. Convert the linear volume back to its original mirrored state. You can use the lvs command to verify that the mirror is restored. | [
"pvcreate /dev/sd[abcdefgh][12] Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sda2\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdb2\" successfully created Physical volume \"/dev/sdc1\" successfully created Physical volume \"/dev/sdc2\" successfully created Physical volume \"/dev/sdd1\" successfully created Physical volume \"/dev/sdd2\" successfully created Physical volume \"/dev/sde1\" successfully created Physical volume \"/dev/sde2\" successfully created Physical volume \"/dev/sdf1\" successfully created Physical volume \"/dev/sdf2\" successfully created Physical volume \"/dev/sdg1\" successfully created Physical volume \"/dev/sdg2\" successfully created Physical volume \"/dev/sdh1\" successfully created Physical volume \"/dev/sdh2\" successfully created",
"vgcreate vg /dev/sd[abcdefgh][12] Volume group \"vg\" successfully created lvcreate -L 750M -n groupfs -m 1 vg /dev/sda1 /dev/sdb1 /dev/sdc1 Rounding up size to full physical extent 752.00 MB Logical volume \"groupfs\" created",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 21.28 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mlog] vg lwi-ao 4.00M /dev/sdc1(0) lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 100.00 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mlog] vg lwi-ao 4.00M i /dev/sdc1(0)",
"dd if=/dev/zero of=/dev/vg/groupfs count=10 10+0 records in 10+0 records out",
"lvs -a -o +devices /dev/sda1: read failed after 0 of 2048 at 0: Input/output error /dev/sda2: read failed after 0 of 2048 at 0: Input/output error LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg -wi-a- 752.00M /dev/sdb1(0)",
"pvcreate /dev/sda[12] Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sda2\" successfully created pvscan PV /dev/sdb1 VG vg lvm2 [67.83 GB / 67.10 GB free] PV /dev/sdb2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sda1 lvm2 [603.94 GB] PV /dev/sda2 lvm2 [603.94 GB] Total: 16 [2.11 TB] / in use: 14 [949.65 GB] / in no VG: 2 [1.18 TB]",
"vgextend vg /dev/sda[12] Volume group \"vg\" successfully extended pvscan PV /dev/sdb1 VG vg lvm2 [67.83 GB / 67.10 GB free] PV /dev/sdb2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdc2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdd2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sde2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdf2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdg2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh1 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sdh2 VG vg lvm2 [67.83 GB / 67.83 GB free] PV /dev/sda1 VG vg lvm2 [603.93 GB / 603.93 GB free] PV /dev/sda2 VG vg lvm2 [603.93 GB / 603.93 GB free] Total: 16 [2.11 TB] / in use: 16 [2.11 TB] / in no VG: 0 [0 ]",
"lvconvert -m 1 /dev/vg/groupfs /dev/sda1 /dev/sdb1 /dev/sdc1 Logical volume mirror converted.",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices groupfs vg mwi-a- 752.00M groupfs_mlog 68.62 groupfs_mimage_0(0),groupfs_mimage_1(0) [groupfs_mimage_0] vg iwi-ao 752.00M /dev/sdb1(0) [groupfs_mimage_1] vg iwi-ao 752.00M /dev/sda1(0) [groupfs_mlog] vg lwi-ao 4.00M /dev/sdc1(0)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mirrorrecover |
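Condensed from the transcript above, the replacement flow once the failed drive has been swapped; the device, volume group, and logical volume names match the example and should be adjusted to your layout.
pvcreate /dev/sda1 /dev/sda2                                   # re-initialize the replacement disk
vgextend vg /dev/sda1 /dev/sda2                                # return the new physical volumes to the volume group
lvconvert -m 1 /dev/vg/groupfs /dev/sda1 /dev/sdb1 /dev/sdc1   # rebuild the mirror from the linear volume
lvs -a -o +devices                                             # wait for the Copy% field to reach 100.00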
Appendix A. Comparison between Ceph Ansible and Cephadm | Appendix A. Comparison between Ceph Ansible and Cephadm The Red Hat Ceph Storage 5 introduces a new deployment tool, Cephadm, for the containerized deployment of the storage cluster. The tables compare Cephadm with Ceph-Ansible playbooks for managing the containerized deployment of a Ceph cluster for day one and day two operations. Table A.1. Day one operations Description Ceph-Ansible Cephadm Installation of the Red Hat Ceph Storage cluster Run the site-container.yml playbook. Run cephadm bootstrap command to bootstrap the cluster on the admin node. Addition of hosts Use the Ceph Ansible inventory. Run ceph orch host add HOST_NAME to add hosts to the cluster. Addition of monitors Run the add-mon.yml playbook. Run the ceph orch apply mon command. Addition of managers Run the site-container.yml playbook. Run the ceph orch apply mgr command. Addition of OSDs Run the add-osd.yml playbook. Run the ceph orch apply osd command to add OSDs on all available devices or on specific hosts. Addition of OSDs on specific devices Select the devices in the osd.yml file and then run the add-osd.yml playbook. Select the paths filter under the data_devices in the osd.yml file and then run ceph orch apply -i FILE_NAME .yml command. Addition of MDS Run the site-container.yml playbook. Run the ceph orch apply FILESYSTEM_NAME command to add MDS. Addition of Ceph Object Gateway Run the site-container.yml playbook. Run the ceph orch apply rgw commands to add Ceph Object Gateway. Table A.2. Day two operations Description Ceph-Ansible Cephadm Removing hosts Use the Ansible inventory. Run ceph orch host rm HOST_NAME to remove the hosts. Removing monitors Run the shrink-mon.yml playbook. Run ceph orch apply mon to redeploy other monitors. Removing managers Run the shrink-mon.yml playbook. Run ceph orch apply mgr to redeploy other managers. Removing OSDs Run the shrink-osd.yml playbook. Run ceph orch osd rm OSD_ID to remove the OSDs. Removing MDS Run the shrink-mds.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Exporting Ceph File System over NFS Protocol. Not supported on Red Hat Ceph Storage 4. Run ceph nfs export create command. Deployment of Ceph Object Gateway Run the site-container.yml playbook. Run ceph orch apply rgw SERVICE_NAME to deploy Ceph Object Gateway service. Removing Ceph Object Gateway Run the shrink-rgw.yml playbook. Run ceph orch rm SERVICE_NAME to remove the specific service. Deployment of iSCSI gateways Run the site-container.yml playbook. Run ceph orch apply iscsi to deploy iSCSI gateway. Block device mirroring Run the site-container.yml playbook. Run ceph orch apply rbd-mirror command. Minor version upgrade of Red Hat Ceph Storage Run the infrastructure-playbooks/rolling_update.yml playbook. Run ceph orch upgrade start command. Upgrading from Red Hat Ceph Storage 4 to Red Hat Ceph Storage 5 Run infrastructure-playbooks/rolling_update.yml playbook. Upgrade using Cephadm is not supported. Deployment of monitoring stack Edit the all.yml file during installation. Run the ceph orch apply -i FILE .yml after specifying the services. Additional Resources For more details on using the Ceph Orchestrator, see the Red Hat Ceph Storage Operations Guide . | null | https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/installation_guide/comparison-between-ceph-ansible-and-cephadm_install |
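A brief, hedged sketch of the Cephadm day-one commands named in the table above; the monitor IP address and host name are placeholders.
cephadm bootstrap --mon-ip 192.0.2.10          # bootstrap the cluster on the admin node
ceph orch host add host02                      # add a host to the cluster
ceph orch apply osd --all-available-devices    # create OSDs on all available devices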
Chapter 19. System and Subscription Management | Chapter 19. System and Subscription Management cockpit rebased to version 173 The cockpit packages, which provide the Cockpit browser-based administration console, have been upgraded to version 173. This version provides a number of bug fixes and enhancements. Notable changes include: The menu and navigation can now work with mobile browsers. Cockpit now supports alternate Kerberos keytabs for Cockpit's web server, which enables configuration of Single Sign-On (SSO). Automatic setup of Kerberos keytab for Cockpit web server. Automatic configuration of SSO with FreeIPA for Cockpit is possible. Cockpit requests FreeIPA SSL certificate for Cockpit's web server. Cockpit shows available package updates and missing registrations on system front page. A Firewall interface has been added. The flow control to avoid user interface hangs and unbounded memory usage for big file downloads has been added. Terminal issues in Chrome have been fixed. Cockpit now properly localizes numbers, times, and dates. Subscriptions page hang when accessing as a non-administrator user has been fixed. Log in is now localized properly. The check for root privilege availability has been improved to work for FreeIPA administrators as well. (BZ# 1568728 , BZ# 1495543 , BZ# 1442540 , BZ#1541454, BZ#1574630) reposync now by default skips packages whose location falls outside the destination directory Previously, the reposync command did not sanitize paths to packages specified in a remote repository, which was insecure. A security fix for CVE-2018-10897 has changed the default behavior of reposync to not store any packages outside the specified destination directory. To restore the original insecure behavior, use the new --allow-path-traversal option. (BZ#1609302, BZ#1600618) The yum clean all command now prints a disk usage summary When using the yum clean all command, the following hint was always displayed: With this update, the hint has been removed, and yum clean all now prints a disk usage summary for remaining repositories that were not affected by yum clean all (BZ# 1481220 ) The yum versionlock plug-in now displays which packages are blocked when running the yum update command Previously, the yum versionlock plug-in, which is used to lock RPM packages, did not display any information about packages excluded from the update. Consequently, users were not warned that such packages will not be updated when running the yum update command. With this update, yum versionlock has been changed. The plug-in now prints a message about how many package updates are being excluded. In addition, the new status subcommand has been added to the plug-in. The yum versionlock status command prints the list of available package updates blocked by the plug-in. (BZ# 1497351 ) The repotrack command now supports the --repofrompath option The --repofrompath option , which is already supported by the repoquery and repoclosure commands, has been added to the repotrack command. As a result, non-root users can now add custom repositories to track without escalating their privileges. (BZ# 1506205 ) Subscription manager now respects proxy_port settings from rhsm.conf Previously, subscription manager did not respect changes to the default proxy_port configuration from the /etc/rhsm/rhsm.conf file. Consequently, the default value of 3128 was used even after the user had changed the value of proxy_port . 
With this update, the underlying source code has been fixed, and subscription manager now respects changes to the default proxy_port configuration. However, making any change to the proxy_port value in /etc/rhsm/rhsm.conf requires an SELinux policy change. To avoid SELinux denials when changing the default proxy_port , run this command for the benefit of the rhsmcertd daemon process: (BZ# 1576423 ) New package: sos-collector sos-collector is a utility that gathers sosreports from multi-node environments. sos-collector facilitates data collection for support cases and it can be run from either a node or from an administrator's local workstation that has network access to the environment. (BZ#1481861) | [
"Maybe you want: rm -rf /var/cache/yum",
"semanage port -a -t squid_port_t -p tcp <new_proxy_port>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_system_and_subscription_management |
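As a rough illustration of the versionlock behavior described above, the following shell sketch locks an example package and then uses the new status subcommand; the package name httpd is only a placeholder for any installed package, and the output varies by system:

# Install the versionlock plug-in and lock the currently installed version
# of an example package (httpd is a placeholder).
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock add httpd

# List the locks that are in place.
sudo yum versionlock list

# New in this update: print the available package updates that the locks block.
sudo yum versionlock status

# During an update, the plug-in now reports how many package updates it excludes.
sudo yum update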
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_jlink_to_customize_java_runtime_environment/making-open-source-more-inclusive |
Chapter 3. Preparing for director installation | Chapter 3. Preparing for director installation To install and configure director, you must complete some preparation tasks to ensure you have registered the undercloud to the Red Hat Customer Portal or a Red Hat Satellite server, you have installed the director packages, and you have configured a container image source for the director to pull container images during installation. 3.1. Preparing the undercloud Before you can install director, you must complete some basic configuration on the host machine. Procedure Log in to your undercloud as the root user. Create the stack user: Set a password for the user: Disable password requirements when using sudo : Switch to the new stack user: Create directories for system images and heat templates: Director uses system images and heat templates to create the overcloud environment. Red Hat recommends creating these directories to help you organize your local file system. Check the base and full hostname of the undercloud: If either of the commands do not report the correct fully-qualified hostname or report an error, use hostnamectl to set a hostname: If you are not using a DNS server that can resolve the fully qualified domain name (FQDN) of the undercloud host, edit the /etc/hosts and include an entry for the system hostname. The IP address in /etc/hosts must match the address that you plan to use for your undercloud public API. For example, if the system uses undercloud.example.com as the FQDN and uses 10.0.0.1 for its IP address, add the following line to the /etc/hosts file: If you plan for the Red Hat OpenStack Platform director to be on a separate domain than the overcloud or its identity provider, then you must add the additional domains to /etc/resolv.conf: Important You must enable the DNS domain for ports extension ( dns_domain_ports ) for DNS to internally resolve names for ports in your RHOSP environment. Using the NeutronDnsDomain default value, openstacklocal , means that the Networking service does not internally resolve port names for DNS. For more information, see Specifying the name that DNS assigns to ports in Configuring Red Hat OpenStack Platform networking . 3.2. Registering the undercloud and attaching subscriptions Before you can install director, you must run subscription-manager to register the undercloud and attach a valid Red Hat OpenStack Platform subscription. Procedure Log in to your undercloud as the stack user. Register your system either with the Red Hat Content Delivery Network or with a Red Hat Satellite. For example, run the following command to register the system to the Content Delivery Network. Enter your Customer Portal user name and password when prompted: Find the entitlement pool ID for Red Hat OpenStack Platform (RHOSP) director: Locate the Pool ID value and attach the Red Hat OpenStack Platform 17.1 entitlement: Lock the undercloud to Red Hat Enterprise Linux 9.2: 3.3. Enabling repositories for the undercloud Enable the repositories that are required for the undercloud, and update the system packages to the latest versions. Procedure Log in to your undercloud as the stack user. Disable all default repositories, and enable the required Red Hat Enterprise Linux (RHEL) repositories: These repositories contain packages that the director installation requires. Perform an update on your system to ensure that you have the latest base system packages: Install the command line tools for director installation and configuration: 3.4. 
Preparing container images The undercloud installation requires an environment file to determine where to obtain container images and how to store them. Generate and customize the environment file that you can use to prepare your container images. Note If you need to configure specific container image versions for your undercloud, you must pin the images to a specific version. For more information, see Pinning container images for the undercloud . Procedure Log in to the undercloud host as the stack user. Generate the default container image preparation file: This command includes the following additional options: --local-push-destination sets the registry on the undercloud as the location for container images. This means that director pulls the necessary images from the Red Hat Container Catalog and pushes them to the registry on the undercloud. Director uses this registry as the container image source. To pull directly from the Red Hat Container Catalog, omit this option. --output-env-file is an environment file name. The contents of this file include the parameters for preparing your container images. In this case, the name of the file is containers-prepare-parameter.yaml . Note You can use the same containers-prepare-parameter.yaml file to define a container image source for both the undercloud and the overcloud. Modify the containers-prepare-parameter.yaml to suit your requirements. For more information about container image parameters, see Container image preparation parameters . 3.5. Obtaining container images from private registries The registry.redhat.io registry requires authentication to access and pull images. To authenticate with registry.redhat.io and other private registries, include the ContainerImageRegistryCredentials and ContainerImageRegistryLogin parameters in your containers-prepare-parameter.yaml file. ContainerImageRegistryCredentials Some container image registries require authentication to access images. In this situation, use the ContainerImageRegistryCredentials parameter in your containers-prepare-parameter.yaml environment file. The ContainerImageRegistryCredentials parameter uses a set of keys based on the private registry URL. Each private registry URL uses its own key and value pair to define the username (key) and password (value). This provides a method to specify credentials for multiple private registries. In the example, replace my_username and my_password with your authentication credentials. Instead of using your individual user credentials, Red Hat recommends creating a registry service account and using those credentials to access registry.redhat.io content. To specify authentication details for multiple registries, set multiple key-pair values for each registry in ContainerImageRegistryCredentials : Important The default ContainerImagePrepare parameter pulls container images from registry.redhat.io , which requires authentication. For more information, see Red Hat Container Registry Authentication . ContainerImageRegistryLogin The ContainerImageRegistryLogin parameter is used to control whether an overcloud node system needs to log in to the remote registry to fetch the container images. This situation occurs when you want the overcloud nodes to pull images directly, rather than use the undercloud to host images. You must set ContainerImageRegistryLogin to true if push_destination is set to false or not used for a given strategy. 
However, if the overcloud nodes do not have network connectivity to the registry hosts defined in ContainerImageRegistryCredentials and you set ContainerImageRegistryLogin to true , the deployment might fail when trying to perform a login. If the overcloud nodes do not have network connectivity to the registry hosts defined in the ContainerImageRegistryCredentials , set push_destination to true and ContainerImageRegistryLogin to false so that the overcloud nodes pull images from the undercloud. | [
"useradd stack",
"passwd stack",
"echo \"stack ALL=(root) NOPASSWD:ALL\" | tee -a /etc/sudoers.d/stack chmod 0440 /etc/sudoers.d/stack",
"su - stack [stack@director ~]USD",
"[stack@director ~]USD mkdir ~/images [stack@director ~]USD mkdir ~/templates",
"[stack@director ~]USD hostname [stack@director ~]USD hostname -f",
"[stack@director ~]USD sudo hostnamectl set-hostname undercloud.example.com",
"10.0.0.1 undercloud.example.com undercloud",
"search overcloud.com idp.overcloud.com",
"[stack@director ~]USD sudo subscription-manager register",
"[stack@director ~]USD sudo subscription-manager list --available --all --matches=\"Red Hat OpenStack\" Subscription Name: Name of SKU Provides: Red Hat Single Sign-On Red Hat Enterprise Linux Workstation Red Hat CloudForms Red Hat OpenStack Red Hat Software Collections (for RHEL Workstation) SKU: SKU-Number Contract: Contract-Number Pool ID: Valid-Pool-Number-123456 Provides Management: Yes Available: 1 Suggested: 1 Service Level: Support-level Service Type: Service-Type Subscription Type: Sub-type Ends: End-date System Type: Physical",
"[stack@director ~]USD sudo subscription-manager attach --pool=Valid-Pool-Number-123456",
"sudo subscription-manager release --set=9.2",
"[stack@director ~]USD sudo subscription-manager repos --disable=* [stack@director ~]USD sudo subscription-manager repos --enable=rhel-9-for-x86_64-baseos-eus-rpms --enable=rhel-9-for-x86_64-appstream-eus-rpms --enable=rhel-9-for-x86_64-highavailability-eus-rpms --enable=openstack-17.1-for-rhel-9-x86_64-rpms --enable=fast-datapath-for-rhel-9-x86_64-rpms",
"[stack@director ~]USD sudo dnf update -y [stack@director ~]USD sudo reboot",
"[stack@director ~]USD sudo dnf install -y python3-tripleoclient",
"openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: my_username: my_password",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ - push_destination: true set: namespace: registry.internalsite.com/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' registry.internalsite.com: myuser2: '0th3rp@55w0rd!' '192.0.2.1:8787': myuser3: '@n0th3rp@55w0rd!'",
"parameter_defaults: ContainerImagePrepare: - push_destination: false set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: true",
"parameter_defaults: ContainerImagePrepare: - push_destination: true set: namespace: registry.redhat.io/ ContainerImageRegistryCredentials: registry.redhat.io: myuser: 'p@55w0rd!' ContainerImageRegistryLogin: false"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/installing_and_managing_red_hat_openstack_platform_with_director/assembly_preparing-for-director-installation |
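The following is an optional verification sketch, not part of the documented procedure: after registering the undercloud, locking the release, and enabling the repositories as described above, commands such as these can confirm the result. The hostname undercloud.example.com is the example FQDN used in this chapter.

# Confirm the release lock and the enabled repositories.
sudo subscription-manager release --show
sudo subscription-manager repos --list-enabled

# Confirm that the undercloud FQDN resolves to the address added to /etc/hosts.
getent hosts undercloud.example.com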
2.8.9.2.6. Listing Options | 2.8.9.2.6. Listing Options The default list command, iptables -L [<chain-name>], provides a very basic overview of the default filter table's current chains. Additional options provide more information: -v - Displays verbose output, such as the number of packets and bytes each chain has processed, the number of packets and bytes each rule has matched, and which interfaces apply to a particular rule. -x - Expands numbers into their exact values. On a busy system, the number of packets and bytes processed by a particular chain or rule may be abbreviated to Kilobytes, Megabytes, or Gigabytes. This option forces the full number to be displayed. -n - Displays IP addresses and port numbers in numeric format, rather than the default hostname and network service format. --line-numbers - Lists rules in each chain next to their numeric order in the chain. This option is useful when attempting to delete a specific rule in a chain or to locate where to insert a rule within a chain. -t <table-name> - Specifies a table name. If omitted, defaults to the filter table. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Command_Options_for_IPTables-Listing_Options |
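A brief sketch combining the listing options described above; INPUT and nat are only example chain and table names:

# List the INPUT chain of the default filter table with verbose counters (-v),
# exact packet and byte counts (-x), numeric addresses and ports (-n),
# and rule positions (--line-numbers).
iptables -L INPUT -v -x -n --line-numbers

# List the nat table instead of the default filter table by using -t.
iptables -t nat -L -n --line-numbers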
Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage | Chapter 2. Deploy OpenShift Data Foundation using Red Hat Ceph storage Red Hat OpenShift Data Foundation can make services from an external Red Hat Ceph Storage cluster available for consumption through OpenShift Container Platform clusters. You need to install the OpenShift Data Foundation operator and then create an OpenShift Data Foundation cluster for the external Ceph storage system. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Creating an OpenShift Data Foundation Cluster for external Ceph storage system You need to create a new OpenShift Data Foundation cluster after you install OpenShift Data Foundation operator on OpenShift Container Platform deployed on VMware vSphere or user-provisioned bare metal infrastructures. Prerequisites A valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Ensure the OpenShift Container Platform version is 4.15 or above before deploying OpenShift Data Foundation 4.15. OpenShift Data Foundation operator must be installed. For more information, see Installing OpenShift Data Foundation Operator using the Operator Hub . 
To check the supportability and interoperability of Red Hat Ceph Storage (RHCS) with Red Hat OpenShift Data Foundation in external mode, go to the lab Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . Select Service Type as ODF as Self-Managed Service . Select appropriate Version from the drop down. On the Versions tab, click the Supported RHCS versions in the External Mode tab. If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS PVC creation in external mode. For more details, see Troubleshooting CephFS PVC creation in external mode . Red Hat Ceph Storage must have Ceph Dashboard installed and configured. For more information, see Ceph Dashboard installation and access . It is recommended that the external Red Hat Ceph Storage cluster has the PG Autoscaler enabled. For more information, see The placement group autoscaler section in the Red Hat Ceph Storage documentation. The external Ceph cluster should have an existing RBD pool pre-configured for use. If it does not exist, contact your Red Hat Ceph Storage administrator to create one before you move ahead with OpenShift Data Foundation deployment. Red Hat recommends to use a separate pool for each OpenShift Data Foundation cluster. Optional: If there is a zonegroup created apart from the default zonegroup, you need to add the hostname, rook-ceph-rgw-ocs-external-storagecluster-cephobjectstore.openshift-storage.svc to the zonegroup as OpenShift Data Foundation sends S3 requests to the RADOS Object Gateways (RGWs) with this hostname. For more information, see the Red Hat Knowledgebase solution Ceph - How to add hostnames in RGW zonegroup? . Procedure Click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation and then click Create StorageSystem . In the Backing storage page, select the following options: Select Full deployment for the Deployment type option. Select Connect an external storage platform from the available options. Select Red Hat Ceph Storage for Storage platform . Click . In the Connection details page, provide the necessary information: Click on the Download Script link to download the python script for extracting Ceph cluster details. For extracting the Red Hat Ceph Storage (RHCS) cluster details, contact the RHCS administrator to run the downloaded python script on a Red Hat Ceph Storage node with the admin key . Run the following command on the RHCS node to view the list of available arguments: Important Use python instead of python3 if the Red Hat Ceph Storage 4.x cluster is deployed on Red Hat Enterprise Linux 7.x (RHEL 7.x) cluster. You can also run the script from inside a MON container (containerized deployment) or from a MON node (RPM deployment). Note Use the yum install cephadm command and then the cephadm command to deploy your RHCS cluster using containers. You must pull the RHCS container images using the cephadm command, rather than using yum for installing the Ceph packages onto nodes. For more information, see RHCS product documentation . To retrieve the external cluster details from the RHCS cluster, run the following command: For example: In this example, rbd-data-pool-name A mandatory parameter that is used for providing block storage in OpenShift Data Foundation. 
rgw-endpoint (Optional) This parameter is required only if the object storage is to be provisioned through Ceph Rados Gateway for OpenShift Data Foundation. Provide the endpoint in the following format: <ip_address>:<port> Note A fully-qualified domain name (FQDN) is also supported in the format <FQDN>:<PORT> . monitoring-endpoint (Optional) This parameter accepts comma-separated list of IP addresses of active and standby mgrs reachable from the OpenShift Container Platform cluster. If not provided, the value is automatically populated. monitoring-endpoint-port (Optional) It is the port associated with the ceph-mgr Prometheus exporter specified by --monitoring-endpoint . If not provided, the value is automatically populated. run-as-user (Optional) This parameter is used for providing name for the Ceph user which is created by the script. If this parameter is not specified, a default user name client.healthchecker is created. The permissions for the new user is set as: caps: [mgr] allow command config caps: [mon] allow r, allow command quorum_status, allow command version caps: [osd] allow rwx pool= RGW_POOL_PREFIX.rgw.meta , allow r pool= .rgw.root , allow rw pool= RGW_POOL_PREFIX.rgw.control , allow rx pool= RGW_POOL_PREFIX.rgw.log , allow x pool= RGW_POOL_PREFIX.rgw.buckets.index Additional flags: rgw-pool-prefix (Optional) The prefix of the RGW pools. If not specified, the default prefix is default . rgw-tls-cert-path (Optional) The file path of the RADOS Gateway endpoint TLS certificate. rgw-skip-tls (Optional) This parameter ignores the TLS certification validation when a self-signed certificate is provided (NOT RECOMMENDED). ceph-conf (Optional) The name of the Ceph configuration file. cluster-name (Optional) The Ceph cluster name. output (Optional) The file where the output is required to be stored. cephfs-metadata-pool-name (Optional) The name of the CephFS metadata pool. cephfs-data-pool-name (Optional) The name of the CephFS data pool. cephfs-filesystem-name (Optional) The name of the CephFS filesystem. rbd-metadata-ec-pool-name (Optional) The name of the erasure coded RBD metadata pool. dry-run (Optional) This parameter helps to print the executed commands without running them. restricted-auth-permission (Optional) This parameter restricts cephCSIKeyrings auth permissions to specific pools and clusters. Mandatory flags that need to be set with this are rbd-data-pool-name and cluster-name . You can also pass the cephfs-filesystem-name flag if there is CephFS user restriction so that permission is restricted to a particular CephFS filesystem. Note This parameter must be applied only for the new deployments. To restrict csi-users per pool and per cluster, you need to create new csi-users and new secrets for those csi-users . Example with restricted auth permission: Example of JSON output generated using the python script: Save the JSON output to a file with .json extension Note For OpenShift Data Foundation to work seamlessly, ensure that the parameters (RGW endpoint, CephFS details, RBD pool, and so on) to be uploaded using the JSON file remain unchanged on the RHCS external cluster after the storage cluster creation. Run the command when there is a multi-tenant deployment in which the RHCS cluster is already connected to OpenShift Data Foundation deployment with a lower version. Click Browse to select and upload the JSON file. The content of the JSON file is populated and displayed in the text box. Click The button is enabled only after you upload the .json file. 
In the Review and create page, review if all the details are correct: To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-external-storagecluster-storagesystem Resources . Verify that Status of StorageCluster is Ready and has a green tick. To verify that OpenShift Data Foundation, pods and StorageClass are successfully installed, see Verifying your external mode OpenShift Data Foundation installation for external Ceph storage system . 2.3. Verifying your OpenShift Data Foundation installation for external Ceph storage system Use this section to verify that OpenShift Data Foundation is deployed correctly. 2.3.1. Verifying the state of the pods Click Workloads Pods from the left pane of the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see Table 2.1, "Pods corresponding to OpenShift Data Foundation components" Verify that the following pods are in running state: Table 2.1. Pods corresponding to OpenShift Data Foundation components Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any worker node) ocs-metrics-exporter-* (1 pod on any worker node) odf-operator-controller-manager-* (1 pod on any worker node) odf-console-* (1 pod on any worker node) csi-addons-controller-manager-* (1 pod on any worker node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any worker node) Multicloud Object Gateway noobaa-operator-* (1 pod on any worker node) noobaa-core-* (1 pod on any worker node) noobaa-db-pg-* (1 pod on any worker node) noobaa-endpoint-* (1 pod on any worker node) CSI cephfs csi-cephfsplugin-* (1 pod on each worker node) csi-cephfsplugin-provisioner-* (2 pods distributed across worker nodes) Note If an MDS is not deployed in the external cluster, the csi-cephfsplugin pods will not be created. rbd csi-rbdplugin-* (1 pod on each worker node) csi-rbdplugin-provisioner-* (2 pods distributed across worker nodes) 2.3.2. Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.3.3. Verifying that the Multicloud Object Gateway is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the Multicloud Object Gateway (MCG) information is displayed. 
Note The RADOS Object Gateway is only listed in case RADOS Object Gateway endpoint details are included while deploying OpenShift Data Foundation in external mode. For more information on the health of OpenShift Data Foundation cluster using the object dashboard, see Monitoring OpenShift Data Foundation . 2.3.4. Verifying that the storage classes are created and listed Click Storage Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: ocs-external-storagecluster-ceph-rbd ocs-external-storagecluster-ceph-rgw ocs-external-storagecluster-cephfs openshift-storage.noobaa.io Note If an MDS is not deployed in the external cluster, ocs-external-storagecluster-cephfs storage class will not be created. If RGW is not deployed in the external cluster, the ocs-external-storagecluster-ceph-rgw storage class will not be created. For more information regarding MDS and RGW, see Red Hat Ceph Storage documentation 2.3.5. Verifying that Ceph cluster is connected Run the following command to verify if the OpenShift Data Foundation cluster is connected to the external Red Hat Ceph Storage cluster. 2.3.6. Verifying that storage cluster is ready Run the following command to verify if the storage cluster is ready and the External option is set to true . | [
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"python3 ceph-external-cluster-details-exporter.py --help",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name <rbd block pool name> [optional arguments]",
"python3 ceph-external-cluster-details-exporter.py --rbd-data-pool-name ceph-rbd --monitoring-endpoint xxx.xxx.xxx.xxx --monitoring-endpoint-port xxxx --rgw-endpoint xxx.xxx.xxx.xxx:xxxx --run-as-user client.ocs",
"python3 /etc/ceph/create-external-cluster-resources.py --cephfs-filesystem-name myfs --rbd-data-pool-name replicapool --cluster-name rookStorage --restricted-auth-permission true",
"[{\"name\": \"rook-ceph-mon-endpoints\", \"kind\": \"ConfigMap\", \"data\": {\"data\": \"xxx.xxx.xxx.xxx:xxxx\", \"maxMonId\": \"0\", \"mapping\": \"{}\"}}, {\"name\": \"rook-ceph-mon\", \"kind\": \"Secret\", \"data\": {\"admin-secret\": \"admin-secret\", \"fsid\": \"<fs-id>\", \"mon-secret\": \"mon-secret\"}}, {\"name\": \"rook-ceph-operator-creds\", \"kind\": \"Secret\", \"data\": {\"userID\": \"<user-id>\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-node\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-node\", \"userKey\": \"<user-key>\"}}, {\"name\": \"ceph-rbd\", \"kind\": \"StorageClass\", \"data\": {\"pool\": \"<pool>\"}}, {\"name\": \"monitoring-endpoint\", \"kind\": \"CephCluster\", \"data\": {\"MonitoringEndpoint\": \"xxx.xxx.xxx.xxx\", \"MonitoringPort\": \"xxxx\"}}, {\"name\": \"rook-ceph-dashboard-link\", \"kind\": \"Secret\", \"data\": {\"userID\": \"ceph-dashboard-link\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-rbd-provisioner\", \"kind\": \"Secret\", \"data\": {\"userID\": \"csi-rbd-provisioner\", \"userKey\": \"<user-key>\"}}, {\"name\": \"rook-csi-cephfs-provisioner\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-provisioner\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"rook-csi-cephfs-node\", \"kind\": \"Secret\", \"data\": {\"adminID\": \"csi-cephfs-node\", \"adminKey\": \"<admin-key>\"}}, {\"name\": \"cephfs\", \"kind\": \"StorageClass\", \"data\": {\"fsName\": \"cephfs\", \"pool\": \"cephfs_data\"}}, {\"name\": \"ceph-rgw\", \"kind\": \"StorageClass\", \"data\": {\"endpoint\": \"xxx.xxx.xxx.xxx:xxxx\", \"poolPrefix\": \"default\"}}, {\"name\": \"rgw-admin-ops-user\", \"kind\": \"Secret\", \"data\": {\"accessKey\": \"<access-key>\", \"secretKey\": \"<secret-key>\"}}]",
"python3 ceph-external-cluster-details-exporter.py --upgrade",
"oc get cephcluster -n openshift-storage NAME DATADIRHOSTPATH MONCOUNT AGE PHASE MESSAGE HEALTH EXTERNAL ocs-external-storagecluster-cephcluster 30m Connected Cluster connected successfully HEALTH_OK true",
"oc get storagecluster -n openshift-storage NAME AGE PHASE EXTERNAL CREATED AT VERSION ocs-external-storagecluster 30m Ready true 2021-11-17T09:09:52Z 4.15.0"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_in_external_mode/deploy-openshift-data-foundation-using-red-hat-ceph-storage |
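In addition to the cephcluster and storagecluster checks above, a quick sketch of the pod and storage class verification steps might look like the following; it assumes the default openshift-storage namespace and the external-mode storage class names listed in this chapter:

# Check that the OpenShift Data Foundation pods are in the Running state.
oc get pods -n openshift-storage

# Confirm that the external-mode storage classes were created; the cephfs and
# ceph-rgw classes appear only when MDS and RGW are deployed in the external cluster.
oc get storageclass | grep ocs-external-storagecluster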
Chapter 6. Understanding OpenShift Container Platform development | Chapter 6. Understanding OpenShift Container Platform development To fully leverage the capability of containers when developing and running enterprise-quality applications, ensure your environment is supported by tools that allow containers to be: Created as discrete microservices that can be connected to other containerized, and non-containerized, services. For example, you might want to join your application with a database or attach a monitoring application to it. Resilient, so if a server crashes or needs to go down for maintenance or to be decommissioned, containers can start on another machine. Automated to pick up code changes automatically and then start and deploy new versions of themselves. Scaled up, or replicated, to have more instances serving clients as demand increases and then spun down to fewer instances as demand declines. Run in different ways, depending on the type of application. For example, one application might run once a month to produce a report and then exit. Another application might need to run constantly and be highly available to clients. Managed so you can watch the state of your application and react when something goes wrong. Containers' widespread acceptance, and the resulting requirements for tools and methods to make them enterprise-ready, resulted in many options for them. The rest of this section explains options for assets you can create when you build and deploy containerized Kubernetes applications in OpenShift Container Platform. It also describes which approaches you might use for different kinds of applications and development requirements. 6.1. About developing containerized applications You can approach application development with containers in many ways, and different approaches might be more appropriate for different situations. To illustrate some of this variety, the series of approaches that is presented starts with developing a single container and ultimately deploys that container as a mission-critical application for a large enterprise. These approaches show different tools, formats, and methods that you can employ with containerized application development. This topic describes: Building a simple container and storing it in a registry Creating a Kubernetes manifest and saving it to a Git repository Making an Operator to share your application with others 6.2. Building a simple container You have an idea for an application and you want to containerize it. First, you require a tool for building a container, like buildah or docker, and a file that describes what goes in your container, which is typically a Dockerfile. Next, you require a location to push the resulting container image so you can pull it to run anywhere you want it to run. This location is a container registry. Some examples of each of these components are installed by default on most Linux operating systems, except for the Dockerfile, which you provide yourself. The following diagram displays the process of building and pushing an image: Figure 6.1. Create a simple containerized application and push it to a registry If you use a computer that runs Red Hat Enterprise Linux (RHEL) as the operating system, the process of creating a containerized application requires the following steps: Install container build tools: RHEL contains a set of tools that includes podman, buildah, and skopeo that you use to build and manage containers.
Create a Dockerfile to combine base image and software: Information about building your container goes into a file that is named Dockerfile . In that file, you identify the base image you build from, the software packages you install, and the software you copy into the container. You also identify parameter values like network ports that you expose outside the container and volumes that you mount inside the container. Put your Dockerfile and the software you want to containerize in a directory on your RHEL system. Run buildah or docker build: Run the buildah build-using-dockerfile or the docker build command to pull your chosen base image to the local system and create a container image that is stored locally. You can also build container images without a Dockerfile by using buildah. Tag and push to a registry: Add a tag to your new container image that identifies the location of the registry in which you want to store and share your container. Then push that image to the registry by running the podman push or docker push command. Pull and run the image: From any system that has a container client tool, such as podman or docker, run a command that identifies your new image. For example, run the podman run <image_name> or docker run <image_name> command. Here <image_name> is the name of your new container image, which resembles quay.io/myrepo/myapp:latest . The registry might require credentials to push and pull images. For more details on the process of building container images, pushing them to registries, and running them, see Custom image builds with Buildah . 6.2.1. Container build tool options Building and managing containers with buildah, podman, and skopeo results in industry standard container images that include features specifically tuned for deploying containers in OpenShift Container Platform or other Kubernetes environments. These tools are daemonless and can run without root privileges, requiring less overhead to run them. Important Support for Docker Container Engine as a container runtime is deprecated in Kubernetes 1.20 and will be removed in a future release. However, Docker-produced images will continue to work in your cluster with all runtimes, including CRI-O. For more information, see the Kubernetes blog announcement . When you ultimately run your containers in OpenShift Container Platform, you use the CRI-O container engine. CRI-O runs on every worker and control plane machine in an OpenShift Container Platform cluster, but CRI-O is not yet supported as a standalone runtime outside of OpenShift Container Platform. 6.2.2. Base image options The base image you choose to build your application on contains a set of software that resembles a Linux system to your application. When you build your own image, your software is placed into that file system and sees that file system as though it were looking at its operating system. Choosing this base image has major impact on how secure, efficient and upgradeable your container is in the future. Red Hat provides a new set of base images referred to as Red Hat Universal Base Images (UBI). These images are based on Red Hat Enterprise Linux and are similar to base images that Red Hat has offered in the past, with one major difference: they are freely redistributable without a Red Hat subscription. As a result, you can build your application on UBI images without having to worry about how they are shared or the need to create different images for different environments. These UBI images have standard, init, and minimal versions. 
You can also use the Red Hat Software Collections images as a foundation for applications that rely on specific runtime environments such as Node.js, Perl, or Python. Special versions of some of these runtime base images are referred to as Source-to-Image (S2I) images. With S2I images, you can insert your code into a base image environment that is ready to run that code. S2I images are available for you to use directly from the OpenShift Container Platform web UI by selecting Catalog Developer Catalog , as shown in the following figure: Figure 6.2. Choose S2I base images for apps that need specific runtimes 6.2.3. Registry options Container registries are where you store container images so you can share them with others and make them available to the platform where they ultimately run. You can select large, public container registries that offer free accounts or a premium version that offer more storage and special features. You can also install your own registry that can be exclusive to your organization or selectively shared with others. To get Red Hat images and certified partner images, you can draw from the Red Hat Registry. The Red Hat Registry is represented by two locations: registry.access.redhat.com , which is unauthenticated and deprecated, and registry.redhat.io , which requires authentication. You can learn about the Red Hat and partner images in the Red Hat Registry from the Container images section of the Red Hat Ecosystem Catalog . Besides listing Red Hat container images, it also shows extensive information about the contents and quality of those images, including health scores that are based on applied security updates. Large, public registries include Docker Hub and Quay.io . The Quay.io registry is owned and managed by Red Hat. Many of the components used in OpenShift Container Platform are stored in Quay.io, including container images and the Operators that are used to deploy OpenShift Container Platform itself. Quay.io also offers the means of storing other types of content, including Helm charts. If you want your own, private container registry, OpenShift Container Platform itself includes a private container registry that is installed with OpenShift Container Platform and runs on its cluster. Red Hat also offers a private version of the Quay.io registry called Red Hat Quay . Red Hat Quay includes geo replication, Git build triggers, Clair image scanning, and many other features. All of the registries mentioned here can require credentials to download images from those registries. Some of those credentials are presented on a cluster-wide basis from OpenShift Container Platform, while other credentials can be assigned to individuals. 6.3. Creating a Kubernetes manifest for OpenShift Container Platform While the container image is the basic building block for a containerized application, more information is required to manage and deploy that application in a Kubernetes environment such as OpenShift Container Platform. The typical steps after you create an image are to: Understand the different resources you work with in Kubernetes manifests Make some decisions about what kind of an application you are running Gather supporting components Create a manifest and store that manifest in a Git repository so you can store it in a source versioning system, audit it, track it, promote and deploy it to the environment, roll it back to earlier versions, if necessary, and share it with others 6.3.1. 
About Kubernetes pods and services While the container image is the basic unit with docker, the basic units that Kubernetes works with are called pods . Pods represent the next step in building out an application. A pod can contain one or more than one container. The key is that the pod is the single unit that you deploy, scale, and manage. Scalability and namespaces are probably the main items to consider when determining what goes in a pod. For ease of deployment, you might want to deploy a container in a pod and include its own logging and monitoring container in the pod. Later, when you run the pod and need to scale up an additional instance, those other containers are scaled up with it. For namespaces, containers in a pod share the same network interfaces, shared storage volumes, and resource limitations, such as memory and CPU, which makes it easier to manage the contents of the pod as a single unit. Containers in a pod can also communicate with each other by using standard inter-process communications, such as System V semaphores or POSIX shared memory. While individual pods represent a scalable unit in Kubernetes, a service provides a means of grouping together a set of pods to create a complete, stable application that can complete tasks such as load balancing. A service is also more permanent than a pod because the service remains available from the same IP address until you delete it. When the service is in use, it is requested by name and the OpenShift Container Platform cluster resolves that name into the IP addresses and ports where you can reach the pods that compose the service. By their nature, containerized applications are separated from the operating systems where they run and, by extension, their users. Part of your Kubernetes manifest describes how to expose the application to internal and external networks by defining network policies that allow fine-grained control over communication with your containerized applications. To connect incoming requests for HTTP, HTTPS, and other services from outside your cluster to services inside your cluster, you can use an Ingress resource. If your container requires on-disk storage instead of database storage, which might be provided through a service, you can add volumes to your manifests to make that storage available to your pods. You can configure the manifests to create persistent volumes (PVs) or dynamically create volumes that are added to your Pod definitions. After you define a group of pods that compose your application, you can define those pods in Deployment and DeploymentConfig objects. 6.3.2. Application types Next, consider how your application type influences how to run it. Kubernetes defines different types of workloads that are appropriate for different kinds of applications. To determine the appropriate workload for your application, consider if the application is: Meant to run to completion and be done. An example is an application that starts up to produce a report and exits when the report is complete. The application might not run again for a month. Suitable OpenShift Container Platform objects for these types of applications include Job and CronJob objects. Expected to run continuously. For long-running applications, you can write a deployment . Required to be highly available. If your application requires high availability, then you want to size your deployment to have more than one instance. A Deployment or DeploymentConfig object can incorporate a replica set for that type of application.
With replica sets, pods run across multiple nodes to make sure the application is always available, even if a worker goes down. Need to run on every node. Some types of Kubernetes applications are intended to run in the cluster itself on every master or worker node. DNS and monitoring applications are examples of applications that need to run continuously on every node. You can run this type of application as a daemon set . You can also run a daemon set on a subset of nodes, based on node labels. Require life-cycle management. When you want to hand off your application so that others can use it, consider creating an Operator . Operators let you build in intelligence, so the Operator can handle tasks like backups and upgrades automatically. Coupled with the Operator Lifecycle Manager (OLM), cluster managers can expose Operators to selected namespaces so that users in the cluster can run them. Have identity or numbering requirements. An application might have identity requirements or numbering requirements. For example, you might be required to run exactly three instances of the application and to name the instances 0 , 1 , and 2 . A stateful set is suitable for this application. Stateful sets are most useful for applications that require independent storage, such as databases and zookeeper clusters. 6.3.3. Available supporting components The application you write might need supporting components, like a database or a logging component. To fulfill that need, you might be able to obtain the required component from the following Catalogs that are available in the OpenShift Container Platform web console: OperatorHub, which is available in each OpenShift Container Platform 4.11 cluster. The OperatorHub makes Operators available from Red Hat, certified Red Hat partners, and community members to the cluster operator. The cluster operator can make those Operators available in all or selected namespaces in the cluster, so developers can launch them and configure them with their applications. Templates, which are useful for a one-off type of application, where the lifecycle of a component is not important after it is installed. A template provides an easy way to get started developing a Kubernetes application with minimal overhead. A template can be a list of resource definitions, which could be Deployment , Service , Route , or other objects. If you want to change names or resources, you can set these values as parameters in the template. You can configure the supporting Operators and templates to the specific needs of your development team and then make them available in the namespaces in which your developers work. Many people add shared templates to the openshift namespace because it is accessible from all other namespaces. 6.3.4. Applying the manifest Kubernetes manifests let you create a more complete picture of the components that make up your Kubernetes applications. You write these manifests as YAML files and deploy them by applying them to the cluster, for example, by running the oc apply command. 6.3.5. Next steps At this point, consider ways to automate your container development process. Ideally, you have some sort of CI pipeline that builds the images and pushes them to a registry. In particular, a GitOps pipeline integrates your container development with the Git repositories that you use to store the software that is required to build your applications. The workflow to this point might look like: Day 1: You write some YAML.
You then run the oc apply command to apply that YAML to the cluster and test that it works. Day 2: You put your YAML container configuration file into your own Git repository. From there, people who want to install that app, or help you improve it, can pull down the YAML and apply it to their cluster to run the app. Day 3: Consider writing an Operator for your application. 6.4. Develop for Operators Packaging and deploying your application as an Operator might be preferred if you make your application available for others to run. As noted earlier, Operators add a lifecycle component to your application that acknowledges that the job of running an application is not complete as soon as it is installed. When you create an application as an Operator, you can build in your own knowledge of how to run and maintain the application. You can build in features for upgrading the application, backing it up, scaling it, or keeping track of its state. If you configure the application correctly, maintenance tasks, like updating the Operator, can happen automatically and invisibly to the Operator's users. An example of a useful Operator is one that is set up to automatically back up data at particular times. Having an Operator manage an application's backup at set times can save a system administrator from remembering to do it. Any application maintenance that has traditionally been completed manually, like backing up data or rotating certificates, can be completed automatically with an Operator. | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/architecture/understanding-development |
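A minimal sketch of the build, push, run, and apply workflow described in this chapter; the image name quay.io/myrepo/myapp:latest is the example used above, and manifest.yaml is a hypothetical file name for your Kubernetes manifest:

# Build a container image from the Dockerfile in the current directory and tag it
# with the registry location where it will be stored.
podman build -t quay.io/myrepo/myapp:latest .

# Push the image to the registry; the registry might require podman login first.
podman push quay.io/myrepo/myapp:latest

# Pull and run the image from any system with a container client tool.
podman run quay.io/myrepo/myapp:latest

# Apply a Kubernetes manifest to the cluster, as described in "Applying the manifest".
oc apply -f manifest.yaml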
Chapter 1. Deploying VDO | Chapter 1. Deploying VDO As a system administrator, you can use VDO to create deduplicated and compressed storage pools. 1.1. Introduction to VDO Virtual Data Optimizer (VDO) provides inline data reduction for Linux in the form of deduplication, compression, and thin provisioning. When you set up a VDO volume, you specify a block device on which to construct your VDO volume and the amount of logical storage you plan to present. When hosting active VMs or containers, Red Hat recommends provisioning storage at a 10:1 logical to physical ratio: that is, if you are utilizing 1 TB of physical storage, you would present it as 10 TB of logical storage. For object storage, such as the type provided by Ceph, Red Hat recommends using a 3:1 logical to physical ratio: that is, 1 TB of physical storage would present as 3 TB logical storage. In either case, you can simply put a file system on top of the logical device presented by VDO and then use it directly or as part of a distributed cloud storage architecture. Because VDO is thinly provisioned, the file system and applications only see the logical space in use and are not aware of the actual physical space available. Use scripting to monitor the actual available space and generate an alert if use exceeds a threshold: for example, when the VDO volume is 80% full. Additional resources For more information about monitoring physical space, see Section 2.1, "Managing free space on VDO volumes" . 1.2. VDO deployment scenarios You can deploy VDO in a variety of ways to provide deduplicated storage for: both block and file access both local and remote storage Because VDO exposes its deduplicated storage as a standard Linux block device, you can use it with standard file systems, iSCSI and FC target drivers, or as unified storage. Note Deployment of VDO volumes on top of Ceph RADOS Block Device (RBD) is currently supported. However, the deployment of Red Hat Ceph Storage cluster components on top of VDO volumes is currently not supported. KVM You can deploy VDO on a KVM server configured with Direct Attached Storage. File systems You can create file systems on top of VDO and expose them to NFS or CIFS users with the NFS server or Samba. Placement of VDO on iSCSI You can export the entirety of the VDO storage target as an iSCSI target to remote iSCSI initiators. When creating a VDO volume on iSCSI, you can place the VDO volume above or below the iSCSI layer. Although there are many considerations to be made, some guidelines are provided here to help you select the method that best suits your environment. When placing the VDO volume on the iSCSI server (target) below the iSCSI layer: The VDO volume is transparent to the initiator, similar to other iSCSI LUNs. Hiding the thin provisioning and space savings from the client makes the appearance of the LUN easier to monitor and maintain. There is decreased network traffic because there are no VDO metadata reads or writes, and read verification for the dedupe advice does not occur across the network. The memory and CPU resources being used on the iSCSI target can result in better performance. For example, the ability to host an increased number of hypervisors because the volume reduction is happening on the iSCSI target. If the client implements encryption on the initiator and there is a VDO volume below the target, you will not realize any space savings. 
When placing the VDO volume on the iSCSI client (initiator) above the iSCSI layer: There is a potential for lower network traffic across the network in ASYNC mode if achieving high rates of space savings. You can directly view and control the space savings and monitor usage. If you want to encrypt the data, for example, using dm-crypt , you can implement VDO on top of the crypt and take advantage of space efficiency. LVM On more feature-rich systems, you can use LVM to provide multiple logical unit numbers (LUNs) that are all backed by the same deduplicated storage pool. In the following diagram, the VDO target is registered as a physical volume so that it can be managed by LVM. Multiple logical volumes ( LV1 to LV4 ) are created out of the deduplicated storage pool. In this way, VDO can support multiprotocol unified block or file access to the underlying deduplicated storage pool. Deduplicated unified storage design enables for multiple file systems to collectively use the same deduplication domain through the LVM tools. Also, file systems can take advantage of LVM snapshot, copy-on-write, and shrink or grow features, all on top of VDO. Encryption Device Mapper (DM) mechanisms such as DM Crypt are compatible with VDO. Encrypting VDO volumes helps ensure data security, and any file systems above VDO are still deduplicated. Important Applying the encryption layer above VDO results in little if any data deduplication. Encryption makes duplicate blocks different before VDO can deduplicate them. Always place the encryption layer below VDO. 1.3. Components of a VDO volume VDO uses a block device as a backing store, which can include an aggregation of physical storage consisting of one or more disks, partitions, or even flat files. When a storage management tool creates a VDO volume, VDO reserves volume space for the UDS index and VDO volume. The UDS index and the VDO volume interact together to provide deduplicated block storage. Figure 1.1. VDO disk organization The VDO solution consists of the following components: kvdo A kernel module that loads into the Linux Device Mapper layer provides a deduplicated, compressed, and thinly provisioned block storage volume. The kvdo module exposes a block device. You can access this block device directly for block storage or present it through a Linux file system, such as XFS or ext4. When kvdo receives a request to read a logical block of data from a VDO volume, it maps the requested logical block to the underlying physical block and then reads and returns the requested data. When kvdo receives a request to write a block of data to a VDO volume, it first checks whether the request is a DISCARD or TRIM request or whether the data is uniformly zero. If either of these conditions is true, kvdo updates its block map and acknowledges the request. Otherwise, VDO processes and optimizes the data. uds A kernel module that communicates with the Universal Deduplication Service (UDS) index on the volume and analyzes data for duplicates. For each new piece of data, UDS quickly determines if that piece is identical to any previously stored piece of data. If the index finds a match, the storage system can then internally reference the existing item to avoid storing the same information more than once. The UDS index runs inside the kernel as the uds kernel module. Command line tools For configuring and managing optimized storage. 1.4. 
The physical and logical size of a VDO volume VDO utilizes physical, available physical, and logical size in the following ways: Physical size This is the same size as the underlying block device. VDO uses this storage for: User data, which might be deduplicated and compressed VDO metadata, such as the UDS index Available physical size This is the portion of the physical size that VDO is able to use for user data. It is equivalent to the physical size minus the size of the metadata, minus the remainder after dividing the volume into slabs by the given slab size. Logical size This is the provisioned size that the VDO volume presents to applications. It is usually larger than the available physical size. If the --vdoLogicalSize option is not specified, the logical volume is provisioned at a 1:1 ratio to the physical volume. For example, if a VDO volume is put on top of a 20 GB block device, then 2.5 GB is reserved for the UDS index (if the default index size is used). The remaining 17.5 GB is provided for the VDO metadata and user data. As a result, the available storage to consume is not more than 17.5 GB, and can be less due to metadata that makes up the actual VDO volume. VDO currently supports any logical size up to 254 times the size of the physical volume with an absolute maximum logical size of 4 PB. Figure 1.2. VDO disk organization In this figure, the VDO deduplicated storage target sits completely on top of the block device, meaning the physical size of the VDO volume is the same size as the underlying block device. Additional resources For more information about how much storage VDO metadata requires on block devices of different sizes, see Section 1.6.4, "Examples of VDO requirements by physical size" . 1.5. Slab size in VDO The physical storage of the VDO volume is divided into a number of slabs. Each slab is a contiguous region of the physical space. All of the slabs for a given volume have the same size, which can be any power of 2 multiple of 128 MB up to 32 GB. The default slab size is 2 GB to facilitate evaluating VDO on smaller test systems. A single VDO volume can have up to 8192 slabs. Therefore, in the default configuration with 2 GB slabs, the maximum allowed physical storage is 16 TB. When using 32 GB slabs, the maximum allowed physical storage is 256 TB. VDO always reserves at least one entire slab for metadata, and therefore, the reserved slab cannot be used for storing user data. Slab size has no effect on the performance of the VDO volume. Table 1.1. Recommended VDO slab sizes by physical volume size Physical volume size Recommended slab size 10-99 GB 1 GB 100 GB - 1 TB 2 GB 2-256 TB 32 GB The minimal disk usage for a VDO volume using the default settings of a 2 GB slab size and 0.25 dense index requires approximately 4.7 GB. This provides slightly less than 2 GB of physical data to write at 0% deduplication or compression. Here, the minimal disk usage is the sum of the default slab size and dense index. You can control the slab size by providing the --vdosettings 'vdo_slab_size_mb= size-in-megabytes ' option to the lvcreate command. 1.6. VDO requirements VDO has certain requirements on its placement and your system resources. 1.6.1. VDO memory requirements Each VDO volume has two distinct memory requirements: The VDO module VDO requires a fixed 38 MB of RAM and several variable amounts: 1.15 MB of RAM for each 1 MB of configured block map cache size. The block map cache requires a minimum of 150 MB of RAM. 1.6 MB of RAM for each 1 TB of logical space.
268 MB of RAM for each 1 TB of physical storage managed by the volume. The UDS index The Universal Deduplication Service (UDS) requires a minimum of 250 MB of RAM, which is also the default amount that deduplication uses. You can configure the value when formatting a VDO volume, because the value also affects the amount of storage that the index needs. The memory required for the UDS index is determined by the index type and the required size of the deduplication window. The deduplication window is the amount of previously written data that VDO can check for matching blocks. Index type Deduplication window Dense 1 TB per 1 GB of RAM Sparse 10 TB per 1 GB of RAM Note The minimal disk usage for a VDO volume using default settings of 2 GB slab size and 0.25 dense index, requires approx 4.7 GB. This provides slightly less than 2 GB of physical data to write at 0% deduplication or compression. Here, the minimal disk usage is the sum of the default slab size and dense index. Additional resources Examples of VDO requirements by physical size 1.6.2. VDO storage space requirements You can configure a VDO volume to use up to 256 TB of physical storage. Only a certain part of the physical storage is usable to store data. VDO requires storage for two types of VDO metadata and for the UDS index. Use the following calculations to determine the usable size of a VDO-managed volume: The first type of VDO metadata uses approximately 1 MB for each 4 GB of physical storage plus an additional 1 MB per slab. The second type of VDO metadata consumes approximately 1.25 MB for each 1 GB of logical storage , rounded up to the nearest slab. The amount of storage required for the UDS index depends on the type of index and the amount of RAM allocated to the index. For each 1 GB of RAM, a dense UDS index uses 17 GB of storage, and a sparse UDS index will use 170 GB of storage. Additional resources Section 1.6.4, "Examples of VDO requirements by physical size" Section 1.5, "Slab size in VDO" 1.6.3. Placement of VDO in the storage stack Place storage layers either above, or under the Virtual Data Optimizer (VDO), to fit the placement requirements. A VDO volume is a thin-provisioned block device. You can prevent running out of physical space by placing the volume above a storage layer that you can expand at a later time. Examples of such expandable storage are Logical Volume Manager (LVM) volumes, or Multiple Device Redundant Array Inexpensive or Independent Disks (MD RAID) arrays. You can place thick provisioned layers above VDO. There are two aspects of thick provisioned layers that you must consider: Writing new data to unused logical space on a thick device. When using VDO, or other thin-provisioned storage, the device can report that it is out of space during this kind of write. Overwriting used logical space on a thick device with new data. When using VDO, overwriting data can also result in a report of the device being out of space. These limitations affect all layers above the VDO layer. If you do not monitor the VDO device, you can unexpectedly run out of physical space on the thick-provisioned volumes above VDO. See the following examples of supported and unsupported VDO volume configurations. Figure 1.3. Supported VDO volume configurations Figure 1.4. Unsupported VDO volume configurations Additional resources For more information about stacking VDO with LVM layers, see the Stacking LVM volumes article. 1.6.4. 
Examples of VDO requirements by physical size The following tables provide approximate system requirements of VDO based on the physical size of the underlying volume. Each table lists requirements appropriate to the intended deployment, such as primary storage or backup storage. The exact numbers depend on your configuration of the VDO volume. Primary storage deployment In the primary storage case, the UDS index is between 0.01% to 25% the size of the physical size. Table 1.2. Examples of storage and memory configurations for primary storage Physical size RAM usage: UDS RAM usage: VDO Disk usage Index type 1 TB 250 MB 472 MB 2.5 GB Dense 10 TB 1 GB 3 GB 10 GB Dense 250 MB 22 GB Sparse 50 TB 1 GB 14 GB 85 GB Sparse 100 TB 3 GB 27 GB 255 GB Sparse 256 TB 5 GB 69 GB 425 GB Sparse Backup storage deployment In the backup storage case, the deduplication window must be larger than the backup set. If you expect the backup set or the physical size to grow in the future, factor this into the index size. Table 1.3. Examples of storage and memory configurations for backup storage Deduplication window RAM usage: UDS Disk usage Index type 1 TB 250 MB 2.5 GB Dense 10 TB 2 GB 21 GB Dense 50 TB 2 GB 170 GB Sparse 100 TB 4 GB 340 GB Sparse 256 TB 8 GB 700 GB Sparse 1.7. Installing VDO You can install the VDO software necessary to create, mount, and manage VDO volumes. Procedure Install the VDO software: 1.8. Creating a VDO volume This procedure creates a VDO volume on a block device. Prerequisites Install the VDO software. See Section 1.7, "Installing VDO" . Use expandable storage as the backing block device. For more information, see Section 1.6.3, "Placement of VDO in the storage stack" . Procedure In all the following steps, replace vdo-name with the identifier you want to use for your VDO volume; for example, vdo1 . You must use a different name and device for each instance of VDO on the system. Find a persistent name for the block device where you want to create the VDO volume. For more information about persistent names, see Chapter 6, Overview of persistent naming attributes . If you use a non-persistent device name, then VDO might fail to start properly in the future if the device name changes. Create the VDO volume: Replace block-device with the persistent name of the block device where you want to create the VDO volume. For example, /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f . Replace logical-size with the amount of logical storage that the VDO volume should present: For active VMs or container storage, use logical size that is ten times the physical size of your block device. For example, if your block device is 1TB in size, use 10T here. For object storage, use logical size that is three times the physical size of your block device. For example, if your block device is 1TB in size, use 3T here. If the physical block device is larger than 16TiB, add the --vdoSlabSize=32G option to increase the slab size on the volume to 32GiB. Using the default slab size of 2GiB on block devices larger than 16TiB results in the vdo create command failing with the following error: Example 1.1. Creating VDO for container storage For example, to create a VDO volume for container storage on a 1TB block device, you might use: Important If a failure occurs when creating the VDO volume, remove the volume to clean up. See Section 2.10.2, "Removing an unsuccessfully created VDO volume" for details. 
Create a file system on top of the VDO volume: For the XFS file system: For the ext4 file system: Note The purpose of the -K and -E nodiscard options on a freshly created VDO volume is to not spend time sending requests, as it has no effect on an un-allocated block. A fresh VDO volume starts out 100% un-allocated. Use the following command to wait for the system to register the new device node: steps Mount the file system. See Section 1.9, "Mounting a VDO volume" for details. Enable the discard feature for the file system on your VDO device. See Section 1.10, "Enabling periodic block discard" for details. Additional resources vdo(8) man page on your system 1.9. Mounting a VDO volume This procedure mounts a file system on a VDO volume, either manually or persistently. Prerequisites A VDO volume has been created on your system. For instructions, see Section 1.8, "Creating a VDO volume" . Procedure To mount the file system on the VDO volume manually, use: To configure the file system to mount automatically at boot, add a line to the /etc/fstab file: For the XFS file system: For the ext4 file system: If the VDO volume is located on a block device that requires network, such as iSCSI, add the _netdev mount option. Additional resources vdo(8) man page on your system For iSCSI and other block devices requiring network, see the systemd.mount(5) man page for information about the _netdev mount option. 1.10. Enabling periodic block discard You can enable a systemd timer to regularly discard unused blocks on all supported file systems. Procedure Enable and start the systemd timer: Verification Verify the status of the timer: 1.11. Monitoring VDO This procedure describes how to obtain usage and efficiency information from a VDO volume. Prerequisites Install the VDO software. See Section 1.7, "Installing VDO" . Procedure Use the vdostats utility to get information about a VDO volume: Additional resources vdostats(8) man page on your system | [
"yum install lvm2 kmod-kvdo vdo",
"vdo create --name= vdo-name --device= block-device --vdoLogicalSize= logical-size",
"vdo: ERROR - vdoformat: formatVDO failed on '/dev/ device ': VDO Status: Exceeds maximum number of slabs supported",
"vdo create --name= vdo1 --device= /dev/disk/by-id/scsi-3600508b1001c264ad2af21e903ad031f --vdoLogicalSize= 10T",
"mkfs.xfs -K /dev/mapper/ vdo-name",
"mkfs.ext4 -E nodiscard /dev/mapper/ vdo-name",
"udevadm settle",
"mount /dev/mapper/ vdo-name mount-point",
"/dev/mapper/ vdo-name mount-point xfs defaults 0 0",
"/dev/mapper/ vdo-name mount-point ext4 defaults 0 0",
"systemctl enable --now fstrim.timer Created symlink /etc/systemd/system/timers.target.wants/fstrim.timer /usr/lib/systemd/system/fstrim.timer.",
"systemctl status fstrim.timer fstrim.timer - Discard unused blocks once a week Loaded: loaded (/usr/lib/systemd/system/fstrim.timer; enabled; vendor preset: disabled) Active: active (waiting) since Wed 2023-05-17 13:24:41 CEST; 3min 15s ago Trigger: Mon 2023-05-22 01:20:46 CEST; 4 days left Docs: man:fstrim May 17 13:24:41 localhost.localdomain systemd[1]: Started Discard unused blocks once a week.",
"vdostats --human-readable Device 1K-blocks Used Available Use% Space saving% /dev/mapper/node1osd1 926.5G 21.0G 905.5G 2% 73% /dev/mapper/node1osd2 926.5G 28.2G 898.3G 3% 64%"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deduplicating_and_compressing_storage/deploying-vdo_deduplicating-and-compressing-storage |
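The monitoring guidance in the VDO chapter above suggests scripting an alert when a volume passes a threshold such as 80% full. The following is a minimal shell sketch built around the Use% column of the vdostats output shown in the command list; the volume name vdo1, the 80% threshold, and the use of logger for alerting are illustrative assumptions, not values taken from the documentation.

#!/usr/bin/env bash
# Minimal sketch: warn when physical space use on a VDO volume crosses a threshold.
# Assumptions: the volume is named vdo1 and 80% is an acceptable warning threshold.
VOLUME=vdo1
THRESHOLD=80

# vdostats prints a Use% column for each device; the second output line is the
# data row for the requested volume, and the fifth field is Use% (strip the '%').
USED=$(vdostats --human-readable "/dev/mapper/${VOLUME}" | awk 'NR==2 {gsub("%","",$5); print $5}')

if [ "${USED}" -ge "${THRESHOLD}" ]; then
    # Replace with your preferred alerting mechanism (mail, a monitoring agent, and so on).
    logger -p user.warning "VDO volume ${VOLUME} is ${USED}% full (threshold ${THRESHOLD}%)"
fi

Run the sketch periodically, for example from a cron job or a systemd timer, so that the alert fires before the thinly provisioned volume runs out of physical space.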
Chapter 18. Installation configuration parameters for AWS | Chapter 18. Installation configuration parameters for AWS Before you deploy an OpenShift Container Platform cluster on AWS, you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 18.1. Available installation configuration parameters for AWS The following tables specify the required, optional, and AWS-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 18.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 18.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 18.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 18.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. 
The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 18.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 18.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 18.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 18.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. 
Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . 
An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted. | [
"apiVersion:",
"baseDomain:",
"metadata:",
"metadata: name:",
"platform:",
"pullSecret:",
"{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }",
"networking:",
"networking: networkType:",
"networking: clusterNetwork:",
"networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"networking: clusterNetwork: cidr:",
"networking: clusterNetwork: hostPrefix:",
"networking: serviceNetwork:",
"networking: serviceNetwork: - 172.30.0.0/16",
"networking: machineNetwork:",
"networking: machineNetwork: - cidr: 10.0.0.0/16",
"networking: machineNetwork: cidr:",
"additionalTrustBundle:",
"capabilities:",
"capabilities: baselineCapabilitySet:",
"capabilities: additionalEnabledCapabilities:",
"cpuPartitioningMode:",
"compute:",
"compute: architecture:",
"compute: hyperthreading:",
"compute: name:",
"compute: platform:",
"compute: replicas:",
"featureSet:",
"controlPlane:",
"controlPlane: architecture:",
"controlPlane: hyperthreading:",
"controlPlane: name:",
"controlPlane: platform:",
"controlPlane: replicas:",
"credentialsMode:",
"fips:",
"imageContentSources:",
"imageContentSources: source:",
"imageContentSources: mirrors:",
"platform: aws: lbType:",
"publish:",
"sshKey:",
"compute: platform: aws: amiID:",
"compute: platform: aws: iamRole:",
"compute: platform: aws: rootVolume: iops:",
"compute: platform: aws: rootVolume: size:",
"compute: platform: aws: rootVolume: type:",
"compute: platform: aws: rootVolume: kmsKeyARN:",
"compute: platform: aws: type:",
"compute: platform: aws: zones:",
"compute: aws: region:",
"aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge",
"controlPlane: platform: aws: amiID:",
"controlPlane: platform: aws: iamRole:",
"controlPlane: platform: aws: rootVolume: iops:",
"controlPlane: platform: aws: rootVolume: size:",
"controlPlane: platform: aws: rootVolume: type:",
"controlPlane: platform: aws: rootVolume: kmsKeyARN:",
"controlPlane: platform: aws: type:",
"controlPlane: platform: aws: zones:",
"controlPlane: aws: region:",
"platform: aws: amiID:",
"platform: aws: hostedZone:",
"platform: aws: hostedZoneRole:",
"platform: aws: serviceEndpoints: - name: url:",
"platform: aws: userTags:",
"platform: aws: propagateUserTags:",
"platform: aws: subnets:",
"platform: aws: preserveBootstrapIgnition:"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_aws/installation-config-parameters-aws |
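To show how the required and optional parameters described in the AWS installation chapter above fit together, the following is a minimal sketch that writes an install-config.yaml from the shell. Every value in it (the ocp-aws directory, example.com, mycluster, us-east-1, the pull secret, and the SSH key) is a placeholder assumption for illustration and must be replaced with your own values.

# Create a working directory and seed a minimal install-config.yaml with
# placeholder values; the field names and defaults follow the parameter tables above.
mkdir -p ocp-aws
cat > ocp-aws/install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  serviceNetwork:
  - 172.30.0.0/16
compute:
- name: worker
  platform: {}
  replicas: 3
controlPlane:
  name: master
  platform: {}
  replicas: 3
platform:
  aws:
    region: us-east-1
pullSecret: '<pull_secret_from_console.redhat.com>'
sshKey: 'ssh-ed25519 AAAA...'
EOF

# The installation program then consumes the file from that directory, for example:
# openshift-install create cluster --dir ocp-aws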
Authentication | Authentication Red Hat Developer Hub 1.4 Configuring authentication to external services in Red Hat Developer Hub Red Hat Customer Content Services | [
"auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true",
"upstream: backstage: appConfig: app: baseUrl: 'https://{{- include \"janus-idp.hostname\" . }}' auth: environment: development providers: guest: dangerouslyAllowOutsideDevelopment: true",
"auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} prompt: auto signInPage: oidc",
"auth: environment: production providers: oidc: production: metadataUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET} prompt: auto signInPage: oidc dangerouslyAllowSignInWithoutUserInCatalog: true",
"auth: providers: oidc: production: callbackUrl: USD{AUTH_OIDC_CALLBACK_URL}",
"auth: providers: oidc: production: tokenEndpointAuthMethod: USD{AUTH_OIDC_TOKEN_ENDPOINT_METHOD}",
"auth: providers: oidc: production: tokenSignedResponseAlg: USD{AUTH_OIDC_SIGNED_RESPONSE_ALG}",
"auth: providers: oidc: production: scope: USD{AUTH_OIDC_SCOPE}",
"auth: providers: oidc: production: signIn: resolvers: - resolver: preferredUsernameMatchingUserEntityName - resolver: emailMatchingUserEntityProfileEmail - resolver: emailLocalPartMatchingUserEntityName",
"auth: backstageTokenExpiration: { minutes: <user_defined_value> }",
"dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: keycloakOrg: default: baseUrl: USD{AUTH_OIDC_METADATA_URL} clientId: USD{AUTH_OIDC_CLIENT_ID} clientSecret: USD{AUTH_OIDC_CLIENT_SECRET}",
"catalog: providers: keycloakOrg: default: realm: master",
"catalog: providers: keycloakOrg: default: loginRealm: master",
"catalog: providers: keycloakOrg: default: userQuerySize: 100",
"catalog: providers: keycloakOrg: default: groupQuerySize: 100",
"catalog: providers: keycloakOrg: default: schedule: frequency: { hours: 1 }",
"catalog: providers: keycloakOrg: default: schedule: timeout: { minutes: 50 }",
"catalog: providers: keycloakOrg: default: schedule: initialDelay: { seconds: 15}",
"{\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"} {\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"}",
"import { GroupTransformer, keycloakTransformerExtensionPoint, UserTransformer, } from '@backstage-community/plugin-catalog-backend-module-keycloak'; const customGroupTransformer: GroupTransformer = async ( entity, // entity output from default parser realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; const customUserTransformer: UserTransformer = async ( entity, // entity output from default parser user, // Keycloak user representation realm, // Keycloak realm name groups, // Keycloak group representation ) => { /* apply transformations */ return entity; }; export const keycloakBackendModuleTransformer = createBackendModule({ pluginId: 'catalog', moduleId: 'keycloak-transformer', register(reg) { reg.registerInit({ deps: { keycloak: keycloakTransformerExtensionPoint, }, async init({ keycloak }) { keycloak.setUserTransformer(customUserTransformer); keycloak.setGroupTransformer(customGroupTransformer); /* highlight-add-end */ }, }); }, });",
"backend.add(import(backstage-plugin-catalog-backend-module-keycloak-transformer))",
"{\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Read 3 Keycloak users and 2 Keycloak groups in 1.5 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"} {\"class\":\"KeycloakOrgEntityProvider\",\"level\":\"info\",\"message\":\"Committed 3 Keycloak users and 2 Keycloak groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"KeycloakOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"bf0467ff-8ac4-4702-911c-380270e44dea\",\"timestamp\":\"2024-09-25 13:58:04\"}",
"auth: environment: production providers: github: production: clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{AUTH_GITHUB_CLIENT_SECRET} integrations: github: - host: USD{GITHUB_HOST_DOMAIN} apps: - appId: USD{AUTH_GITHUB_APP_ID} clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{GITHUB_CLIENT_SECRET} webhookUrl: USD{GITHUB_WEBHOOK_URL} webhookSecret: USD{GITHUB_WEBHOOK_SECRET} privateKey: | USD{GITHUB_PRIVATE_KEY_FILE} signInPage: github",
"auth: environment: production providers: github: production: clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{AUTH_GITHUB_CLIENT_SECRET} integrations: github: - host: USD{GITHUB_HOST_DOMAIN} apps: - appId: USD{AUTH_GITHUB_APP_ID} clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{GITHUB_CLIENT_SECRET} webhookUrl: USD{GITHUB_WEBHOOK_URL} webhookSecret: USD{GITHUB_WEBHOOK_SECRET} privateKey: | USD{GITHUB_PRIVATE_KEY_FILE} signInPage: github dangerouslyAllowSignInWithoutUserInCatalog: true",
"auth: providers: github: production: callbackUrl: <your_intermediate_service_url/handler>",
"auth: environment: production providers: github: production: clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{AUTH_GITHUB_CLIENT_SECRET} <your_other_authentication_providers_configuration> integrations: github: - host: USD{GITHUB_HOST_DOMAIN} apps: - appId: USD{AUTH_GITHUB_APP_ID} clientId: USD{AUTH_GITHUB_CLIENT_ID} clientSecret: USD{GITHUB_CLIENT_SECRET} webhookUrl: USD{GITHUB_WEBHOOK_URL} webhookSecret: USD{GITHUB_WEBHOOK_SECRET} privateKey: | USD{GITHUB_PRIVATE_KEY_FILE} signInPage: <your_main_authentication_provider>",
"dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: github: providerId: organization: \"USD{GITHUB_ORGANIZATION}\" schedule: frequency: minutes: 30 initialDelay: seconds: 15 timeout: minutes: 15 githubOrg: githubUrl: \"USD{GITHUB_HOST_DOMAIN}\" orgs: [ \"USD{GITHUB_ORGANIZATION}\" ] schedule: frequency: minutes: 30 initialDelay: seconds: 15 timeout: minutes: 15",
"{\"class\":\"GithubMultiOrgEntityProvider\",\"level\":\"info\",\"message\":\"Reading GitHub users and teams for org: rhdh-dast\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"target\":\"https://github.com\",\"taskId\":\"GithubMultiOrgEntityProvider:production:refresh\",\"taskInstanceId\":\"801b3c6c-167f-473b-b43e-e0b4b780c384\",\"timestamp\":\"2024-09-09 23:55:58\"} {\"class\":\"GithubMultiOrgEntityProvider\",\"level\":\"info\",\"message\":\"Read 7 GitHub users and 2 GitHub groups in 0.4 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"target\":\"https://github.com\",\"taskId\":\"GithubMultiOrgEntityProvider:production:refresh\",\"taskInstanceId\":\"801b3c6c-167f-473b-b43e-e0b4b780c384\",\"timestamp\":\"2024-09-09 23:55:59\"}",
"auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft",
"auth: environment: production providers: microsoft: production: clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET} tenantId: USD{AUTH_AZURE_TENANT_ID} signInPage: microsoft dangerouslyAllowSignInWithoutUserInCatalog: true",
"auth: environment: production providers: microsoft: production: domainHint: USD{AUTH_AZURE_TENANT_ID}",
"auth: environment: production providers: microsoft: production: additionalScopes: - Mail.Send",
"dangerouslyAllowSignInWithoutUserInCatalog: false catalog: providers: microsoftGraphOrg: providerId: target: https://graph.microsoft.com/v1.0 tenantId: USD{AUTH_AZURE_TENANT_ID} clientId: USD{AUTH_AZURE_CLIENT_ID} clientSecret: USD{AUTH_AZURE_CLIENT_SECRET}",
"catalog: providers: microsoftGraphOrg: providerId: authority: https://login.microsoftonline.com/",
"catalog: providers: microsoftGraphOrg: providerId: queryMode: advanced",
"catalog: providers: microsoftGraphOrg: providerId: user: expand: manager",
"catalog: providers: microsoftGraphOrg: providerId: user: filter: accountEnabled eq true and userType eq 'member'",
"catalog: providers: microsoftGraphOrg: providerId: user: loadPhotos: true",
"catalog: providers: microsoftGraphOrg: providerId: user: select: ['id', 'displayName', 'description']",
"catalog: providers: microsoftGraphOrg: providerId: userGroupMember: filter: \"displayName eq 'Backstage Users'\"",
"catalog: providers: microsoftGraphOrg: providerId: userGroupMember: search: '\"description:One\" AND (\"displayName:Video\" OR \"displayName:Drive\")'",
"catalog: providers: microsoftGraphOrg: providerId: group: expand: member",
"catalog: providers: microsoftGraphOrg: providerId: group: filter: securityEnabled eq false and mailEnabled eq true and groupTypes/any(c:c+eq+'Unified')",
"catalog: providers: microsoftGraphOrg: providerId: group: search: '\"description:One\" AND (\"displayName:Video\" OR \"displayName:Drive\")'",
"catalog: providers: microsoftGraphOrg: providerId: group: select: ['id', 'displayName', 'description']",
"catalog: providers: microsoftGraphOrg: providerId: schedule: frequency: { hours: 1 }",
"catalog: providers: microsoftGraphOrg: providerId: schedule: timeout: { minutes: 50 }",
"catalog: providers: microsoftGraphOrg: providerId: schedule: initialDelay: { seconds: 15}",
"backend:start: {\"class\":\"MicrosoftGraphOrgEntityProviderUSD1\",\"level\":\"info\",\"message\":\"Read 1 msgraph users and 1 msgraph groups in 2.2 seconds. Committing...\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"MicrosoftGraphOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"88a67ce1-c466-41a4-9760-825e16b946be\",\"timestamp\":\"2024-06-26 12:23:42\"} backend:start: {\"class\":\"MicrosoftGraphOrgEntityProviderUSD1\",\"level\":\"info\",\"message\":\"Committed 1 msgraph users and 1 msgraph groups in 0.0 seconds.\",\"plugin\":\"catalog\",\"service\":\"backstage\",\"taskId\":\"MicrosoftGraphOrgEntityProvider:default:refresh\",\"taskInstanceId\":\"88a67ce1-c466-41a4-9760-825e16b946be\",\"timestamp\":\"2024-06-26 12:23:42\"}"
]
| https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html-single/authentication/index |
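The OIDC configuration in the authentication guide above resolves values such as ${AUTH_OIDC_METADATA_URL}, ${AUTH_OIDC_CLIENT_ID}, and ${AUTH_OIDC_CLIENT_SECRET} from the environment. One way to provide them on OpenShift is through a Secret that the Developer Hub deployment consumes; the following shell sketch is illustrative only, and the namespace, secret name, and literal values are assumptions rather than names required by the documentation.

# Store the OIDC values referenced by the app-config in a Secret.
# The namespace (rhdh), the secret name (rhdh-oidc-secrets), and all literal
# values below are placeholders; substitute the values from your identity provider.
oc create secret generic rhdh-oidc-secrets \
  --namespace rhdh \
  --from-literal=AUTH_OIDC_METADATA_URL='https://idp.example.com/realms/myrealm/.well-known/openid-configuration' \
  --from-literal=AUTH_OIDC_CLIENT_ID='rhdh-client' \
  --from-literal=AUTH_OIDC_CLIENT_SECRET='<client_secret>'

Expose the Secret to the Developer Hub container as environment variables through whichever mechanism your Helm chart values or Operator custom resource uses for injecting environment variables, so that the ${AUTH_OIDC_*} placeholders in the app-config resolve at runtime.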
Chapter 13. Interrupt Request (IRQ) Tapset | Chapter 13. Interrupt Request (IRQ) Tapset This family of probe points is used to probe interrupt request (IRQ) activities. It contains the following probe points: | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/irq-dot-stp |
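As a brief illustration of how probe points from a tapset like this are typically used, the following shell one-liner aggregates hard interrupt activity for ten seconds. It assumes the irq_handler.entry probe point and its irq context variable from this tapset; treat it as a sketch rather than an excerpt from the reference.

# Count hard interrupts per IRQ line for 10 seconds, then print the totals and exit.
stap -e 'global hits
  probe irq_handler.entry { hits[irq] <<< 1 }
  probe timer.s(10) {
    foreach (i in hits)
      printf("irq %d: %d\n", i, @count(hits[i]))
    exit()
  }'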
Chapter 4. Bug fixes | Chapter 4. Bug fixes This section describes bugs with significant user impact, which were fixed in this release of Red Hat Ceph Storage. In addition, the section includes descriptions of fixed known issues found in previous versions. 4.1. The Cephadm utility The ceph-volume commands do not block OSDs and devices and run as expected Previously, the ceph-volume commands like ceph-volume lvm list and ceph-volume inventory were not completed, thereby preventing the execution of other ceph-volume commands for creating OSDs, listing devices, and listing OSDs. With this update, the default output of these commands is not added to the Cephadm log, resulting in the completion of all ceph-volume commands run in a container launched by the cephadm binary. ( BZ#1948717 ) Searching Ceph OSD id claim matches a host's fully-qualified domain name to a host name Previously, when replacing a failed Ceph OSD, the name in the CRUSH map appeared only as a host name, and searching for the Ceph OSD id claim was using the fully-qualified domain name (FQDN) instead. As a result, the Ceph OSD id claim was not found. With this release, the Ceph OSD id claim search functionality correctly matches an FQDN to a host name, and replacing the Ceph OSD works as expected. ( BZ#1954503 ) The ceph orch ls command correctly displays the number of daemons running for a given service Previously, the ceph orch ls --service-type SERVICE_TYPE command incorrectly reported 0 daemons running for a service that had running daemons, and users were unable to see how many daemons were running for a specific service. With this release, the ceph orch ls --service-type SERVICE_TYPE command now correctly displays how many daemons are running for that given service. ( BZ#1964951 ) Users are no longer able to remove the Ceph Manager service using cephadm Previously, if a user ran a ceph orch rm mgr command, it would cause cephadm to remove all the Ceph Manager daemons in the storage cluster, making the storage cluster inaccessible. With this release, attempting to remove the Ceph Manager, a Ceph Monitor, or a Ceph OSD service using the ceph orch rm SERVICE_NAME command displays a warning message stating that it is not safe to remove these services, and results in no actions taken. ( BZ#1976820 ) The node-exporter and alert-manager container versions have been updated Previously, the Red Hat Ceph Storage 5.0 node-exporter and alert-manager container versions defaulted to version 4.5, when version 4.6 was available, and in use in Red Hat Ceph Storage 4.2. With this release, using the cephadm command to upgrade from Red Hat Ceph Storage 5.0 to Red Hat Ceph Storage 5.0z1 results in the node-exporter and alert-manager container versions being updated to version 4.6. ( BZ#1996090 ) 4.2. Ceph Dashboard Secure cookie-based sessions are enabled for accessing the Red Hat Ceph Storage Dashboard Previously, storing information in LocalStorage made the Red Hat Ceph Storage dashboard accessible to all sessions running in a browser, making the dashboard vulnerable to XSS attacks. With this release, LocalStorage is replaced with secure cookie-based sessions, and thereby the session secret is available only to the current browser instance. ( BZ#1889435 ) 4.3. Ceph File System The MDS daemon no longer crashes when receiving unsupported metrics Previously, the MDS daemon could not handle the new metrics from the kernel client, causing the MDS daemons to crash on receiving any unsupported metrics.
With this release, the MDS discards any unsupported metrics and works as expected. ( BZ#2030451 ) Deletion of data is allowed when the storage cluster is full Previously, when the storage cluster was full, the Ceph Manager hung on checking pool permissions while reading the configuration file. The Ceph Metadata Server (MDS) did not allow write operations to occur when the Ceph OSD was full, resulting in an ENOSPACE error. When the storage cluster hit the full ratio, users could not delete data to free space using the Ceph Manager volume plugin. With this release, the new FULL capability is introduced. With the FULL capability, the Ceph Manager bypasses the Ceph OSD full check. The client_check_pool_permission option is disabled by default whereas, in previous releases, it was enabled. With the Ceph Manager having FULL capabilities, the MDS no longer blocks Ceph Manager calls. This results in allowing the Ceph Manager to free up space by deleting subvolumes and snapshots when a storage cluster is full. ( BZ#1910272 ) Ceph monitors no longer crash when processing authentication requests from Ceph File System clients Previously, if a client did not have permission to view a legacy file system, the Ceph monitors would crash when processing authentication requests from clients. This caused the Ceph monitors to become unavailable. With this release, the code update fixes the handling of legacy file system authentication requests and authentication requests work as expected. ( BZ#1976915 ) Fixes KeyError appearing every few milliseconds in the MGR log Previously, KeyError was logged to the Ceph Manager log every few milliseconds. This was due to an attempt to remove an element from the client_metadata[in_progress] dictionary with a non-existent key, resulting in a KeyError . As a result, locating other stack traces in the logs was difficult. This release fixes the code logic in the Ceph File System performance metrics, and KeyError messages no longer appear in the Ceph Manager log. ( BZ#1979520 ) Deleting a subvolume clone is no longer allowed for certain clone states Previously, if you tried to remove a subvolume clone with the force option when the clone was not in a COMPLETED or CANCELLED state, the clone was not removed from the index tracking the ongoing clones. This caused the corresponding cloner thread to retry the cloning indefinitely, eventually resulting in an ENOENT failure. With the default number of cloner threads set to four, attempts to delete four clones resulted in all four threads entering a blocked state, allowing none of the pending clones to complete. With this release, unless a clone is either in a COMPLETED or CANCELLED state, it is not removed. The cloner threads no longer block because the clones are deleted, along with their entry from the index tracking the ongoing clones. As a result, pending clones continue to complete as expected. ( BZ#1980920 ) The ceph fs snapshot mirror daemon status command no longer requires a file system name Previously, users were required to give at least one file system name to the ceph fs snapshot mirror daemon status command. With this release, the user no longer needs to specify a file system name as a command argument, and daemon status displays each file system separately. ( BZ#1988338 ) Stopping the cephfs-mirror daemon can result in an unclean shutdown Previously, the cephfs-mirror process would terminate uncleanly due to a race condition during the cephfs-mirror shutdown process.
With this release, the race condition was resolved, and as a result, the cephfs-mirror daemon shuts down gracefully. ( BZ#2002140 ) The Ceph Metadata Server no longer falsely reports metadata damage, and failure warnings Previously, the Ceph Monitor assigned a rank to standby-replay daemons during creation. This behavior could lead to the Ceph Metadata Servers (MDS) reporting false metadata damage, and failure warnings. With this release, Ceph Monitors no longer assign a rank to standby-replay daemons during creation, eliminating false metadata damage, and failure warnings. ( BZ#2002398 ) 4.4. Ceph Manager plugins The pg_autoscaler module no longer reports failed op error Previously, the pg-autoscaler module reported KeyError for op when trying to get the pool status if any pool had the CRUSH rule step set_chooseleaf_vary_r 1 . As a result, the Ceph cluster health displayed HEALTH_ERR with Module 'pg_autoscaler' has failed: op error. With this release, only steps with op are iterated for a CRUSH rule while getting the pool status, and the pg_autoscaler module no longer reports the failed op error. ( BZ#1874866 ) 4.5. Ceph Object Gateway S3 lifecycle expiration header feature identifies the objects as expected Previously, some objects without a lifecycle expiration were incorrectly identified in GET or HEAD requests as having a lifecycle expiration due to an error in the logic of the feature when comparing object names to the stored lifecycle policy. With this update, the S3 lifecycle expiration header feature works as expected and identifies the objects correctly. ( BZ#1786226 ) The radosgw-admin user list command no longer takes a long time to execute in Red Hat Ceph Storage cluster 4 Previously, in Red Hat Ceph Storage cluster 4, the performance of many radosgw-admin commands was affected because the value of the rgw_gc_max_objs config variable, which controls the number of GC shards, was increased significantly. This included radosgw-admin commands that were not related to GC. With this release, after an upgrade from Red Hat Ceph Storage cluster 3 to Red Hat Ceph Storage cluster 4, the radosgw-admin user list command does not take a longer time to execute. Only the performance of radosgw-admin commands that require GC to operate is affected by the value of the rgw_gc_max_objs configuration. ( BZ#1927940 ) Policies with invalid Amazon resource name elements no longer lead to privilege escalations Previously, incorrect handling of invalid Amazon resource name (ARN) elements in IAM policy documents, such as bucket policies, could cause unintentional permissions to be granted to users who are not part of the policy. With this release, this fix prevents storing policies with invalid ARN elements, or if already stored, correctly evaluates the policies. ( BZ#2007451 ) 4.6. RADOS Setting bluestore_cache_trim_max_skip_pinned to 10000 enables trimming of the object's metadata The least recently used (LRU) cache is used for the object's metadata. Trimming of the cache is done from the least recently accessed objects. Objects that are pinned are exempted from eviction, which means they are still being used by BlueStore. Previously, the configuration variable bluestore_cache_trim_max_skip_pinned controlled how many pinned objects were visited, and thereby the scrubbing process caused objects to be pinned for a long time. When the number of objects pinned on the bottom of the LRU metadata cache became larger than bluestore_cache_trim_max_skip_pinned , then trimming of the cache was not completed.
With this release, you can set bluestore_cache_trim_max_skip_pinned to 10000, which is larger than the possible count of metadata cache entries. This enables trimming, and the metadata cache size adheres to the configuration settings. ( BZ#1931504 ) Upgrading a storage cluster from Red Hat Ceph Storage 4 to 5 completes with a HEALTH_WARN state When upgrading a Red Hat Ceph Storage cluster from a previously supported version to Red Hat Ceph Storage 5, the upgrade completes with the storage cluster in a HEALTH_WARN state stating that monitors are allowing insecure global_id reclaim. This is due to a patched CVE, the details of which are available in CVE-2021-20288 . Recommendations to mute health warnings: Identify clients that are not updated by checking the ceph health detail output for the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert. Upgrade all clients to the Red Hat Ceph Storage 5.0 release. If all the clients are not upgraded immediately, mute health alerts temporarily: Syntax After validating that all clients have been updated and the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert is no longer present for a client, set auth_allow_insecure_global_id_reclaim to false : Syntax Ensure that no clients are listed with the AUTH_INSECURE_GLOBAL_ID_RECLAIM alert. ( BZ#1953494 ) The trigger condition for RocksDB flush and compactions works as expected BlueStore organizes data into chunks called blobs, the size of which is 64K by default. Large writes are split into a sequence of 64K blob writes. Previously, when the deferred size was equal to or more than the blob size, all the data was deferred and placed under the "L" column family. A typical example is the case for HDD configuration, where the value is 64K for both the bluestore_prefer_deferred_size_hdd and bluestore_max_blob_size_hdd parameters. This consumed the "L" column faster, resulting in the RocksDB flush count and the compactions becoming more frequent. The trigger condition for this scenario was data size in blob <= minimum deferred size . With this release, the deferred trigger condition checks the size of extents on disks and not blobs. Extents smaller than deferred_size go to the deferred mechanism, and larger extents are written to the disk immediately. The trigger condition is changed to data size in extent < minimum deferred size . The small writes are placed under the "L" column, and the growth of this column is slow with no extra compactions. The bluestore_prefer_deferred_size parameter controls deferred writes without any interference from the blob size and works as per its description of "writes smaller than this size". ( BZ#1991677 ) The Ceph Manager no longer crashes during large increases to pg_num and pgp_num Previously, the code that adjusts placement groups did not handle large increases to the pg_num and pgp_num parameters correctly, which led to an integer underflow that could crash the Ceph Manager. With this release, the code that adjusts placement groups was fixed. As a result, large increases to placement groups do not cause the Ceph Manager to crash. ( BZ#2001152 ) 4.7. RADOS Block Devices (RBD) The librbd code honors the CEPH_OSD_FLAG_FULL_TRY flag Previously, you could set the CEPH_OSD_FLAG_FULL_TRY flag with the rados_set_pool_full_try() API function. In Red Hat Ceph Storage 5, librbd stopped honoring this flag. This resulted in write operations stalling while waiting for space when a pool became full or reached a quota limit, even if the CEPH_OSD_FLAG_FULL_TRY flag was set.
With this release, librbd now honors the CEPH_OSD_FLAG_FULL_TRY flag. When the flag is set and a pool becomes full or reaches its quota, write operations either succeed or fail with an ENOSPC or EDQUOT message. The ability to remove RADOS Block Device (RBD) images from a full or at-quota pool is restored. ( BZ#1969301 ) 4.8. RBD Mirroring Improvements to the rbd mirror pool peer bootstrap import command Previously, running the rbd mirror pool peer bootstrap import command caused librados to log errors about a missing key ring file in cases where a key ring was not required. This could confuse site administrators, because it appeared as though the command failed due to a missing key ring. With this release, librados no longer logs errors in cases where a remote storage cluster's key ring is not required, such as when the bootstrap token contains the key. ( BZ#1981186 ) 4.9. iSCSI Gateway The gwcli tool now shows the correct erasure coded pool profile Previously, the gwcli tool would show incorrect k+m values for the erasure coded pool. With this release, the gwcli tool pulls the erasure coded pool settings from the associated erasure coded profile, and the Red Hat Ceph Storage cluster shows the correct erasure coded pool profile. ( BZ#1840721 ) The upgrade of the storage cluster with iSCSI configured now works as expected Previously, the upgrade of a storage cluster with iSCSI configured would fail because the latest ceph-iscsi packages did not include the ceph-iscsi-tools packages, which were deprecated. With this release, the ceph-iscsi-tools package is marked as obsolete in the RPM specification file, and the upgrade succeeds as expected. ( BZ#2026582 ) The tcmu-runner no longer fails to remove "blocklist" entries Previously, the tcmu-runner would execute incorrect commands to remove the "blocklist" entries, resulting in degraded performance for iSCSI LUNs. With this release, the tcmu-runner was updated to execute the correct command when removing blocklist entries. The blocklist entries are cleaned up by tcmu-runner, and the iSCSI LUNs work as expected. ( BZ#2041127 ) The tcmu-runner process now closes normally Previously, the tcmu-runner process incorrectly handled a failed path, causing the release of uninitialized g_object memory. This could cause the tcmu-runner process to terminate unexpectedly. The source code has been modified to skip the release of uninitialized g_object memory, resulting in the tcmu-runner process exiting normally. ( BZ#2007683 ) The RADOS Block Device handler correctly parses configuration strings Previously, the RADOS Block Device (RBD) handler used the strtok() function while parsing configuration strings, which is not thread-safe. This caused incorrect parsing of the configuration string of image names when creating or reopening an image. This resulted in the image failing to open. With this release, the RBD handler uses the thread-safe strtok_r() function, allowing for the correct parsing of configuration strings. ( BZ#2007687 ) 4.10. The Ceph Ansible utility The cephadm-adopt playbook now enables the pool application on the pool when creating a new nfs-ganesha pool Previously, when the cephadm-adopt playbook created a new nfs-ganesha pool, it did not enable the pool application on the pool. This resulted in a warning that one pool did not have the pool application enabled. With this update, the cephadm-adopt playbook sets the pool application on the created pool, and the warning no longer occurs after the adoption.
( BZ#1956840 ) The cephadm-adopt playbook does not create default realms for multisite configuration Previously, the cephadm-adopt playbook created the default realms during the adoption process, even when there was no multisite configuration present. With this release, the cephadm-adopt playbook does not enforce the creation of default realms when there is no multisite configuration deployed. ( BZ#1988404 ) The Ceph Ansible cephadm-adopt.yml playbook can add nodes with a host's fully-qualified domain name Previously, the task that adds nodes in cephadm by using the Ceph Ansible cephadm-adopt.yml playbook used the short host name and did not match the current fully-qualified domain name (FQDN) of a node. As a result, the adoption playbook failed because no match to the FQDN host name was found. With this release, the playbook uses the ansible_nodename fact instead of the ansible_hostname fact, allowing the adoption playbook to add nodes configured with an FQDN. ( BZ#1997083 ) The Ceph Ansible cephadm-adopt playbook now pulls container images successfully Previously, the Ceph Ansible cephadm-adopt playbook was not logging into the container registry on storage clusters that were being adopted. With this release, the Ceph Ansible cephadm-adopt playbook logs into the container registry, and pulls container images as expected. ( BZ#2000103 ) | [
"ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM 1w # 1 week ceph health mute AUTH_INSECURE_GLOBAL_ID_RECLAIM_ALLOWED 1w # 1 week",
"ceph config set mon auth_allow_insecure_global_id_reclaim false"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/release_notes/bug-fixes |
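The following is a hedged, illustrative sketch related to the bluestore_cache_trim_max_skip_pinned fix described in the RADOS section of these release notes; it is not part of the documented procedure. Assuming a Red Hat Ceph Storage 5 cluster managed through the centralized ceph config interface, the option can be raised for all OSDs and the stored value confirmed afterwards:
ceph config set osd bluestore_cache_trim_max_skip_pinned 10000
ceph config dump | grep bluestore_cache_trim_max_skip_pinned
The value 10000 is the one cited in the release note above; whether it is appropriate for a particular cluster is an assumption, so review your metadata cache sizing before changing it.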
Chapter 4. Updating Red Hat build of OpenJDK 17 on Red Hat Enterprise Linux | Chapter 4. Updating Red Hat build of OpenJDK 17 on Red Hat Enterprise Linux The following sections provide instructions for updating Red Hat build of OpenJDK 17 on Red Hat Enterprise Linux. 4.1. Updating Red Hat build of OpenJDK 17 on RHEL by using yum You can update the installed Red Hat build of OpenJDK packages by using the yum system package manager. Prerequisites You must have root privileges on the system. Procedure Check the current Red Hat build of OpenJDK version: A list of installed Red Hat build of OpenJDK packages displays. Update a specific package. For example: Verify that the update worked by checking the current Red Hat build of OpenJDK versions: Note You can install multiple major versions of Red Hat build of OpenJDK on your local system. If you need to switch from one major version to another major version, issue the following command in your command-line interface (CLI) and then follow the onscreen prompts: 4.2. Updating Red Hat build of OpenJDK 17 on RHEL by using an archive You can update Red Hat build of OpenJDK by using an archive file. This is useful if the Red Hat build of OpenJDK administrator does not have root privileges. Prerequisites Know the generic path pointing to your JDK or JRE installation. For example, ~/jdks/java-17 Procedure Remove the existing symbolic link of the generic path to your JDK or JRE. For example: Install the latest version of the JDK or JRE in your installation location. Additional resources For instructions on installing a JRE, see Installing a JRE on RHEL by using an archive . For instructions on installing a JDK, see Installing Red Hat build of OpenJDK on RHEL by using an archive . Revised on 2024-10-29 18:38:39 UTC | [
"sudo yum list installed \"java*\"",
"Installed Packages java-1.8.0-openjdk.x86_64 1:1.8.0.322.b06-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-11-openjdk.x86_64 1:11.0.14.0.9-2.el8_5 @rhel-8-for-x86_64-appstream-rpms java-17-openjdk.x86_64 1:17.0.2.0.8-4.el8_5 @rhel-8-for-x86_64-appstream-rpms",
"sudo yum update java-17-openjdk",
"java -version openjdk version \"17.0.2\" 2022-01-18 LTS OpenJDK Runtime Environment 21.9 (build 17.0.2+8-LTS) OpenJDK 64-Bit Server VM 21.9 (build 17.0.2+8-LTS, mixed mode, sharing)",
"sudo update-alternatives --config 'java'",
"unlink ~/jdks/java-17"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/installing_and_using_red_hat_build_of_openjdk_17_on_rhel/updating-openjdk-on-rhel_openjdk |
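Relating to the archive-based update procedure above: after removing the old symbolic link and unpacking the new build, the generic path has to be pointed at the new installation. A hedged sketch, in which the extracted directory name ~/jdks/jdk-17.0.11 is an illustrative assumption:
ln -s ~/jdks/jdk-17.0.11 ~/jdks/java-17
~/jdks/java-17/bin/java -version
Use the directory name that your archive actually extracts to; keeping the generic path (~/jdks/java-17) stable means that scripts and environment variables referencing it do not need to change between updates.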
13.3. Additional Resources | 13.3. Additional Resources For more information on disk quotas, refer to the following resources. 13.3.1. Installed Documentation The quotacheck , edquota , repquota , quota , quotaon , and quotaoff man pages | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/implementing_disk_quotas-additional_resources |
Release notes for Eclipse Temurin 17.0.11 | Release notes for Eclipse Temurin 17.0.11 Red Hat build of OpenJDK 17 Red Hat Customer Content Services | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.11/index |
Appendix A. Revision History | Appendix A. Revision History Revision History Revision 4.6-1 Tue Jul 17 2018 Marek Suchanek Asynchronous update Revision 4.6-0 Thu Aug 10 2017 Marek Suchanek Asynchronous update Revision 4.2-0 Tue May 10 2016 Milan Navratil Preparing document for 6.8 GA publication. Revision 4.1-0 Tue May 06 2014 Laura Bailey Added section on Performance Co-Pilot. Revision 4.0-43 Wed Nov 13 2013 Laura Bailey Building for Red Hat Enterprise Linux 6.5 GA. Revision 4.0-6 Thu Oct 4 2012 Laura Bailey Added new section on numastat utility ( BZ#853274 ). Revision 1.0-0 Friday December 02 2011 Laura Bailey Release for GA of Red Hat Enterprise Linux 6.2. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/appe-performance_tuning_guide-revision_history |
Chapter 4. Catalogs | Chapter 4. Catalogs 4.1. File-based catalogs Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Operator Lifecycle Manager (OLM) v1 in OpenShift Container Platform supports file-based catalogs for discovering and sourcing cluster extensions, including Operators, on a cluster. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) 4.1.1. Highlights File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. The goal of this format is to enable Operator catalog editing, composability, and extensibility. Editing With file-based catalogs, users interacting with the contents of a catalog are able to make direct changes to the format and verify that their changes are valid. Because this format is plain text JSON or YAML, catalog maintainers can easily manipulate catalog metadata by hand or with widely known and supported JSON or YAML tooling, such as the jq CLI. This editability enables the following features and user-defined extensions: Promoting an existing bundle to a new channel Changing the default channel of a package Custom algorithms for adding, updating, and removing upgrade edges Composability File-based catalogs are stored in an arbitrary directory hierarchy, which enables catalog composition. For example, consider two separate file-based catalog directories: catalogA and catalogB . A catalog maintainer can create a new combined catalog by making a new directory catalogC and copying catalogA and catalogB into it. This composability enables decentralized catalogs. The format permits Operator authors to maintain Operator-specific catalogs, and it permits maintainers to trivially build a catalog composed of individual Operator catalogs. File-based catalogs can be composed by combining multiple other catalogs, by extracting subsets of one catalog, or a combination of both of these. Note Duplicate packages and duplicate bundles within a package are not permitted. The opm validate command returns an error if any duplicates are found. Because Operator authors are most familiar with their Operator, its dependencies, and its upgrade compatibility, they are able to maintain their own Operator-specific catalog and have direct control over its contents. With file-based catalogs, Operator authors own the task of building and maintaining their packages in a catalog. Composite catalog maintainers, however, only own the task of curating the packages in their catalog and publishing the catalog to users. Extensibility The file-based catalog specification is a low-level representation of a catalog. 
While it can be maintained directly in its low-level form, catalog maintainers can build interesting extensions on top that can be used by their own custom tooling to make any number of mutations. For example, a tool could translate a high-level API, such as (mode=semver) , down to the low-level, file-based catalog format for upgrade edges. Or a catalog maintainer might need to customize all of the bundle metadata by adding a new property to bundles that meet a certain criteria. While this extensibility allows for additional official tooling to be developed on top of the low-level APIs for future OpenShift Container Platform releases, the major benefit is that catalog maintainers have this capability as well. 4.1.2. Directory structure File-based catalogs can be stored and loaded from directory-based file systems. The opm CLI loads the catalog by walking the root directory and recursing into subdirectories. The CLI attempts to load every file it finds and fails if any errors occur. Non-catalog files can be ignored using .indexignore files, which have the same rules for patterns and precedence as .gitignore files. Example .indexignore file # Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml Catalog maintainers have the flexibility to choose their desired layout, but it is recommended to store each package's file-based catalog blobs in separate subdirectories. Each individual file can be either JSON or YAML; it is not necessary for every file in a catalog to use the same format. Basic recommended structure catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml This recommended structure has the property that each subdirectory in the directory hierarchy is a self-contained catalog, which makes catalog composition, discovery, and navigation trivial file system operations. The catalog can also be included in a parent catalog by copying it into the parent catalog's root directory. 4.1.3. Schemas File-based catalogs use a format, based on the CUE language specification , that can be extended with arbitrary schemas. The following _Meta CUE schema defines the format that all file-based catalog blobs must adhere to: _Meta schema _Meta: { // schema is required and must be a non-empty string schema: string & !="" // package is optional, but if it's defined, it must be a non-empty string package?: string & !="" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } Note No CUE schemas listed in this specification should be considered exhaustive. The opm validate command has additional validations that are difficult or impossible to express concisely in CUE. An Operator Lifecycle Manager (OLM) catalog currently uses three schemas ( olm.package , olm.channel , and olm.bundle ), which correspond to OLM's existing package and bundle concepts. Each Operator package in a catalog requires exactly one olm.package blob, at least one olm.channel blob, and one or more olm.bundle blobs. Note All olm.* schemas are reserved for OLM-defined schemas. Custom schemas must use a unique prefix, such as a domain that you own. 4.1.3.1. olm.package schema The olm.package schema defines package-level metadata for an Operator. 
This includes its name, description, default channel, and icon. Example 4.1. olm.package schema #Package: { schema: "olm.package" // Package name name: string & !="" // A description of the package description?: string // The package's default channel defaultChannel: string & !="" // An optional icon icon?: { base64data: string mediatype: string } } 4.1.3.2. olm.channel schema The olm.channel schema defines a channel within a package, the bundle entries that are members of the channel, and the upgrade edges for those bundles. If a bundle entry represents an edge in multiple olm.channel blobs, it can only appear once per channel. It is valid for an entry's replaces value to reference another bundle name that cannot be found in this catalog or another catalog. However, all other channel invariants must hold true, such as a channel not having multiple heads. Example 4.2. olm.channel schema #Channel: { schema: "olm.channel" package: string & !="" name: string & !="" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !="" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !="" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=""] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !="" } Warning When using the skipRange field, the skipped Operator versions are pruned from the update graph and are no longer installable by users with the spec.startingCSV property of Subscription objects. You can update an Operator incrementally while keeping previously installed versions available to users for future installation by using both the skipRange and replaces fields. Ensure that the replaces field points to the immediate previous version of the Operator in question. 4.1.3.3. olm.bundle schema Example 4.3. olm.bundle schema #Bundle: { schema: "olm.bundle" package: string & !="" name: string & !="" image: string & !="" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !="" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !="" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !="" } 4.1.3.4. olm.deprecations schema The optional olm.deprecations schema defines deprecation information for packages, bundles, and channels in a catalog. Operator authors can use this schema to provide relevant messages about their Operators, such as support status and recommended upgrade paths, to users running those Operators from a catalog. When this schema is defined, the OpenShift Container Platform web console displays warning badges for the affected elements of the Operator, including any custom deprecation messages, on both the pre- and post-installation pages of the OperatorHub. An olm.deprecations schema entry contains one or more of the following reference types, which indicates the deprecation scope. After the Operator is installed, any specified messages can be viewed as status conditions on the related Subscription object. Table 4.1.
Deprecation reference types Type Scope Status condition olm.package Represents the entire package PackageDeprecated olm.channel Represents one channel ChannelDeprecated olm.bundle Represents one bundle version BundleDeprecated Each reference type has their own requirements, as detailed in the following example. Example 4.4. Example olm.deprecations schema with each reference type schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support. 1 Each deprecation schema must have a package value, and that package reference must be unique across the catalog. There must not be an associated name field. 2 The olm.package schema must not include a name field, because it is determined by the package field defined earlier in the schema. 3 All message fields, for any reference type, must be a non-zero length and represented as an opaque text blob. 4 The name field for the olm.channel schema is required. 5 The name field for the olm.bundle schema is required. Note The deprecation feature does not consider overlapping deprecation, for example package versus channel versus bundle. Operator authors can save olm.deprecations schema entries as a deprecations.yaml file in the same directory as the package's index.yaml file: Example directory structure for a catalog with deprecations my-catalog └── my-operator ├── index.yaml └── deprecations.yaml Additional resources Updating or filtering a file-based catalog image 4.1.4. Properties Properties are arbitrary pieces of metadata that can be attached to file-based catalog schemas. The type field is a string that effectively specifies the semantic and syntactic meaning of the value field. The value can be any arbitrary JSON or YAML. OLM defines a handful of property types, again using the reserved olm.* prefix. 4.1.4.1. olm.package property The olm.package property defines the package name and version. This is a required property on bundles, and there must be exactly one of these properties. The packageName field must match the bundle's first-class package field, and the version field must be a valid semantic version. Example 4.5. olm.package property #PropertyPackage: { type: "olm.package" value: { packageName: string & !="" version: string & !="" } } 4.1.4.2. olm.gvk property The olm.gvk property defines the group/version/kind (GVK) of a Kubernetes API that is provided by this bundle. This property is used by OLM to resolve a bundle with this property as a dependency for other bundles that list the same GVK as a required API. The GVK must adhere to Kubernetes GVK validations. Example 4.6. olm.gvk property #PropertyGVK: { type: "olm.gvk" value: { group: string & !="" version: string & !="" kind: string & !="" } } 4.1.4.3. olm.package.required The olm.package.required property defines the package name and version range of another package that this bundle requires. For every required package property a bundle lists, OLM ensures there is an Operator installed on the cluster for the listed package and in the required version range. The versionRange field must be a valid semantic version (semver) range. 
Example 4.7. olm.package.required property #PropertyPackageRequired: { type: "olm.package.required" value: { packageName: string & !="" versionRange: string & !="" } } 4.1.4.4. olm.gvk.required The olm.gvk.required property defines the group/version/kind (GVK) of a Kubernetes API that this bundle requires. For every required GVK property a bundle lists, OLM ensures there is an Operator installed on the cluster that provides it. The GVK must adhere to Kubernetes GVK validations. Example 4.8. olm.gvk.required property #PropertyGVKRequired: { type: "olm.gvk.required" value: { group: string & !="" version: string & !="" kind: string & !="" } } 4.1.5. Example catalog With file-based catalogs, catalog maintainers can focus on Operator curation and compatibility. Because Operator authors have already produced Operator-specific catalogs for their Operators, catalog maintainers can build their catalog by rendering each Operator catalog into a subdirectory of the catalog's root directory. There are many possible ways to build a file-based catalog; the following steps outline a simple approach: Maintain a single configuration file for the catalog, containing image references for each Operator in the catalog: Example catalog configuration file name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317 Run a script that parses the configuration file and creates a new catalog from its references: Example script name=USD(yq eval '.name' catalog.yaml) mkdir "USDname" yq eval '.name + "/" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + "|" + USDcatalog + "/" + .name + "/index.yaml"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render "USDimage" > "USDfile" done opm generate dockerfile "USDname" indexImage=USD(yq eval '.repo + ":" + .tag' catalog.yaml) docker build -t "USDindexImage" -f "USDname.Dockerfile" . docker push "USDindexImage" 4.1.6. Guidelines Consider the following guidelines when maintaining file-based catalogs. 4.1.6.1. Immutable bundles The general advice with Operator Lifecycle Manager (OLM) is that bundle images and their metadata should be treated as immutable. If a broken bundle has been pushed to a catalog, you must assume that at least one of your users has upgraded to that bundle. Based on that assumption, you must release another bundle with an upgrade edge from the broken bundle to ensure users with the broken bundle installed receive an upgrade. OLM will not reinstall an installed bundle if the contents of that bundle are updated in the catalog. However, there are some cases where a change in the catalog metadata is preferred: Channel promotion: If you already released a bundle and later decide that you would like to add it to another channel, you can add an entry for your bundle in another olm.channel blob. New upgrade edges: If you release a new 1.2.z bundle version, for example 1.2.4 , but 1.3.0 is already released, you can update the catalog metadata for 1.3.0 to skip 1.2.4 . 4.1.6.2. Source control Catalog metadata should be stored in source control and treated as the source of truth. 
Updates to catalog images should include the following steps: Update the source-controlled catalog directory with a new commit. Build and push the catalog image. Use a consistent tagging taxonomy, such as :latest or :<target_cluster_version> , so that users can receive updates to a catalog as they become available. 4.1.7. CLI usage For instructions about creating file-based catalogs by using the opm CLI, see Managing custom catalogs . For reference documentation about the opm CLI commands related to managing file-based catalogs, see CLI tools . 4.1.8. Automation Operator authors and catalog maintainers are encouraged to automate their catalog maintenance with CI/CD workflows. Catalog maintainers can further improve on this by building GitOps automation to accomplish the following tasks: Check that pull request (PR) authors are permitted to make the requested changes, for example by updating their package's image reference. Check that the catalog updates pass the opm validate command. Check that the updated bundle or catalog image references exist, the catalog images run successfully in a cluster, and Operators from that package can be successfully installed. Automatically merge PRs that pass the checks. Automatically rebuild and republish the catalog image. 4.2. Red Hat-provided catalogs Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Red Hat provides several Operator catalogs that are included with OpenShift Container Platform by default. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) 4.2.1. About Red Hat-provided Operator catalogs The Red Hat-provided catalog sources are installed by default in the openshift-marketplace namespace, which makes the catalogs available cluster-wide in all namespaces. The following Operator catalogs are distributed by Red Hat: Catalog Index image Description redhat-operators registry.redhat.io/redhat/redhat-operator-index:v4.17 Red Hat products packaged and shipped by Red Hat. Supported by Red Hat. certified-operators registry.redhat.io/redhat/certified-operator-index:v4.17 Products from leading independent software vendors (ISVs). Red Hat partners with ISVs to package and ship. Supported by the ISV. redhat-marketplace registry.redhat.io/redhat/redhat-marketplace-index:v4.17 Certified software that can be purchased from Red Hat Marketplace . community-operators registry.redhat.io/redhat/community-operator-index:v4.17 Software maintained by relevant representatives in the redhat-openshift-ecosystem/community-operators-prod/operators GitHub repository. No official support. 
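The Red Hat-provided catalogs listed in the preceding table follow the consistent tagging taxonomy recommended in the source control guidelines earlier in this chapter. As a hedged sketch of applying the same approach to a custom file-based catalog image, where the registry path, image name, and tags are illustrative assumptions:
podman build . -f my-catalog.Dockerfile -t quay.io/example-org/my-catalog:v4.17
podman tag quay.io/example-org/my-catalog:v4.17 quay.io/example-org/my-catalog:latest
podman push quay.io/example-org/my-catalog:v4.17
podman push quay.io/example-org/my-catalog:latest
Publishing both a version-specific tag and a floating latest tag lets consumers pin a catalog to a cluster version while still being able to track updates, which mirrors how the default catalog tags are advanced during a cluster upgrade, as described below.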
During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OpenShift Container Platform 4.8 to 4.9, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from: registry.redhat.io/redhat/redhat-operator-index:v4.8 to: registry.redhat.io/redhat/redhat-operator-index:v4.9 4.3. Managing catalogs Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Cluster administrators can add catalogs , or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog. You can manage catalogs and extensions declaratively from the CLI by using custom resources (CRs). File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). It is a plain text-based (JSON or YAML) and declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible. Important Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OpenShift Container Platform that uses the Kubernetes version that removed the API. If your cluster is using custom catalogs, see Controlling Operator compatibility with OpenShift Container Platform versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades. 4.3.1. About catalogs in OLM v1 You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) v1 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images. Important If you try to install an Operator or extension that does not have a unique name, the installation might fail or lead to an unpredictable result. This occurs for the following reasons: If multiple catalogs are installed on a cluster, Operator Lifecycle Manager (OLM) v1 does not include a mechanism to specify a catalog when you install an Operator or extension. OLM v1 requires that all of the Operators and extensions that are available to install on a cluster use a unique name for their bundles and packages. Additional resources File-based catalogs 4.3.2. Red Hat-provided Operator catalogs in OLM v1 Operator Lifecycle Manager (OLM) v1 does not include Red Hat-provided Operator catalogs by default.
If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following custom resource (CR) examples show how to create a catalog resources for OLM v1. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see "Creating a pull secret for catalogs hosted on a secure registry". Example Red Hat Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1 1 Specify the interval for polling the remote registry for newer image digests. The default value is 24h . Valid units include seconds ( s ), minutes ( m ), and hours ( h ). To disable polling, set a zero value, such as 0s . Example Certified Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h Example Community Operators catalog apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h The following command adds a catalog to your cluster: Command syntax USD oc apply -f <catalog_name>.yaml 1 1 Specifies the catalog CR, such as redhat-operators.yaml . 4.3.3. Creating a pull secret for catalogs hosted on a private registry If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. Catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) Prerequisites Login credentials for the secure registry Docker or Podman installed on your workstation Procedure If you already have a .dockercfg file with login credentials for the secure registry, create a pull secret by running the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockercfg=<file_path>/.dockercfg \ --type=kubernetes.io/dockercfg \ --namespace=openshift-catalogd Example 4.9. 
Example command USD oc create secret generic redhat-cred \ --from-file=.dockercfg=/home/<username>/.dockercfg \ --type=kubernetes.io/dockercfg \ --namespace=openshift-catalogd If you already have a USDHOME/.docker/config.json file with login credentials for the secured registry, create a pull secret by running the following command: USD oc create secret generic <pull_secret_name> \ --from-file=.dockerconfigjson=<file_path>/.docker/config.json \ --type=kubernetes.io/dockerconfigjson \ --namespace=openshift-catalogd Example 4.10. Example command USD oc create secret generic redhat-cred \ --from-file=.dockerconfigjson=/home/<username>/.docker/config.json \ --type=kubernetes.io/dockerconfigjson \ --namespace=openshift-catalogd If you do not have a Docker configuration file with login credentials for the secure registry, create a pull secret by running the following command: USD oc create secret docker-registry <pull_secret_name> \ --docker-server=<registry_server> \ --docker-username=<username> \ --docker-password=<password> \ --docker-email=<email> \ --namespace=openshift-catalogd Example 4.11. Example command USD oc create secret docker-registry redhat-cred \ --docker-server=registry.redhat.io \ --docker-username=username \ --docker-password=password \ [email protected] \ --namespace=openshift-catalogd 4.3.4. Adding a catalog to a cluster To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) Prerequisites If you want to use a catalog that is hosted on a private registry, such as Red Hat-provided Operator catalogs from registry.redhat.io , you must have a pull secret scoped to the openshift-catalogd namespace. Catalogd cannot read global pull secrets from OpenShift Container Platform clusters. Catalogd can read references to secrets only in the namespace where it is deployed. Procedure Create a catalog custom resource (CR), similar to the following example: Example redhat-operators.yaml apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3 1 Specify the catalog's image in the spec.source.image field. 2 If your catalog is hosted on a secure registry, such as registry.redhat.io , you must create a pull secret scoped to the openshift-catalog namespace. 3 Specify the interval for polling the remote registry for newer image digests. The default value is 24h . Valid units include seconds ( s ), minutes ( m ), and hours ( h ). To disable polling, set a zero value, such as 0s . 
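Optionally, before applying the manifest in the next step, you can ask the API server to validate it without persisting the resource. This is a hedged, optional check rather than part of the documented procedure, and it assumes the ClusterCatalog custom resource definition is already installed on the cluster:
oc apply -f redhat-operators.yaml --dry-run=server
If the manifest is well formed, the command reports the resource as created (server dry run) without changing the cluster; otherwise it surfaces the validation error before you apply the catalog for real.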
Add the catalog to your cluster by running the following command: USD oc apply -f redhat-operators.yaml Example output catalog.catalogd.operatorframework.io/redhat-operators created Verification Run the following commands to verify the status of your catalog: Check if your catalog is available by running the following command: USD oc get clustercatalog Example output NAME AGE redhat-operators 20s Check the status of your catalog by running the following command: USD oc describe clustercatalog Example output Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2024-06-10T17:34:53Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 46075 UID: 83c0db3c-a553-41da-b279-9b3cddaa117d Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Type: image Status: 1 Conditions: Last Transition Time: 2024-06-10T17:35:15Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: https://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-06-10T17:35:10Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 4 Type: image Events: <none> 1 Describes the status of the catalog. 2 Displays the reason the catalog is in the current state. 3 Displays the phase of the installation process. 4 Displays the image reference of the catalog. 4.3.5. Deleting a catalog You can delete a catalog by deleting its custom resource (CR). Prerequisites You have a catalog installed. Procedure Delete a catalog by running the following command: USD oc delete clustercatalog <catalog_name> Example output catalog.catalogd.operatorframework.io "my-catalog" deleted Verification Verify that the catalog is deleted by running the following command: USD oc get clustercatalog 4.4. Creating catalogs Important Operator Lifecycle Manager (OLM) v1 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Catalog maintainers can create new catalogs in the file-based catalog format for use with Operator Lifecycle Manager (OLM) v1 on OpenShift Container Platform. Important Currently, Operator Lifecycle Manager (OLM) v1 cannot authenticate private registries, such as the Red Hat-provided Operator catalogs. This is a known issue. As a result, the OLM v1 procedures that rely on having the Red Hat Operators catalog installed do not work. ( OCPBUGS-36364 ) 4.4.1. Creating a file-based catalog image You can use the opm CLI to create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format. Prerequisites You have installed the opm CLI. You have podman version 1.9.3+. A bundle image is built and pushed to a registry that supports Docker v2-2 .
Procedure Initialize the catalog: Create a directory for the catalog by running the following command: USD mkdir <catalog_dir> Generate a Dockerfile that can build a catalog image by running the opm generate dockerfile command: USD opm generate dockerfile <catalog_dir> \ -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 1 1 Specify the official Red Hat base image by using the -i flag, otherwise the Dockerfile uses the default upstream image. The Dockerfile must be in the same parent directory as the catalog directory that you created in the step: Example directory structure . 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3 1 Parent directory 2 Catalog directory 3 Dockerfile generated by the opm generate dockerfile command Populate the catalog with the package definition for your Operator by running the opm init command: USD opm init <operator_name> \ 1 --default-channel=preview \ 2 --description=./README.md \ 3 --icon=./operator-icon.svg \ 4 --output yaml \ 5 > <catalog_dir>/index.yaml 6 1 Operator, or package, name 2 Channel that subscriptions default to if unspecified 3 Path to the Operator's README.md or other documentation 4 Path to the Operator's icon 5 Output format: JSON or YAML 6 Path for creating the catalog configuration file This command generates an olm.package declarative config blob in the specified catalog configuration file. Add a bundle to the catalog by running the opm render command: USD opm render <registry>/<namespace>/<bundle_image_name>:<tag> \ 1 --output=yaml \ >> <catalog_dir>/index.yaml 2 1 Pull spec for the bundle image 2 Path to the catalog configuration file Note Channels must contain at least one bundle. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <catalog_dir>/index.yaml file: Example channel entry --- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1 1 Ensure that you include the period ( . ) after <operator_name> but before the v in the version. Otherwise, the entry fails to pass the opm validate command. Validate the file-based catalog: Run the opm validate command against the catalog directory: USD opm validate <catalog_dir> Check that the error code is 0 : USD echo USD? Example output 0 Build the catalog image by running the podman build command: USD podman build . \ -f <catalog_dir>.Dockerfile \ -t <registry>/<namespace>/<catalog_image_name>:<tag> Push the catalog image to a registry: If required, authenticate with your target registry by running the podman login command: USD podman login <registry> Push the catalog image by running the podman push command: USD podman push <registry>/<namespace>/<catalog_image_name>:<tag> Additional resources opm CLI reference 4.4.2. Updating or filtering a file-based catalog image You can use the opm CLI to update or filter a catalog image that uses the file-based catalog format. By extracting the contents of an existing catalog image, you can modify the catalog as needed, for example: Adding packages Removing packages Updating existing package entries Detailing deprecation messages per package, channel, and bundle You can then rebuild the image as an updated version of the catalog. Note Alternatively, if you already have a catalog image on a mirror registry, you can use the oc-mirror CLI plugin to automatically prune any removed images from an updated source version of that catalog image while mirroring it to the target registry. 
For more information about the oc-mirror plugin and this use case, see the "Keeping your mirror registry content updated" section, and specifically the "Pruning images" subsection, of "Mirroring images for a disconnected installation using the oc-mirror plugin". Prerequisites You have the following on your workstation: The opm CLI. podman version 1.9.3+. A file-based catalog image. A catalog directory structure recently initialized on your workstation related to this catalog. If you do not have an initialized catalog directory, create the directory and generate the Dockerfile. For more information, see the "Initialize the catalog" step from the "Creating a file-based catalog image" procedure. Procedure Extract the contents of the catalog image in YAML format to an index.yaml file in your catalog directory: USD opm render <registry>/<namespace>/<catalog_image_name>:<tag> \ -o yaml > <catalog_dir>/index.yaml Note Alternatively, you can use the -o json flag to output in JSON format. Modify the contents of the resulting index.yaml file to your specifications: Important After a bundle has been published in a catalog, assume that one of your users has installed it. Ensure that all previously published bundles in a catalog have an update path to the current or newer channel head to avoid stranding users that have that version installed. To add an Operator, follow the steps for creating package, bundle, and channel entries in the "Creating a file-based catalog image" procedure. To remove an Operator, delete the set of olm.package , olm.channel , and olm.bundle blobs that relate to the package. The following example shows a set that must be deleted to remove the example-operator package from the catalog: Example 4.12. Example removed entries --- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle --- To add or update deprecation messages for an Operator, ensure there is a deprecations.yaml file in the same directory as the package's index.yaml file. For information on the deprecations.yaml file format, see "olm.deprecations schema". Save your changes. Validate the catalog: USD opm validate <catalog_dir> Rebuild the catalog: USD podman build . 
\ -f <catalog_dir>.Dockerfile \ -t <registry>/<namespace>/<catalog_image_name>:<tag> Push the updated catalog image to a registry: USD podman push <registry>/<namespace>/<catalog_image_name>:<tag> Verification In the web console, navigate to the OperatorHub configuration resource in the Administration Cluster Settings Configuration page. Add the catalog source or update the existing catalog source to use the pull spec for your updated catalog image. For more information, see "Adding a catalog source to a cluster" in the "Additional resources" of this section. After the catalog source is in a READY state, navigate to the Operators OperatorHub page and check that the changes you made are reflected in the list of Operators. Additional resources Packaging format Schemas olm.deprecations schema Mirroring images for a disconnected installation using the oc-mirror plugin Keeping your mirror registry content updated Adding a catalog source to a cluster | [
"Ignore everything except non-object .json and .yaml files **/* !*.json !*.yaml **/objects/*.json **/objects/*.yaml",
"catalog ├── packageA │ └── index.yaml ├── packageB │ ├── .indexignore │ ├── index.yaml │ └── objects │ └── packageB.v0.1.0.clusterserviceversion.yaml └── packageC └── index.json └── deprecations.yaml",
"_Meta: { // schema is required and must be a non-empty string schema: string & !=\"\" // package is optional, but if it's defined, it must be a non-empty string package?: string & !=\"\" // properties is optional, but if it's defined, it must be a list of 0 or more properties properties?: [... #Property] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null }",
"#Package: { schema: \"olm.package\" // Package name name: string & !=\"\" // A description of the package description?: string // The package's default channel defaultChannel: string & !=\"\" // An optional icon icon?: { base64data: string mediatype: string } }",
"#Channel: { schema: \"olm.channel\" package: string & !=\"\" name: string & !=\"\" entries: [...#ChannelEntry] } #ChannelEntry: { // name is required. It is the name of an `olm.bundle` that // is present in the channel. name: string & !=\"\" // replaces is optional. It is the name of bundle that is replaced // by this entry. It does not have to be present in the entry list. replaces?: string & !=\"\" // skips is optional. It is a list of bundle names that are skipped by // this entry. The skipped bundles do not have to be present in the // entry list. skips?: [...string & !=\"\"] // skipRange is optional. It is the semver range of bundle versions // that are skipped by this entry. skipRange?: string & !=\"\" }",
"#Bundle: { schema: \"olm.bundle\" package: string & !=\"\" name: string & !=\"\" image: string & !=\"\" properties: [...#Property] relatedImages?: [...#RelatedImage] } #Property: { // type is required type: string & !=\"\" // value is required, and it must not be null value: !=null } #RelatedImage: { // image is the image reference image: string & !=\"\" // name is an optional descriptive name for an image that // helps identify its purpose in the context of the bundle name?: string & !=\"\" }",
"schema: olm.deprecations package: my-operator 1 entries: - reference: schema: olm.package 2 message: | 3 The 'my-operator' package is end of life. Please use the 'my-operator-new' package for support. - reference: schema: olm.channel name: alpha 4 message: | The 'alpha' channel is no longer supported. Please switch to the 'stable' channel. - reference: schema: olm.bundle name: my-operator.v1.68.0 5 message: | my-operator.v1.68.0 is deprecated. Uninstall my-operator.v1.68.0 and install my-operator.v1.72.0 for support.",
"my-catalog └── my-operator ├── index.yaml └── deprecations.yaml",
"#PropertyPackage: { type: \"olm.package\" value: { packageName: string & !=\"\" version: string & !=\"\" } }",
"#PropertyGVK: { type: \"olm.gvk\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"#PropertyPackageRequired: { type: \"olm.package.required\" value: { packageName: string & !=\"\" versionRange: string & !=\"\" } }",
"#PropertyGVKRequired: { type: \"olm.gvk.required\" value: { group: string & !=\"\" version: string & !=\"\" kind: string & !=\"\" } }",
"name: community-operators repo: quay.io/community-operators/catalog tag: latest references: - name: etcd-operator image: quay.io/etcd-operator/index@sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03 - name: prometheus-operator image: quay.io/prometheus-operator/index@sha256:e258d248fda94c63753607f7c4494ee0fcbe92f1a76bfdac795c9d84101eb317",
"name=USD(yq eval '.name' catalog.yaml) mkdir \"USDname\" yq eval '.name + \"/\" + .references[].name' catalog.yaml | xargs mkdir for l in USD(yq e '.name as USDcatalog | .references[] | .image + \"|\" + USDcatalog + \"/\" + .name + \"/index.yaml\"' catalog.yaml); do image=USD(echo USDl | cut -d'|' -f1) file=USD(echo USDl | cut -d'|' -f2) opm render \"USDimage\" > \"USDfile\" done opm generate dockerfile \"USDname\" indexImage=USD(yq eval '.repo + \":\" + .tag' catalog.yaml) docker build -t \"USDindexImage\" -f \"USDname.Dockerfile\" . docker push \"USDindexImage\"",
"registry.redhat.io/redhat/redhat-operator-index:v4.8",
"registry.redhat.io/redhat/redhat-operator-index:v4.9",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: <poll_interval_duration> 1",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: certified-operators spec: source: type: image image: ref: registry.redhat.io/redhat/certified-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: community-operators spec: source: type: image image: ref: registry.redhat.io/redhat/community-operator-index:v4.17 pullSecret: <pull_secret_name> pollInterval: 24h",
"oc apply -f <catalog_name>.yaml 1",
"oc create secret generic <pull_secret_name> --from-file=.dockercfg=<file_path>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockercfg=/home/<username>/.dockercfg --type=kubernetes.io/dockercfg --namespace=openshift-catalogd",
"oc create secret generic <pull_secret_name> --from-file=.dockerconfigjson=<file_path>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret generic redhat-cred --from-file=.dockerconfigjson=/home/<username>/.docker/config.json --type=kubernetes.io/dockerconfigjson --namespace=openshift-catalogd",
"oc create secret docker-registry <pull_secret_name> --docker-server=<registry_server> --docker-username=<username> --docker-password=<password> --docker-email=<email> --namespace=openshift-catalogd",
"oc create secret docker-registry redhat-cred --docker-server=registry.redhat.io --docker-username=username --docker-password=password [email protected] --namespace=openshift-catalogd",
"apiVersion: catalogd.operatorframework.io/v1alpha1 kind: ClusterCatalog metadata: name: redhat-operators spec: source: type: image image: ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 1 pullSecret: <pull_secret_name> 2 pollInterval: <poll_interval_duration> 3",
"oc apply -f redhat-operators.yaml",
"catalog.catalogd.operatorframework.io/redhat-operators created",
"oc get clustercatalog",
"NAME AGE redhat-operators 20s",
"oc describe clustercatalog",
"Name: redhat-operators Namespace: Labels: <none> Annotations: <none> API Version: catalogd.operatorframework.io/v1alpha1 Kind: ClusterCatalog Metadata: Creation Timestamp: 2024-06-10T17:34:53Z Finalizers: catalogd.operatorframework.io/delete-server-cache Generation: 1 Resource Version: 46075 UID: 83c0db3c-a553-41da-b279-9b3cddaa117d Spec: Source: Image: Pull Secret: redhat-cred Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Type: image Status: 1 Conditions: Last Transition Time: 2024-06-10T17:35:15Z Message: Reason: UnpackSuccessful 2 Status: True Type: Unpacked Content URL: https://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json Observed Generation: 1 Phase: Unpacked 3 Resolved Source: Image: Last Poll Attempt: 2024-06-10T17:35:10Z Ref: registry.redhat.io/redhat/redhat-operator-index:v4.17 Resolved Ref: registry.redhat.io/redhat/redhat-operator-index@sha256:f2ccc079b5e490a50db532d1dc38fd659322594dcf3e653d650ead0e862029d9 4 Type: image Events: <none>",
"oc delete clustercatalog <catalog_name>",
"catalog.catalogd.operatorframework.io \"my-catalog\" deleted",
"oc get clustercatalog",
"mkdir <catalog_dir>",
"opm generate dockerfile <catalog_dir> -i registry.redhat.io/openshift4/ose-operator-registry-rhel9:v4.17 1",
". 1 ├── <catalog_dir> 2 └── <catalog_dir>.Dockerfile 3",
"opm init <operator_name> \\ 1 --default-channel=preview \\ 2 --description=./README.md \\ 3 --icon=./operator-icon.svg \\ 4 --output yaml \\ 5 > <catalog_dir>/index.yaml 6",
"opm render <registry>/<namespace>/<bundle_image_name>:<tag> \\ 1 --output=yaml >> <catalog_dir>/index.yaml 2",
"--- schema: olm.channel package: <operator_name> name: preview entries: - name: <operator_name>.v0.1.0 1",
"opm validate <catalog_dir>",
"echo USD?",
"0",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman login <registry>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>",
"opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml > <catalog_dir>/index.yaml",
"--- defaultChannel: release-2.7 icon: base64data: <base64_string> mediatype: image/svg+xml name: example-operator schema: olm.package --- entries: - name: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.0' - name: example-operator.v2.7.1 replaces: example-operator.v2.7.0 skipRange: '>=2.6.0 <2.7.1' - name: example-operator.v2.7.2 replaces: example-operator.v2.7.1 skipRange: '>=2.6.0 <2.7.2' - name: example-operator.v2.7.3 replaces: example-operator.v2.7.2 skipRange: '>=2.6.0 <2.7.3' - name: example-operator.v2.7.4 replaces: example-operator.v2.7.3 skipRange: '>=2.6.0 <2.7.4' name: release-2.7 package: example-operator schema: olm.channel --- image: example.com/example-inc/example-operator-bundle@sha256:<digest> name: example-operator.v2.7.0 package: example-operator properties: - type: olm.gvk value: group: example-group.example.io kind: MyObject version: v1alpha1 - type: olm.gvk value: group: example-group.example.io kind: MyOtherObject version: v1beta1 - type: olm.package value: packageName: example-operator version: 2.7.0 - type: olm.bundle.object value: data: <base64_string> - type: olm.bundle.object value: data: <base64_string> relatedImages: - image: example.com/example-inc/example-related-image@sha256:<digest> name: example-related-image schema: olm.bundle ---",
"opm validate <catalog_dir>",
"podman build . -f <catalog_dir>.Dockerfile -t <registry>/<namespace>/<catalog_image_name>:<tag>",
"podman push <registry>/<namespace>/<catalog_image_name>:<tag>"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/extensions/catalogs |
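A quick way to sanity-check the rebuilt catalog is to render the pushed image again and confirm that the entries you deleted are gone. This is only a sketch: the image reference and the example-operator package name are the placeholder values from the procedure above, not real artifacts.

# Render the updated catalog and search for the package that was removed.
# No matching lines (grep exits non-zero) indicates that the package is no longer present.
$ opm render <registry>/<namespace>/<catalog_image_name>:<tag> -o yaml | grep 'package: example-operator'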
probe::tcpmib.PassiveOpens | probe::tcpmib.PassiveOpens Name probe::tcpmib.PassiveOpens - Count the passive creation of a socket Synopsis tcpmib.PassiveOpens Values sk pointer to the struct sock being acted on op value to be added to the counter (default value of 1) Description The packet pointed to by skb is filtered by the function tcpmib_filter_key . If the packet passes the filter, it is counted in the global PassiveOpens (equivalent to SNMP's MIB TCP_MIB_PASSIVEOPENS) | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-tcpmib-passiveopens
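For illustration, the probe can be used from a short SystemTap script such as the following sketch. It assumes the systemtap package and the matching kernel debuginfo are installed; the counter name and the 10-second window are arbitrary choices, not part of the tapset.

# Accumulate the op value delivered by tcpmib.PassiveOpens, then report and exit after 10 seconds.
$ stap -e 'global n
probe tcpmib.PassiveOpens { n += op }
probe timer.s(10) { printf("passive TCP opens in the last 10 seconds: %d\n", n); exit() }'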
Chapter 77. Test scenarios (legacy) designer in Business Central | Chapter 77. Test scenarios (legacy) designer in Business Central Red Hat Process Automation Manager currently supports both the new Test Scenarios designer and the former Test Scenarios (Legacy) designer. The default designer is the new test scenarios designer, which supports testing of both rules and DMN models and provides an enhanced overall user experience with test scenarios. If required, you can continue to use the legacy test scenarios designer, which supports rule-based test scenarios only. 77.1. Creating and running a test scenario (legacy) You can create test scenarios in Business Central to test the functionality of business rule data before deployment. A basic test scenario must have at least the following data: Related data objects GIVEN facts EXPECT results Note The legacy test scenarios designer supports the LocalDate java built-in data type. You can use the LocalDate java built-in data type in the dd-mmm-yyyy date format, for example, 17-Oct-2020. With this data, the test scenario can validate the expected and actual results for that rule instance based on the defined facts. You can also add a CALL METHOD and any available globals to a test scenario, but these scenario settings are optional. Procedure In Business Central, go to Menu → Design → Projects and click the project name. Click Add Asset → Test Scenarios (Legacy) . Enter an informative Test Scenario name and select the appropriate Package . The package that you specify must be the same package where the required rule assets have been assigned or will be assigned. You can import data objects from any package into the asset's designer. Click Ok to create the test scenario. The new test scenario is now listed in the Test Scenarios panel of the Project Explorer . Click the Data Objects tab to verify that all data objects required for the rules that you want to test are listed. If not, click New item to import the needed data objects from other packages, or create data objects within your package. After all data objects are in place, return to the Model tab of the test scenarios designer and define the GIVEN and EXPECT data for the scenario, based on the available data objects. Figure 77.1. The test scenarios designer The GIVEN section defines the input facts for the test. For example, if an Underage rule in the project declines loan applications for applicants under the age of 21, then the GIVEN facts in the test scenario could be Applicant with age set to some integer less than 21. The EXPECT section defines the expected results based on the GIVEN input facts. That is, GIVEN the input facts, EXPECT these other facts to be valid or entire rules to be activated. For example, with the given facts of an applicant under the age of 21 in the scenario, the EXPECT results could be LoanApplication with approved set to false (as a result of the underage applicant), or could be the activation of the Underage rule as a whole. Optional: Add a CALL METHOD and any globals to the test scenario: CALL METHOD: Use this to invoke a method from another fact when the rule execution is initiated. Click CALL METHOD , select a fact, and click to select the method to invoke. You can invoke any Java class methods (such as methods from an ArrayList) from the Java library or from a JAR that was imported for the project (if applicable). globals: Use this to add any global variables in the project that you want to validate in the test scenario.
Click globals to select the variable to be validated, and then in the test scenarios designer, click the global name and define field values to be applied to the global variable. If no global variables are available, then they must be created as new assets in Business Central. Global variables are named objects that are visible to the decision engine but are different from the objects for facts. Changes in the object of a global do not trigger the re-evaluation of rules. Click More at the bottom of the test scenarios designer to add other data blocks to the same scenario file as needed. After you have defined all GIVEN , EXPECT , and other data for the scenario, click Save in the test scenarios designer to save your work. Click Run scenario in the upper-right corner to run this .scenario file, or click Run all scenarios to run all saved .scenario files in the project package (if there are multiple). Although the Run scenario option does not require the individual .scenario file to be saved, the Run all scenarios option does require all .scenario files to be saved. If the test fails, address any problems described in the Alerts message at the bottom of the window, review all components in the scenario, and try again to validate the scenario until the scenario passes. Click Save in the test scenarios designer to save your work after all changes are complete. 77.1.1. Adding GIVEN facts in test scenarios (legacy) The GIVEN section defines input facts for the test. For example, if an Underage rule in the project declines loan applications for applicants under the age of 21, then the GIVEN facts in the test scenario could be Applicant with age set to some integer less than 21. Prerequisites All data objects required for your test scenario have been created or imported and are listed in the Data Objects tab of the Test Scenarios (Legacy) designer. Procedure In the Test Scenarios (Legacy) designer, click GIVEN to open the New input window with the available facts. Figure 77.2. Add GIVEN input to the test scenario The list includes the following options, depending on the data objects available in the Data Objects tab of the test scenarios designer: Insert a new fact: Use this to add a fact and modify its field values. Enter a variable for the fact as the Fact name . Modify an existing fact: (Appears only after another fact has been added.) Use this to specify a previously inserted fact to be modified in the decision engine between executions of the scenario. Delete an existing fact: (Appears only after another fact has been added.) Use this to specify a previously inserted fact to be deleted from the decision engine between executions of the scenario. Activate rule flow group: Use this to specify a rule flow group to be activated so that all rules within that group can be tested. Choose a fact for the desired input option and click Add . For example, set Insert a new fact: to Applicant and enter a or app or any other variable for the Fact name . Click the fact in the test scenarios designer and select the field to be modified. Figure 77.3. Modify a fact field Click the edit icon ( ) and select from the following field values: Literal value: Creates an open field in which you enter a specific literal value. Bound variable: Sets the value of the field to the fact bound to a selected variable. The field type must match the bound variable type. Create new fact: Enables you to create a new fact and assign it as a field value of the parent fact. 
Then you can click the child fact in the test scenarios designer and likewise assign field values or nest other facts similarly. Continue adding any other GIVEN input data for the scenario and click Save in the test scenarios designer to save your work. 77.1.2. Adding EXPECT results in test scenarios (legacy) The EXPECT section defines the expected results based on the GIVEN input facts. That is, GIVEN the input facts, EXPECT other specified facts to be valid or entire rules to be activated. For example, with the given facts of an applicant under the age of 21 in the scenario, the EXPECT results could be LoanApplication with approved set to false (as a result of the underage applicant), or could be the activation of the Underage rule as a whole. Prerequisites All data objects required for your test scenario have been created or imported and are listed in the Data Objects tab of the Test Scenarios (Legacy) designer. Procedure In the Test Scenarios (Legacy) designer, click EXPECT to open the New expectation window with the available facts. Figure 77.4. Add EXPECT results to the test scenario The list includes the following options, depending on the data in the GIVEN section and the data objects available in the Data Objects tab of the test scenarios designer: Rule: Use this to specify a particular rule in the project that is expected to be activated as a result of the GIVEN input. Type the name of a rule that is expected to be activated or select it from the list of rules, and then in the test scenarios designer, specify the number of times the rule should be activated. Fact value: Use this to select a fact and define values for it that are expected to be valid as a result of the facts defined in the GIVEN section. The facts are listed by the Fact name previously defined for the GIVEN input. Any fact that matches: Use this to validate that at least one fact with the specified values exists as a result of the GIVEN input. Choose a fact for the desired expectation (such as Fact value: application ) and click Add or OK . Click the fact in the test scenarios designer and select the field to be added and modified. Figure 77.5. Modify a fact field Set the field values to what is expected to be valid as a result of the GIVEN input (such as approved | equals | false ). Note In the legacy test scenarios designer, you can use ["value1", "value2"] string format in the EXPECT field to validate the list of strings. Continue adding any other EXPECT input data for the scenario and click Save in the test scenarios designer to save your work. After you have defined and saved all GIVEN , EXPECT , and other data for the scenario, click Run scenario in the upper-right corner to run this .scenario file, or click Run all scenarios to run all saved .scenario files in the project package (if there are multiple). Although the Run scenario option does not require the individual .scenario file to be saved, the Run all scenarios option does require all .scenario files to be saved. If the test fails, address any problems described in the Alerts message at the bottom of the window, review all components in the scenario, and try again to validate the scenario until the scenario passes. Click Save in the test scenarios designer to save your work after all changes are complete. | null | https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-scenarios-legacy-designer-con |
Chapter 3. Debug symbols for Red Hat build of OpenJDK 11 | Chapter 3. Debug symbols for Red Hat build of OpenJDK 11 Debug symbols help in investigating a crash in Red Hat build of OpenJDK applications. 3.1. Installing the debug symbols This procedure describes how to install the debug symbols for Red Hat build of OpenJDK. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Procedure To install the debug symbols, enter the following command: These commands install java-11-openjdk-debuginfo , java-11-openjdk-headless-debuginfo , and additional packages that provide debug symbols for Red Hat build of OpenJDK 11 binaries. These packages are not self-sufficient and do not contain executable binaries. Note The debuginfo-install command is provided by the yum-utils package. To verify that the debug symbols are installed, enter the following command: 3.2. Checking the installation location of debug symbols This procedure explains how to find the location of debug symbols. Note If the debuginfo package is installed, but you cannot get the installation location of the package, then check if the correct package and java versions are installed. After confirming the versions, check the location of debug symbols again. Prerequisites Installed the gdb package on your local system. You can issue the sudo yum install gdb command on your CLI to install this package on your local system. Installed the debug symbols package. See Installing the debug symbols . Procedure To find the location of debug symbols, use gdb together with the which java command: Use the following commands to explore the *-debug directory to see all the debug versions of the libraries, which include java , javac , and javah : Note The javac and javah tools are provided by the java-11-openjdk-devel package. You can install the package by using the command: sudo debuginfo-install java-11-openjdk-devel . 3.3. Checking the configuration of debug symbols You can check and set configurations for debug symbols. Enter the following command to get a list of the installed packages: If some debug information packages have not been installed, enter the following command to install the missing packages: Run the following command if you want to hit a specific breakpoint: The above command completes the following tasks: Handles the SIGSEGV error, as the JVM uses SEGV for stack overflow checks. Sets pending breakpoints to yes . Sets a breakpoint on the JavaCalls::call function, which is the function that starts the application in HotSpot (libjvm.so). 3.4. Configuring the debug symbols in a fatal error log file When a Java application goes down due to a JVM crash, a fatal error log file is generated, for example: hs_error , java_error . These error log files are generated in the current working directory of the application. The crash file contains information from the stack. Procedure You can remove all the debug symbols by using the strip -g command. The following code shows an example of a non-stripped hs_error file: The following code shows an example of a stripped hs_error file: Enter the following command to check that the debug symbols and the fatal error log file come from the same Java version: Note You can also use the sudo update-alternatives --config 'java' command to complete this check. Use the nm command to ensure that libjvm.so has ELF data and text symbols: Additional resources The crash file hs_error is incomplete without the debug symbols installed.
For more information, see Java application down due to JVM crash . | [
"sudo debuginfo-install java-11-openjdk sudo debuginfo-install java-11-openjdk-headless",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5/bin/java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug...done. (gdb)",
"gdb which java Reading symbols from /usr/bin/java...Reading symbols from /usr/lib/debug/usr/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5/bin/java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug...done. (gdb)",
"cd /usr/lib/debug/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5",
"tree OJDK 11 version: └── java-11-openjdk-11.0.14.0.9-2.el8_5 ├── bin │ │ │── java-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug │ ├── javac-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug │ ├── javadoc-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug │ └── lib ├── jexec-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug ├── jli │ └── libjli.so-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug ├── jspawnhelper-java-11-openjdk-11.0.14.0.9-2.el8_5.x86_64.debug │",
"sudo yum list installed | grep 'java-11-openjdk-debuginfo'",
"sudo yum debuginfo-install glibc-2.28-151.el8.x86_64 libgcc-8.4.1-1.el8.x86_64 libstdc++-8.4.1-1.el8.x86_64 sssd-client-2.4.0-9.el8.x86_64 zlib-1.2.11-17.el8.x86_64",
"gdb -ex 'handle SIGSEGV noprint nostop pass' -ex 'set breakpoint pending on' -ex 'break JavaCalls::call' -ex 'run' --args java ./HelloWorld",
"Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xb83d2a] Unsafe_SetLong+0xda j sun.misc.Unsafe.putLong(Ljava/lang/Object;JJ)V+0 j Crash.main([Ljava/lang/String;)V+8 v ~StubRoutines::call_stub V [libjvm.so+0x6c0e65] JavaCalls::call_helper(JavaValue*, methodHandle*, JavaCallArguments*, Thread*)+0xc85 V [libjvm.so+0x73cc0d] jni_invoke_static(JNIEnv_*, JavaValue*, _jobject*, JNICallType, _jmethodID*, JNI_ArgumentPusher*, Thread*) [clone .constprop.1]+0x31d V [libjvm.so+0x73fd16] jni_CallStaticVoidMethod+0x186 C [libjli.so+0x48a2] JavaMain+0x472 C [libpthread.so.0+0x9432] start_thread+0xe2",
"Stack: [0x00007ff7e1a44000,0x00007ff7e1b44000], sp=0x00007ff7e1b42850, free space=1018k Native frames: (J=compiled Java code, j=interpreted, Vv=VM code, C=native code) V [libjvm.so+0xa7ecab] j sun.misc.Unsafe.putAddress(JJ)V+0 j Crash.crash()V+5 j Crash.main([Ljava/lang/String;)V+0 v ~StubRoutines::call_stub V [libjvm.so+0x67133a] V [libjvm.so+0x682bca] V [libjvm.so+0x6968b6] C [libjli.so+0x3989] C [libpthread.so.0+0x7dd5] start_thread+0xc5",
"java -version",
"/usr/lib/debug/usr/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5/lib/server/libjvm.so-11.0.14.0.9-2.el8_5.x86_64.debug"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/installing_and_using_red_hat_build_of_openjdk_11_on_rhel/installing-and-configuring-debug-symbols |
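As an illustrative wrap-up, the checks from sections 3.2 and 3.4 can be combined into a short shell sketch. The version string and paths below are the ones used in this chapter's examples and are assumptions; they differ on other systems and OpenJDK builds.

# Confirm that the separate debug file still carries text (T) and data (D/B) symbols.
$ debugfile=/usr/lib/debug/usr/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5/lib/server/libjvm.so-11.0.14.0.9-2.el8_5.x86_64.debug
$ nm "$debugfile" | grep -m 5 ' [TtDdBb] '
# Check in gdb that HotSpot symbols such as JavaCalls::call resolve for the installed libjvm.so.
$ gdb -q -batch -ex 'info functions JavaCalls::call' /usr/lib/jvm/java-11-openjdk-11.0.14.0.9-2.el8_5/lib/server/libjvm.so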