Dataset record fields: title, content, commands, url
function::sprint_syms
function::sprint_syms Name function::sprint_syms - Return stack for kernel addresses from string Synopsis Arguments callers String with list of hexadecimal (kernel) addresses Description Perform a symbolic lookup of the addresses in the given string, which are assumed to be the result of prior calls to stack , callers , and similar functions. Returns a simple backtrace from the given hex string, one line per address. Each line includes the symbol name (or the hex address if the symbol couldn't be resolved) and the module name (if found), as obtained from symdata . It also includes the offset from the start of the function if found; otherwise the offset is added to the module (if found, between brackets). Returns the backtrace as a string (each line terminated by a newline character). Note that the returned stack will be truncated to MAXSTRINGLEN; to print fuller and richer stacks, use print_syms .
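A minimal usage sketch, assuming an arbitrary probe point ( vfs_read ) and a one-shot exit, neither of which comes from the tapset reference itself: pass the string produced by callers to sprint_syms and print the result.
# Print a symbolic backtrace once from a kernel function probe, then exit
stap -e 'probe kernel.function("vfs_read") { printf("%s", sprint_syms(callers(-1))); exit() }'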
[ "sprint_syms(callers:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sprint-syms
Chapter 29. Installation and Booting
Chapter 29. Installation and Booting Graphics cards using the ast module can now be used during installation Due to missing dependencies for the ast module in the installation system, graphics cards that rely on this module were unable to be used during installation of Red Hat Enterprise Linux 7. These dependencies have now been added. (BZ#1272658) Installations can now be performed on disks containing invalid or unsupported partition tables. Previously, when attempting to install Red Hat Enterprise Linux 7 on a disk with a corrupt or unsupported partition table, the installation failed, most commonly when attempting to write to the disk. Support for the removal of invalid and unsupported partition tables has been added, and installations can now be performed on disks with such partition tables. (BZ#1266199) Multiple inst.dd options are now supported to load driver disks The job for loading driver disks based on the inst.dd option was scheduled with a unique option. When multiple inst.dd sources were specified as boot options, only the last one was actually loaded and applied. This update ensures the job is no longer called as unique. As a result, multiple inst.dd boot options can now be specified to provide drivers via multiple driver update images from different sources. (BZ#1268792) Help for the subscription manager screen during installation The installer's built-in help system now includes information regarding the subscription manager screen. (BZ#1260071) The Initial Setup utility starts correctly Due to a race condition between the initial-setup-text service and the initial-setup-graphical service, the interface of the Initial Setup utility sometimes started incorrectly. The two services have now been combined into a single service, initial-setup . The original services are still available for compatibility, but are not used by default. As a result, the interface now displays correctly. (BZ#1249598) VNC installation using IPv6 works correctly Due to an error in the processing of IPv6 addresses, IPv6 address lookup failed. Consequently, it was not possible to install through VNC using IPv6. This bug has been fixed. (BZ#1267872) HyperPAV aliases used during installation are now available on the installed system Previously, HyperPAV aliases activated during installation were not correctly configured on the installed system. HyperPAV handling has now been improved, and any HyperPAV aliases used during installation are now automatically configured on the installed system. (BZ#1031589) Errors in custom partitioning are correctly detected Previously, errors in custom partitioning were not displayed to the user properly, allowing the installation to continue with an invalid custom partition configuration, leading to unexpected behavior. This bug has been fixed and errors in custom partitioning are now correctly reported to the user so they can be adjusted before continuing the installation. (BZ# 1269195 ) Static routes configured during installation are now automatically configured on the installed system Previously, static route configuration files were not copied from the installation environment to the installed system. Consequently, static route configuration during installation was lost after the installation finished. These files are now copied, and static routes configured during installation are automatically configured on the installed system. 
(BZ#1255801) The grub2-mkconfig utility now honors certain grubby configuration variables Previously, when grubby added some entries to the grub configuration file, debug entries in particular, grub2-mkconfig failed to recognize and replicate those entries when re-run. This update ensures that if MAKEDEBUG=yes is specified in /etc/sysconfig/kernel , grub2-mkconfig does replicate the new grubby configuration entries. (BZ#1226325) GRUB2 is now correctly configured when upgrading the kernel and redhat-release-* Previously, if a redhat-release-* package and a kernel package were present in the same Yum transaction, the GRUB2 boot loader was reconfigured incorrectly. As a consequence, GRUB2 failed to boot the newly installed kernel. With this update, GRUB2 is now correctly reconfigured and can boot the new kernel in this situation. (BZ#1289314) Kickstart files valid for Red Hat Enterprise Linux 6 are now correctly recognized by ksvalidator Previously, when using the ksvalidator utility to validate a Kickstart file made for Red Hat Enterprise Linux 6 that uses the logvol command with the --reserved-percent option, ksvalidator incorrectly stated that --reserved-percent is not a valid option. This bug has been fixed. (BZ# 1290244 ) Anaconda no longer crashes when adding iSCSI devices Previously, the Anaconda installer terminated unexpectedly when attempting to add certain iSCSI devices using the Add a disk button in the Storage screen. This bug has now been fixed. (BZ# 1255280 ) The Anaconda installer correctly allows adjustment of a problematic disk selection Previously, if a problem occurred with the selection of disks during installation of Red Hat Enterprise Linux 7, an error was displayed only after the installation started, causing the installation to fail. With this update, a warning is displayed at the proper time, allowing the disk selection to be adjusted before proceeding. (BZ#1265330) The anaconda-user-help package is now upgraded correctly The anaconda-user-help package was not upgraded correctly when upgrading from Red Hat Enterprise Linux 7.1. This has been fixed and the package is now upgraded correctly. (BZ#1275285) A wider variety of partitions can be used as /boot Previously, the GRUB2 boot loader only supported 8-bit device node minor numbers. Consequently, boot loader installation failed on device nodes with minor numbers larger than 255 . All valid Linux device node minor numbers are now supported, and as a result a wider variety of partitions can be used as /boot partitions. (BZ# 1279599 ) Incorrect escaping of the / character in systemd no longer prevents the system from booting Previously, systemd incorrectly handled the LABEL=/ option in the initial RAM disk (initrd). As a consequence, the label was not found, and the system failed to boot when the root partition LABEL included the / character. With this update, / is escaped correctly in the described situation, and the system no longer fails to boot. Updating to a higher minor version of Red Hat Enterprise Linux updates the kernel and rebuilds the initrd . You can also rebuild the initrd by running the dracut -f command. (BZ#1306126) The default size of the /boot partition is now 1 GB In previous releases of Red Hat Enterprise Linux 7, the default size of the /boot partition was set to 500 MB. This could lead to problems on systems with multiple kernels and additional packages such as kernel-debuginfo installed.
The /boot partition could become full or almost full in such a scenario, which then prevented the system from upgrading and required manual cleanup to free additional space. In Red Hat Enterprise Linux 7.3, the default size of the /boot partition is increased to 1 GB, and these problems no longer occur on newly installed systems. Note that installations made with earlier versions will not have their /boot partitions resized, and may still require manual cleanup in order to upgrade. (BZ#1369837) biosboot and prepboot are now included in the Kickstart file after installation When a Kickstart file included instructions to create biosboot or prepboot partitions, the Blivet module did not pass this information in Kickstart data. Consequently, after a Kickstart installation, the Kickstart file on the newly installed system did not include the options for creating biosboot and prepboot partitions and could not be reused successfully on other systems. With this update, the Kickstart output includes these options as expected, and the Kickstart file can be used on other systems to create the biosboot and prepboot partitions. (BZ# 1242666 ) os-prober now uses device mapper alias names in the boot loader configuration The os-prober component previously used the numeric device mapper device in the boot loader configuration. After reboot, when the installer disk image was no longer mounted, the number changed, which rendered the boot entry unusable. Consequently, when two instances of Red Hat Enterprise Linux were installed on one machine, one of them failed to boot. To fix this bug, os-prober now uses device mapper alias names instead of the direct enumerated device mapper names. Because the alias names are more stable, the boot entry works as expected in the described situation. (BZ# 1300262 ) Installations on IBM z Systems now generate correct Kickstart files Previously, the anaconda-ks.cfg file, which is a Kickstart file generated during system installation and which contains all selections made during the install process, represented disk sizes as decimal numbers when installing on IBM z Systems DASDs. This bug caused the Kickstart file to be invalid because only integers are accepted when specifying disk size, and users had to manually edit the file before using it to reproduce the installation. This bug has been fixed, and Kickstart files generated during installation on IBM z Systems can now be used in subsequent installations without any editing. (BZ#1257997) Formatting DASDs works correctly during a text-based installation Previously, a bug prevented DASDs from being correctly formatted during a text-based installation. As a consequence, DASDs that were unformatted or incorrectly formatted had to be manually formatted before use. This bug has been fixed, and the installer can now format DASDs when performing a text-based installation. (BZ# 1259437 ) Initial Setup now displays the correct window title The Initial Setup tool, which is automatically displayed after the first post-installation reboot and which allows you to configure settings like network connections and to register your system, previously displayed an incorrect string __main__.py in the window title. This update fixes the bug.
(BZ#1267203) Installation no longer fails when using %packages --nobase --nocore in a Kickstart file Previously, using a Kickstart file which contained the %packages section and specified the --nobase and --nocore options at the same time caused the installation to fail with a traceback message due to a missing yum-langpacks package. The package is now available, and the described problem no longer occurs. (BZ# 1271766 )
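As a hedged illustration of several fixes above, not a procedure from the release notes (the driver-disc locations, disc label, and grub.cfg output path are placeholder assumptions): multiple inst.dd boot options, the MAKEDEBUG setting honored by grub2-mkconfig, and an initrd rebuild could look like this.
# Boot command line with driver update images from two hypothetical sources
#   inst.dd=https://server.example.com/drivers/dd1.iso inst.dd=hd:LABEL=DRIVERS
# Ask grub2-mkconfig to replicate grubby debug entries (path assumes a BIOS GRUB2 layout)
echo "MAKEDEBUG=yes" >> /etc/sysconfig/kernel
grub2-mkconfig -o /boot/grub2/grub.cfg
# Rebuild the initial RAM disk, for example to pick up the LABEL=/ escaping fix
dracut -f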
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.3_release_notes/bug_fixes_installation_and_booting
Introduction To System Administration
Introduction To System Administration Red Hat Enterprise Linux 4 For Red Hat Enterprise Linux 4 Edition 2
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/index
12.2. File System-Specific Information for fsck
12.2. File System-Specific Information for fsck 12.2.1. ext2, ext3, and ext4 All of these file systems use the e2fsck binary to perform file system checks and repairs. The file names fsck.ext2 , fsck.ext3 , and fsck.ext4 are hardlinks to this same binary. These binaries are run automatically at boot time and their behavior differs based on the file system being checked and the state of the file system. A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and for ext4 file systems without a journal. For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the binary exits. This is the default action as journal replay ensures a consistent file system after a crash. If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a full check after replaying the journal (if present). e2fsck may ask for user input during the run if the -p option is not specified. The -p option tells e2fsck to automatically do all repairs that may be done safely. If user intervention is required, e2fsck indicates the unfixed problem in its output and reflects this status in the exit code. Commonly used e2fsck run-time options include: -n No-modify mode. Check-only operation. -b superblock Specify block number of an alternate superblock if the primary one is damaged. -f Force full check even if the superblock has no recorded errors. -j journal-dev Specify the external journal device, if any. -p Automatically repair or "preen" the file system with no user input. -y Assume an answer of "yes" to all questions. All options for e2fsck are specified in the e2fsck(8) manual page. The following five basic phases are performed by e2fsck while running: Inode, block, and size checks. Directory structure checks. Directory connectivity checks. Reference count checks. Group summary info checks. The e2image(8) utility can be used to create a metadata image prior to repair for diagnostic or testing purposes. The -r option should be used for testing purposes in order to create a sparse file of the same size as the file system itself. e2fsck can then operate directly on the resulting file. The -Q option should be specified if the image is to be archived or provided for diagnostics. This creates a more compact file format suitable for transfer. 12.2.2. XFS No repair is performed automatically at boot time. To initiate a file system check or repair, use the xfs_repair tool. Note Although an fsck.xfs binary is present in the xfsprogs package, this is present only to satisfy initscripts that look for an fsck.<file system> binary at boot time. fsck.xfs immediately exits with an exit code of 0. Older xfsprogs packages contain an xfs_check tool. This tool is very slow and does not scale well for large file systems. As such, it has been deprecated in favor of xfs_repair -n . A clean log on a file system is required for xfs_repair to operate. If the file system was not cleanly unmounted, it should be mounted and unmounted prior to using xfs_repair . If the log is corrupt and cannot be replayed, the -L option may be used to zero the log. Important The -L option must only be used if the log cannot be replayed. The option discards all metadata updates in the log and can result in further inconsistencies. It is possible to run xfs_repair in a dry run, check-only mode by using the -n option.
No changes will be made to the file system when this option is specified. xfs_repair takes very few options. Commonly used options include: -n No-modify mode. Check-only operation. -L Zero metadata log. Use only if the log cannot be replayed with mount. -m maxmem Limit memory used during run to maxmem MB. 0 can be specified to obtain a rough estimate of the minimum memory required. -l logdev Specify the external log device, if present. All options for xfs_repair are specified in the xfs_repair(8) manual page. The following eight basic phases are performed by xfs_repair while running: Inode and inode blockmap (addressing) checks. Inode allocation map checks. Inode size checks. Directory checks. Pathname checks. Link count checks. Freemap checks. Super block checks. For more information, see the xfs_repair(8) manual page. xfs_repair is not interactive. All operations are performed automatically with no input from the user. If it is desired to create a metadata image prior to repair for diagnostic or testing purposes, the xfs_metadump(8) and xfs_mdrestore(8) utilities may be used. 12.2.3. Btrfs Note Btrfs is available as a Technology Preview feature in Red Hat Enterprise Linux 7 but has been deprecated since the Red Hat Enterprise Linux 7.4 release. It will be removed in a future major release of Red Hat Enterprise Linux. For more information, see Deprecated Functionality in the Red Hat Enterprise Linux 7.4 Release Notes. The btrfsck tool is used to check and repair btrfs file systems. This tool is still in early development and may not detect or repair all types of file system corruption. By default, btrfsck does not make changes to the file system; that is, it runs in check-only mode by default. If repairs are desired, the --repair option must be specified. The following three basic phases are performed by btrfsck while running: Extent checks. File system root checks. Root reference count checks. The btrfs-image(8) utility can be used to create a metadata image prior to repair for diagnostic or testing purposes.
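The following command sketch ties the options above together; the device names and image file name are placeholders, not values from this guide, and all commands assume the file systems are unmounted.
# ext4: read-only forced check, then a forced check with automatic safe repairs
e2fsck -fn /dev/sdb1
e2fsck -fp /dev/sdb1
# Capture a sparse metadata image for offline diagnosis; e2fsck can be pointed at this file
e2image -r /dev/sdb1 sdb1-metadata.img
# XFS: dry run first, then repair with memory capped at 2048 MB
xfs_repair -n /dev/sdc1
xfs_repair -m 2048 /dev/sdc1
# Btrfs (Technology Preview): check-only by default, explicit repair must be requested
btrfsck /dev/sdd1
btrfsck --repair /dev/sdd1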
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/fsck-fs-specific
Getting Started Guide
Getting Started Guide Red Hat Single Sign-On 7.6 For Use with Red Hat Single Sign-On 7.6 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.6/html/getting_started_guide/index
Chapter 1. Security Overview
Chapter 1. Security Overview Because of the increased reliance on powerful, networked computers to help run businesses and keep track of our personal information, industries have been formed around the practice of network and computer security. Enterprises have solicited the knowledge and skills of security experts to properly audit systems and tailor solutions to fit the operating requirements of the organization. Because most organizations are dynamic in nature, with workers accessing company IT resources locally and remotely, the need for secure computing environments has become more pronounced. Unfortunately, most organizations (as well as individual users) regard security as an afterthought, a process that is overlooked in favor of increased power, productivity, and budgetary concerns. Proper security implementation is often enacted postmortem - after an unauthorized intrusion has already occurred. Security experts agree that the right measures taken prior to connecting a site to an untrusted network, such as the Internet, are an effective means of thwarting most attempts at intrusion. 1.1. What is Computer Security? Computer security is a general term that covers a wide area of computing and information processing. Industries that depend on computer systems and networks to conduct daily business transactions and access crucial information regard their data as an important part of their overall assets. Several terms and metrics have entered our daily business vocabulary, such as total cost of ownership (TCO) and quality of service (QoS). In these metrics, industries calculate aspects such as data integrity and high-availability as part of their planning and process management costs. In some industries, such as electronic commerce, the availability and trustworthiness of data can be the difference between success and failure. 1.1.1. How did Computer Security Come about? Many readers may recall the movie "Wargames," starring Matthew Broderick in his portrayal of a high school student who breaks into the United States Department of Defense (DoD) supercomputer and inadvertently causes a nuclear war threat. In this movie, Broderick uses his modem to dial into the DoD computer (called WOPR) and plays games with the artificially intelligent software controlling all of the nuclear missile silos. The movie was released during the "cold war" between the former Soviet Union and the United States and was considered a success in its theatrical release in 1983. The popularity of the movie inspired many individuals and groups to begin implementing some of the methods that the young protagonist used to crack restricted systems, including what is known as war dialing - a method of searching phone numbers for analog modem connections in a defined area code and phone prefix combination. More than 10 years later, after a four-year, multi-jurisdictional pursuit involving the Federal Bureau of Investigation (FBI) and the aid of computer professionals across the country, infamous computer cracker Kevin Mitnick was arrested and charged with 25 counts of computer and access device fraud that resulted in an estimated US$80 million in losses of intellectual property and source code from Nokia, NEC, Sun Microsystems, Novell, Fujitsu, and Motorola. At the time, the FBI considered it to be the largest computer-related criminal offense in U.S. history. He was convicted and sentenced to a combined 68 months in prison for his crimes, of which he served 60 months before his parole on January 21, 2000.
Mitnick was further barred from using computers or doing any computer-related consulting until 2003. Investigators say that Mitnick was an expert in social engineering - using human beings to gain access to passwords and systems using falsified credentials. Information security has evolved over the years due to the increasing reliance on public networks to disclose personal, financial, and other restricted information. There are numerous instances such as the Mitnick and the Vladimir Levin cases (refer to Section 1.1.2, "Computer Security Timeline" for more information) that prompted organizations across all industries to rethink the way they handle information transmission and disclosure. The popularity of the Internet was one of the most important developments that prompted an intensified effort in data security. An ever-growing number of people are using their personal computers to gain access to the resources that the Internet has to offer. From research and information retrieval to electronic mail and commerce transactions, the Internet has been regarded as one of the most important developments of the 20th century. The Internet and its earlier protocols, however, were developed as a trust-based system. That is, the Internet Protocol was not designed to be secure in itself. There are no approved security standards built into the TCP/IP communications stack, leaving it open to potentially malicious users and processes across the network. Modern developments have made Internet communication more secure, but there are still several incidents that gain national attention and alert us to the fact that nothing is completely safe. 1.1.2. Computer Security Timeline Several key events contributed to the birth and rise of computer security. The following timeline lists some of the more important events that brought attention to computer and information security and its importance today. 1.1.2.1. The 1960s Students at the Massachusetts Institute of Technology (MIT) form the Tech Model Railroad Club (TMRC) and begin exploring and programming the school's PDP-1 mainframe computer system. The group eventually coined the term "hacker" in the context it is known today. The DoD creates the Advanced Research Projects Agency Network (ARPANet), which gains popularity in research and academic circles as a conduit for the electronic exchange of data and information. This paves the way for the creation of the carrier network known today as the Internet. Ken Thompson develops the UNIX operating system, widely hailed as the most "hacker-friendly" OS because of its accessible developer tools and compilers, and its supportive user community. Around the same time, Dennis Ritchie develops the C programming language, arguably the most popular hacking language in computer history. 1.1.2.2. The 1970s Bolt, Beranek, and Newman, a computing research and development contractor for government and industry, develops the Telnet protocol, a public extension of the ARPANet. This opens doors for the public use of data networks which were once restricted to government contractors and academic researchers. Telnet, though, is also arguably the most insecure protocol for public networks, according to several security researchers. Steve Jobs and Steve Wozniak found Apple Computer and begin marketing the Personal Computer (PC). The PC is the springboard for several malicious users to learn the craft of cracking systems remotely using common PC communication hardware such as analog modems and war dialers.
Jim Ellis and Tom Truscott create USENET, a bulletin-board-style system for electronic communication between disparate users. USENET quickly becomes one of the most popular forums for the exchange of ideas in computing, networking, and, of course, cracking. 1.1.2.3. The 1980s IBM develops and markets PCs based on the Intel 8086 microprocessor, a relatively inexpensive architecture that brought computing from the office to the home. This serves to commodify the PC as a common and accessible tool that was fairly powerful and easy to use, aiding in the proliferation of such hardware in the homes and offices of malicious users. The Transmission Control Protocol, developed by Vint Cerf, is split into two separate parts. The Internet Protocol is born from this split, and the combined TCP/IP protocol becomes the standard for all Internet communication today. Based on developments in the area of phreaking , or exploring and hacking the telephone system, the magazine 2600: The Hacker Quarterly is created and begins discussion on topics such as cracking computers and computer networks to a broad audience. The 414 gang (named after the area code where they lived and hacked from) are raided by authorities after a nine-day cracking spree where they break into systems from such top-secret locations as the Los Alamos National Laboratory, a nuclear weapons research facility. The Legion of Doom and the Chaos Computer Club are two pioneering cracker groups that begin exploiting vulnerabilities in computers and electronic data networks. The Computer Fraud and Abuse Act of 1986 is voted into law by Congress based on the exploits of Ian Murphy, also known as Captain Zap, who broke into military computers, stole information from company merchandise order databases, and used restricted government telephone switchboards to make phone calls. Based on the Computer Fraud and Abuse Act, the courts convict Robert Morris, a graduate student, for unleashing the Morris Worm to over 6,000 vulnerable computers connected to the Internet. The most prominent case ruled under this act was Herbert Zinn, a high-school dropout who cracked and misused systems belonging to AT&T and the DoD. Based on concerns that the Morris Worm ordeal could be replicated, the Computer Emergency Response Team (CERT) is created to alert computer users of network security issues. Clifford Stoll writes The Cuckoo's Egg , Stoll's account of investigating crackers who exploit his system. 1.1.2.4. The 1990s ARPANet is decommissioned. Traffic from that network is transferred to the Internet. Linus Torvalds develops the Linux kernel for use with the GNU operating system; the widespread development and adoption of Linux is largely due to the collaboration of users and developers communicating via the Internet. Because of its roots in UNIX, Linux is most popular among hackers and administrators who found it quite useful for building secure alternatives to legacy servers running proprietary (closed-source) operating systems. The graphical Web browser is created and sparks an exponentially higher demand for public Internet access. Vladimir Levin and accomplices illegally transfer US$10 million in funds to several accounts by cracking into the CitiBank central database. Levin is arrested by Interpol and almost all of the money is recovered.
Possibly the most heralded of all crackers is Kevin Mitnick, who hacked into several corporate systems, stealing everything from personal information of celebrities to over 20,000 credit card numbers and source code for proprietary software. He is arrested and convicted of wire fraud charges and serves 5 years in prison. Kevin Poulsen and an unknown accomplice rig radio station phone systems to win cars and cash prizes. He is convicted for computer and wire fraud and is sentenced to 5 years in prison. The stories of cracking and phreaking become legend, and several prospective crackers convene at the annual DefCon convention to celebrate cracking and exchange ideas between peers. A 19-year-old Israeli student is arrested and convicted for coordinating numerous break-ins to US government systems during the Persian-Gulf conflict. Military officials call it "the most organized and systematic attack" on government systems in US history. US Attorney General Janet Reno, in response to escalated security breaches in government systems, establishes the National Infrastructure Protection Center. British communications satellites are taken over and ransomed by unknown offenders. The British government eventually seizes control of the satellites. 1.1.3. Security Today In February of 2000, a Distributed Denial of Service (DDoS) attack was unleashed on several of the most heavily-trafficked sites on the Internet. The attack rendered yahoo.com, cnn.com, amazon.com, fbi.gov, and several other sites completely unreachable to normal users, as it tied up routers for several hours with large-byte ICMP packet transfers, also called a ping flood . The attack was brought on by unknown assailants using specially created, widely available programs that scanned vulnerable network servers, installed client applications called trojans on the servers, and timed an attack with every infected server flooding the victim sites and rendering them unavailable. Many blame the attack on fundamental flaws in the way routers and the protocols used are structured to accept all incoming data, no matter where or for what purpose the packets are sent. This brings us to the new millennium, a time where an estimated 945 million people use or have used the Internet worldwide (Computer Industry Almanac, 2004). At the same time: On any given day, there are approximately 225 major incidents of security breaches reported to the CERT Coordination Center at Carnegie Mellon University. [1] In 2003, the number of CERT-reported incidents jumped to 137,529 from 82,094 in 2002 and from 52,658 in 2001. [2] The worldwide economic impact of the three most dangerous Internet viruses of the last three years was estimated at US$13.2 billion. [3] Computer security has become a quantifiable and justifiable expense for all IT budgets. Organizations that require data integrity and high availability elicit the skills of system administrators, developers, and engineers to ensure 24x7 reliability of their systems, services, and information. Falling victim to malicious users, processes, or coordinated attacks is a direct threat to the success of the organization. Unfortunately, system and network security can be a difficult proposition, requiring an intricate knowledge of how an organization regards, uses, manipulates, and transmits its information. Understanding the way an organization (and the people that make up the organization) conducts business is paramount to implementing a proper security plan. 1.1.4.
Standardizing Security Enterprises in every industry rely on regulations and rules that are set by standards-making bodies such as the American Medical Association (AMA) or the Institute of Electrical and Electronics Engineers (IEEE). The same ideals hold true for information security. Many security consultants and vendors agree upon the standard security model known as CIA, or Confidentiality, Integrity, and Availability . This three-tiered model is a generally accepted component of assessing risks to sensitive information and establishing security policy. The following describes the CIA model in further detail: Confidentiality - Sensitive information must be available only to a set of pre-defined individuals. Unauthorized transmission and usage of information should be restricted. For example, confidentiality of information ensures that a customer's personal or financial information is not obtained by an unauthorized individual for malicious purposes such as identity theft or credit fraud. Integrity - Information should not be altered in ways that render it incomplete or incorrect. Unauthorized users should be restricted from the ability to modify or destroy sensitive information. Availability - Information should be accessible to authorized users any time that it is needed. Availability is a warranty that information can be obtained with an agreed-upon frequency and timeliness. This is often measured in terms of percentages and agreed to formally in Service Level Agreements (SLAs) used by network service providers and their enterprise clients. [1] Source: http://www.cert.org [2] Source: http://www.cert.org/stats/ [3] Source: http://www.newsfactor.com/perl/story/16407.html
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/ch-sgs-ov
17.2. Requesting Certificates through the Console
17.2. Requesting Certificates through the Console The Certificate Setup Wizard for the CA, OCSP, KRA, and TKS automates the certificate enrollment process for subsystem certificates. The Console can create, submit, and install certificate requests and certificates for any of the certificates used by that subsystem. These certificates can be a server certificate or subsystem-specific certificate, such as a CA signing certificate or KRA transport certificate. 17.2.1. Requesting Signing Certificates Note It is important that the user generate and submit the client request from the computer that will be used later to access the subsystem because part of the request process generates a private key on the local machine. If location independence is required, use a hardware token, such as a smart card, to store the key pair and the certificate. Open the subsystem console. For example: In the Configuration tab, select System Keys and Certificates in the navigation tree. In the right panel, select the Local Certificates tab. Click Add/Renew . Select the Request a certificate radio button. Choose the signing certificate type to request. Select which type of CA will sign the request, either a root CA or a subordinate CA. Set the key-pair information and set the location to generate the keys (the token), which can be either the internal security database directory or one of the listed external tokens. To create a new certificate, you must create a new key pair. Using an existing key pair will simply renew an existing certificate. Select the message digest algorithm. Give the subject name. Either enter values for individual DN attributes to build the subject DN or enter the full string. The certificate request forms support all UTF-8 characters for the common name, organizational unit, and requester name fields. This support does not include supporting internationalized domain names. Specify the start and end dates of the validity period for the certificate and the time at which the validity period will start and end on those dates. The default validity period is five years. Set the standard extensions for the certificate. The required extensions are chosen by default. To change the default choices, read the guidelines explained in Appendix B, Defaults, Constraints, and Extensions for Certificates and CRLs . Note Certificate extensions are required to set up a CA hierarchy. Subordinate CAs must have certificates that include the extension identifying them as either a subordinate SSL CA (which allows them to issue certificates for SSL) or a subordinate email CA (which allows them to issue certificates for secure email). Disabling certificate extensions means that CA hierarchies cannot be set up. Basic Constraints. The associated fields are CA setting and a numeric setting for the certification path length. Extended Key Usage. Authority Key Identifier. Subject Key Identifier. Key Usage. The digital signature (bit 0), non-repudiation (bit 1), key certificate sign (bit 5), and CRL sign (bit 6) bits are set by default. The extension is marked critical as recommended by the PKIX standard and RFC 2459. See RFC 2459 for a description of the Key Usage extension. Base-64 SEQUENCE of extensions. This is for custom extensions. Paste the extension in MIME 64 DER-encoded format into the text field. To add multiple extensions, use the ExtJoiner program. For information on using the tools, see the Certificate System Command-Line Tools Guide . 
The wizard generates the key pairs and displays the certificate signing request. The request is in base-64 encoded PKCS #10 format and is bounded by the marker lines -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- . For example: The wizard also copies the certificate request to a text file it creates in the configuration directory, which is located in /var/lib/pki/ instance_name / subsystem_type /conf/ . The name of the text file depends on the type of certificate requested. The possible text files are listed in Table 17.1, "Files Created for Certificate Signing Requests" . Table 17.1. Files Created for Certificate Signing Requests Filename Certificate Signing Request cacsr.txt CA signing certificate ocspcsr.txt Certificate Manager OCSP signing certificate ocspcsr.txt OCSP signing certificate Do not modify the certificate request before sending it to the CA. The request can either be submitted automatically through the wizard or copied to the clipboard and manually submitted to the CA through its end-entities page. Note The wizard's auto-submission feature can submit requests to a remote Certificate Manager only. It cannot be used for submitting the request to a third-party CA. To submit it to a third-party CA, use the certificate request file. Retrieve the certificate. Open the Certificate Manager end-entities page. Click the Retrieval tab. Fill in the request ID number that was created when the certificate request was submitted, and click Submit . The page shows the status of the certificate request. If the status is complete , then there is a link to the certificate. Click the Issued certificate link. The new certificate information is shown in pretty-print format, in base-64 encoded format, and in PKCS #7 format. Copy the base-64 encoded certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines, to a text file. Save the text file, and use it to store a copy of the certificate in a subsystem's internal database. See Section 15.3.2.1, "Creating Users" . Note pkiconsole is being deprecated. 17.2.2. Requesting Other Certificates Note It is important that the user generate and submit the client request from the computer that will be used later to access the subsystem because part of the request process generates a private key on the local machine. If location independence is required, use a hardware token, such as a smart card, to store the key pair and the certificate. Open the subsystem console. For example: In the Configuration tab, select System Keys and Certificates in the navigation tree. In the right panel, select the Local Certificates tab. Click Add/Renew . Select the Request a certificate radio button. Choose the certificate type to request. The types of certificates that can be requested vary depending on the subsystem. Note If selecting to create an "other" certificate, the Certificate Type field becomes active. Fill in the type of certificate to create, either caCrlSigning for the CRL signing certificate, caSignedLogCert for an audit log signing certificate, or client for an SSL client certificate. Select which type of CA will sign the request. The options are to use the local CA signing certificate or to create a request to submit to another CA. Set the key-pair information and set the location to generate the keys (the token), which can be either the internal security database directory or one of the listed external tokens. To create a new certificate, you must create a new key pair.
Using an existing key pair will simply renew an existing certificate. Give the subject name. Either enter values for individual DN attributes to build the subject DN or enter the full string. Note For an SSL server certificate, the common name must be the fully-qualified host name of the Certificate System in the format machine_name.domain.domain . The CA certificate request forms support all UTF-8 characters for the common name, organizational unit, and requester name fields. This support does not include supporting internationalized domain names. Specify the start and end dates of the validity period for the certificate and the time at which the validity period will start and end on those dates. The default validity period is five years. Set the standard extensions for the certificate. The required extensions are chosen by default. To change the default choices, read the guidelines explained in Appendix B, Defaults, Constraints, and Extensions for Certificates and CRLs . Extended Key Usage. Authority Key Identifier. Subject Key Identifier. Key Usage. The digital signature (bit 0), non-repudiation (bit 1), key certificate sign (bit 5), and CRL sign (bit 6) bits are set by default. The extension is marked critical as recommended by the PKIX standard and RFC 2459. See RFC 2459 for a description of the Key Usage extension. Base-64 SEQUENCE of extensions. This is for custom extensions. Paste the extension in MIME 64 DER-encoded format into the text field. To add multiple extensions, use the ExtJoiner program. For information on using the tools, see the Certificate System Command-Line Tools Guide . The wizard generates the key pairs and displays the certificate signing request. The request is in base-64 encoded PKCS #10 format and is bounded by the marker lines -----BEGIN NEW CERTIFICATE REQUEST----- and -----END NEW CERTIFICATE REQUEST----- . For example: The wizard also copies the certificate request to a text file it creates in the configuration directory, which is located in /var/lib/pki/ instance_name / subsystem_type /conf/ . The name of the text file depends on the type of certificate requested. The possible text files are listed in Table 17.2, "Files Created for Certificate Signing Requests" . Table 17.2. Files Created for Certificate Signing Requests Filename Certificate Signing Request kracsr.txt KRA transport certificate sslcsr.txt SSL server certificate othercsr.txt Other certificates, such as Certificate Manager CRL signing certificate or SSL client certificate Do not modify the certificate request before sending it to the CA. The request can either be submitted automatically through the wizard or copied to the clipboard and manually submitted to the CA through its end-entities page. Note The wizard's auto-submission feature can submit requests to a remote Certificate Manager only. It cannot be used for submitting the request to a third-party CA. To submit the request to a third-party CA, use one of the certificate request files. Retrieve the certificate. Open the Certificate Manager end-entities page. Click the Retrieval tab. Fill in the request ID number that was created when the certificate request was submitted, and click Submit . The page shows the status of the certificate request. If the status is complete , then there is a link to the certificate. Click the Issued certificate link. The new certificate information is shown in pretty-print format, in base-64 encoded format, and in PKCS #7 format. 
Copy the base-64 encoded certificate, including the -----BEGIN CERTIFICATE----- and -----END CERTIFICATE----- marker lines, to a text file. Save the text file, and use it to store a copy of the certificate in a subsystem's internal database. See Section 15.3.2.1, "Creating Users" .
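A brief sketch of the command-line side of this procedure; the host name, port, and instance name are placeholders, and the wizard steps themselves remain graphical:
# Open the CA subsystem console described above
pkiconsole https://server.example.com:8443/ca
# After the wizard finishes, the generated CSR is also saved under the instance configuration directory
cat /var/lib/pki/instance_name/ca/conf/cacsr.txt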
[ "pkiconsole https://server.example.com:8443/ca", "-----BEGIN NEW CERTIFICATE REQUEST----- MIICJzCCAZCgAwIBAgIBAzANBgkqhkiG9w0BAQQFADBC6SAwHgYDVQQKExdOZXRzY2FwZSBDb21tdW5pY2 F0aW9uczngjhnMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyNzE5MDAwMFoXDTk5MDIyMzE5MDA wMnbjdgngYoxIDAeBgNVBAoTF05ldHNjYXBlIENvbW11bmljYXRpb25zMQ8wDQYDVQQLEwZQZW9wbGUxFz AVBgoJkiaJkIsZAEBEwdzdXByaXlhMRcwFQYDVQQDEw5TdXByaXlhIFNoZXR0eTEjMCEGCSqGSIb3Dbndg JARYUc3Vwcml5Yhvfggsvwryw4y7214vAOBgNVHQ8BAf8EBAMCBLAwFAYJYIZIAYb4QgEBAQHBAQDAgCAM A0GCSqGSIb3DQEBBAUAA4GBAFi9FzyJlLmS+kzsue0kTXawbwamGdYql2w4hIBgdR+jWeLmD4CP4x -----END NEW CERTIFICATE REQUEST-----", "https://server.example.com:8443/ca/ee/ca", "pkiconsole https://server.example.com:8443/ca", "-----BEGIN NEW CERTIFICATE REQUEST----- MIICJzCCAZCgAwIBAgIBAzANBgkqhkiG9w0BAQQFADBC6SAwHgYDVQQKExdOZXRzY2FwZSBDb21tdW5pY2 F0aW9uczngjhnMVQ2VydGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyNzE5MDAwMFoXDTk5MDIyMzE5MDA wMnbjdgngYoxIDAeBgNVBAoTF05ldHNjYXBlIENvbW11bmljYXRpb25zMQ8wDQYDVQQLEwZQZW9wbGUxFz AVBgoJkiaJkIsZAEBEwdzdXByaXlhMRcwFQYDVQQDEw5TdXByaXlhIFNoZXR0eTEjMCEGCSqGSIb3Dbndg JARYUc3Vwcml5Yhvfggsvwryw4y7214vAOBgNVHQ8BAf8EBAMCBLAwFAYJYIZIAYb4QgEBAQHBAQDAgCAM A0GCSqGSIb3DQEBBAUAA4GBAFi9FzyJlLmS+kzsue0kTXawbwamGdYql2w4hIBgdR+jWeLmD4CP4x -----END NEW CERTIFICATE REQUEST-----", "https://server.example.com:8443/ca/ee/ca" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/requesting_a_subsystem_server_or_signing_certificate_through_the_console
7.113. libxml2
7.113. libxml2 7.113.1. RHSA-2015:1419 - Low: libxml2 security and bug fix update Updated libxml2 packages that fix one security issue and one bug are now available for Red Hat Enterprise Linux 6. Red Hat Product Security has rated this update as having Low security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link in the References section. The libxml2 library is a development toolbox providing the implementation of various XML standards. Security Fix CVE-2015-1819 A denial of service flaw was found in the way the libxml2 library parsed certain XML files. An attacker could provide a specially crafted XML file that, when parsed by an application using libxml2, could cause that application to use an excessive amount of memory. This issue was discovered by Florian Weimer of Red Hat Product Security. Users of libxml2 are advised to upgrade to these updated packages, which contain backported patches to correct these issues. The desktop must be restarted (log out, then log back in) for this update to take effect.
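A hedged sketch of applying this erratum on a registered Red Hat Enterprise Linux 6 host; the package list assumes the standard libxml2 subpackages and access to the updated repositories.
# Install the fixed packages, then log out and back in so the desktop picks up the updated library
yum update libxml2 libxml2-python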
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-libxml2
6.16. Improving Uptime with Virtual Machine High Availability
6.16. Improving Uptime with Virtual Machine High Availability 6.16.1. What is High Availability? High availability is recommended for virtual machines running critical workloads. A highly available virtual machine is automatically restarted, either on its original host or another host in the cluster, if its process is interrupted, such as in the following scenarios: A host becomes non-operational due to hardware failure. A host is put into maintenance mode for scheduled downtime. A host becomes unavailable because it has lost communication with an external storage resource. A highly available virtual machine is not restarted if it is shut down cleanly, such as in the following scenarios: The virtual machine is shut down from within the guest. The virtual machine is shut down from the Manager. The host is shut down by an administrator without being put in maintenance mode first. With storage domains V4 or later, virtual machines have the additional capability to acquire a lease on a special volume on the storage, enabling a virtual machine to start on another host even if the original host loses power. The functionality also prevents the virtual machine from being started on two different hosts, which may lead to corruption of the virtual machine disks. With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times. High Availability and Storage I/O Errors If a storage I/O error occurs, the virtual machine is paused. You can define how the host handles highly available virtual machines after the connection with the storage domain is reestablished; they can either be resumed, ungracefully shut down, or remain paused. For more information about these options, see Section A.1.6, "Virtual Machine High Availability Settings Explained" . 6.16.2. High Availability Considerations A highly available host requires a power management device and fencing parameters. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines: Power management must be configured for the hosts running the highly available virtual machines. The host running the highly available virtual machine must be part of a cluster which has other available hosts. The destination host must be running. The source and destination host must have access to the data domain on which the virtual machine resides. The source and destination host must have access to the same virtual networks and VLANs. There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements. There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements. 6.16.3. Configuring a Highly Available Virtual Machine High availability must be configured individually for each virtual machine. Configuring a Highly Available Virtual Machine Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the Highly Available check box to enable high availability for the virtual machine. 
Select the storage domain to hold the virtual machine lease, or select No VM Lease to disable the functionality, from the Target Storage Domain for VM Lease drop-down list. See Section 6.16.1, "What is High Availability?" for more information about virtual machine leases. Important This functionality is only available on storage domains that are V4 or later. Select AUTO_RESUME , LEAVE_PAUSED , or KILL from the Resume Behavior drop-down list. If you defined a virtual machine lease, KILL is the only option available. For more information see Section A.1.6, "Virtual Machine High Availability Settings Explained" . Select Low , Medium , or High from the Priority drop-down list. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated. Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-Improving_Uptime_with_Virtual_Machine_High_Availability
4.6.2. REAL SERVER Subsection
4.6.2. REAL SERVER Subsection Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. It displays the status of the physical server hosts for a particular virtual service. Figure 4.7. The REAL SERVER Subsection Click the ADD button to add a new server. To delete an existing server, select the radio button beside it and click the DELETE button. Click the EDIT button to load the EDIT REAL SERVER panel, as seen in Figure 4.8, "The REAL SERVER Configuration Panel" . Figure 4.8. The REAL SERVER Configuration Panel This panel consists of three entry fields: Name A descriptive name for the real server. Note This name is not the host name for the machine, so make it descriptive and easily identifiable. Address The real server's IP address. Since the listening port is already specified for the associated virtual server, do not add a port number. Weight An integer value indicating this host's capacity relative to that of other hosts in the pool. The value can be arbitrary, but treat it as a ratio in relation to other real servers in the pool. For more on server weight, see Section 1.3.2, "Server Weight and Scheduling" . Warning Remember to click the ACCEPT button after making any changes in this panel to make sure you do not lose any changes when selecting a new panel.
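For orientation only, the fields above are written by the Piranha Configuration Tool to /etc/sysconfig/ha/lvs.cf ; the excerpt below is a hedged sketch with placeholder names and addresses, not output copied from the tool, and the exact keys written by your version may differ.
# Hypothetical excerpt of /etc/sysconfig/ha/lvs.cf showing one real server inside a virtual service
virtual web_vip {
     address = 192.168.26.10 eth0:1
     port = 80
     server web1 {
         address = 192.168.26.11
         active = 1
         weight = 1
     }
}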
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s2-piranha-virtservs-rs-vsa
Chapter 6. Traffic splitting
Chapter 6. Traffic splitting 6.1. Traffic splitting overview In a Knative application, traffic can be managed by creating a traffic split. A traffic split is configured as part of a route, which is managed by a Knative service. Configuring a route allows requests to be sent to different revisions of a service. This routing is determined by the traffic spec of the Service object. A traffic spec declaration consists of one or more revisions, each responsible for handling a portion of the overall traffic. The percentages of traffic routed to each revision must add up to 100%, which is ensured by a Knative validation. The revisions specified in a traffic spec can either be a fixed, named revision, or can point to the "latest" revision, which tracks the head of the list of all revisions for the service. The "latest" revision is a type of floating reference that updates if a new revision is created. Each revision can have a tag attached that creates an additional access URL for that revision. The traffic spec can be modified by: Editing the YAML of a Service object directly. Using the Knative ( kn ) CLI --traffic flag. Using the OpenShift Container Platform web console. When you create a Knative service, it does not have any default traffic spec settings. 6.2. Traffic spec examples The following example shows a traffic spec where 100% of traffic is routed to the latest revision of the service. Under status , you can see the name of the latest revision that latestRevision resolves to: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - latestRevision: true percent: 100 status: ... traffic: - percent: 100 revisionName: example-service The following example shows a traffic spec where 100% of traffic is routed to the revision tagged as current , and the name of that revision is specified as example-service . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0 The following example shows how the list of revisions in the traffic spec can be extended so that traffic is split between multiple revisions. This example sends 50% of traffic to the revision tagged as current , and 50% of traffic to the revision tagged as candidate . The revision tagged as latest is kept available, even though no traffic is routed to it: apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: ... traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0 6.3. Traffic splitting using the Knative CLI Using the Knative ( kn ) CLI to create traffic splits provides a more streamlined and intuitive user interface over modifying YAML files directly. You can use the kn service update command to split traffic between revisions of a service. 6.3.1. Creating a traffic split by using the Knative CLI Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have installed the Knative ( kn ) CLI. You have created a Knative service. 
Procedure Specify the revision of your service and what percentage of traffic you want to route to it by using the --traffic tag with a standard kn service update command: Example command USD kn service update <service_name> --traffic <revision>=<percentage> Where: <service_name> is the name of the Knative service that you are configuring traffic routing for. <revision> is the revision that you want to configure to receive a percentage of traffic. You can either specify the name of the revision, or a tag that you assigned to the revision by using the --tag flag. <percentage> is the percentage of traffic that you want to send to the specified revision. Optional: The --traffic flag can be specified multiple times in one command. For example, if you have a revision tagged as @latest and a revision named stable , you can specify the percentage of traffic that you want to split to each revision as follows: Example command USD kn service update showcase --traffic @latest=20,stable=80 If you have multiple revisions and do not specify the percentage of traffic that should be split to the last revision, the --traffic flag can calculate this automatically. For example, if you have a third revision named example , and you use the following command: Example command USD kn service update showcase --traffic @latest=10,stable=60 The remaining 30% of traffic is split to the example revision, even though it was not specified. 6.4. CLI flags for traffic splitting The Knative ( kn ) CLI supports traffic operations on the traffic block of a service as part of the kn service update command. 6.4.1. Knative CLI traffic splitting flags The following table displays a summary of traffic splitting flags, value formats, and the operation the flag performs. The Repetition column denotes whether repeating the particular value of flag is allowed in a kn service update command. Flag Value(s) Operation Repetition --traffic RevisionName=Percent Gives Percent traffic to RevisionName Yes --traffic Tag=Percent Gives Percent traffic to the revision having Tag Yes --traffic @latest=Percent Gives Percent traffic to the latest ready revision No --tag RevisionName=Tag Gives Tag to RevisionName Yes --tag @latest=Tag Gives Tag to the latest ready revision No --untag Tag Removes Tag from revision Yes 6.4.1.1. Multiple flags and order precedence All traffic-related flags can be specified using a single kn service update command. kn defines the precedence of these flags. The order of the flags specified when using the command is not taken into account. The precedence of the flags as they are evaluated by kn are: --untag : All the referenced revisions with this flag are removed from the traffic block. --tag : Revisions are tagged as specified in the traffic block. --traffic : The referenced revisions are assigned a portion of the traffic split. You can add tags to revisions and then split traffic according to the tags you have set. 6.4.1.2. Custom URLs for revisions Assigning a --tag flag to a service by using the kn service update command creates a custom URL for the revision that is created when you update the service. The custom URL follows the pattern https://<tag>-<service_name>-<namespace>.<domain> or http://<tag>-<service_name>-<namespace>.<domain> . The --tag and --untag flags use the following syntax: Require one value. Denote a unique tag in the traffic block of the service. Can be specified multiple times in one command. 6.4.1.2.1. 
Example: Assign a tag to a revision The following example assigns the tag example-tag to the latest ready revision of the service: USD kn service update <service_name> --tag @latest=example-tag 6.4.1.2.2. Example: Remove a tag from a revision You can remove a tag, and with it the custom URL, by using the --untag flag. Note If a revision has its tags removed, and it is assigned 0% of the traffic, the revision is removed from the traffic block entirely. The following command removes the tag example-tag from the service: USD kn service update <service_name> --untag example-tag 6.5. Splitting traffic between revisions After you create a serverless application, the application is displayed in the Topology view of the Developer perspective in the OpenShift Container Platform web console. The application revision is represented by the node, and the Knative service is indicated by a quadrilateral around the node. Any new change in the code or the service configuration creates a new revision, which is a snapshot of the code at a given time. For a service, you can manage the traffic between the revisions of the service by splitting and routing it to the different revisions as required. 6.5.1. Managing traffic between revisions by using the OpenShift Container Platform web console Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on your cluster. You have logged in to the OpenShift Container Platform web console. Procedure To split traffic between multiple revisions of an application in the Topology view: Click the Knative service to see its overview in the side panel. Click the Resources tab to see a list of Revisions and Routes for the service. Figure 6.1. Serverless application Click the service, indicated by the S icon at the top of the side panel, to see an overview of the service details. Click the YAML tab and modify the service configuration in the YAML editor, and click Save . For example, change the timeoutSeconds value from 300 to 301 . This change in the configuration triggers a new revision. In the Topology view, the latest revision is displayed and the Resources tab for the service now displays the two revisions. In the Resources tab, click Set Traffic Distribution to see the traffic distribution dialog box: Add the split traffic percentage portion for the two revisions in the Splits field. Add tags to create custom URLs for the two revisions. Click Save to see two nodes representing the two revisions in the Topology view. Figure 6.2. Serverless application revisions 6.6. Rerouting traffic using blue-green strategy You can safely reroute traffic from a production version of an app to a new version by using a blue-green deployment strategy . 6.6.1. Routing and managing traffic by using a blue-green deployment strategy Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. Install the OpenShift CLI ( oc ). Procedure Create and deploy an app as a Knative service. Find the name of the first revision that was created when you deployed the service, by viewing the output from the following command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' Example command USD oc get ksvc showcase -o=jsonpath='{.status.latestCreatedRevisionName}' Example output showcase-00001 Add the following YAML to the service spec to send inbound traffic to the revision: ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision ...
Verify that you can view your app at the URL output you get from running the following command: USD oc get ksvc <service_name> Deploy a second revision of your app by modifying at least one field in the template spec of the service and redeploying it. For example, you can modify the image of the service, or an env environment variable. You can redeploy the service by applying the service YAML file, or by using the kn service update command if you have installed the Knative ( kn ) CLI. Find the name of the second, latest revision that was created when you redeployed the service, by running the command: USD oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}' At this point, both the first and second revisions of the service are deployed and running. Update your existing service to create a new, test endpoint for the second revision, while still sending all other traffic to the first revision: Example of updated service spec with test endpoint ... spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route ... After you redeploy this service by reapplying the YAML resource, the second revision of the app is now staged. No traffic is routed to the second revision at the main URL, and Knative creates a new service named v2 for testing the newly deployed revision. Get the URL of the new service for the second revision, by running the following command: USD oc get ksvc <service_name> --output jsonpath="{.status.traffic[*].url}" You can use this URL to validate that the new version of the app is behaving as expected before you route any traffic to it. Update your existing service again, so that 50% of traffic is sent to the first revision, and 50% is sent to the second revision: Example of updated service spec splitting traffic 50/50 between revisions ... spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2 ... When you are ready to route all traffic to the new version of the app, update the service again to send 100% of traffic to the second revision: Example of updated service spec sending all traffic to the second revision ... spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2 ... Tip You can remove the first revision instead of setting it to 0% of traffic if you do not plan to roll back the revision. Non-routeable revision objects are then garbage-collected. Visit the URL of the first revision to verify that no more traffic is being sent to the old version of the app.
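Note For reference only, the following sketch consolidates the 50/50 stage of the blue-green procedure above into a single Service manifest. The service name showcase , the second revision name showcase-00002 , and the container image are assumptions for illustration; substitute the names reported by oc get ksvc in your cluster.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: showcase
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: quay.io/example/showcase:latest   # hypothetical image reference
  traffic:
    - revisionName: showcase-00001
      percent: 50
    - revisionName: showcase-00002               # assumed name of the second revision
      percent: 50
      tag: v2                                    # keeps the v2 test URL available during the rollout
Applying a manifest like this with oc apply has the same effect as the 50/50 update step shown earlier.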
[ "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - latestRevision: true percent: 100 status: traffic: - percent: 100 revisionName: example-service", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service percent: 100 - tag: latest latestRevision: true percent: 0", "apiVersion: serving.knative.dev/v1 kind: Service metadata: name: example-service namespace: default spec: traffic: - tag: current revisionName: example-service-1 percent: 50 - tag: candidate revisionName: example-service-2 percent: 50 - tag: latest latestRevision: true percent: 0", "kn service update <service_name> --traffic <revision>=<percentage>", "kn service update showcase --traffic @latest=20,stable=80", "kn service update showcase --traffic @latest=10,stable=60", "kn service update <service_name> --tag @latest=example-tag", "kn service update <service_name> --untag example-tag", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "oc get ksvc showcase -o=jsonpath='{.status.latestCreatedRevisionName}'", "showcase-00001", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic goes to this revision", "oc get ksvc <service_name>", "oc get ksvc <service_name> -o=jsonpath='{.status.latestCreatedRevisionName}'", "spec: traffic: - revisionName: <first_revision_name> percent: 100 # All traffic is still being routed to the first revision - revisionName: <second_revision_name> percent: 0 # No traffic is routed to the second revision tag: v2 # A named route", "oc get ksvc <service_name> --output jsonpath=\"{.status.traffic[*].url}\"", "spec: traffic: - revisionName: <first_revision_name> percent: 50 - revisionName: <second_revision_name> percent: 50 tag: v2", "spec: traffic: - revisionName: <first_revision_name> percent: 0 - revisionName: <second_revision_name> percent: 100 tag: v2" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/serving/traffic-splitting
Chapter 3. OpenShift Data Foundation operators
Chapter 3. OpenShift Data Foundation operators Red Hat OpenShift Data Foundation is comprised of the following three Operator Lifecycle Manager (OLM) operator bundles, deploying four operators which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated: OpenShift Data Foundation odf-operator OpenShift Container Storage ocs-operator rook-ceph-operator Multicloud Object Gateway mcg-operator Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state or approaching that state, with minimal administrator intervention. 3.1. OpenShift Data Foundation operator The odf-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators. The odf-operator has the following primary functions: Enforces the configuration and versioning of the other operators that comprise OpenShift Data Foundation. It does this by using two primary mechanisms: operator dependencies and Subscription management. The odf-operator bundle specifies dependencies on other OLM operators to make sure they are always installed at specific versions. The operator itself manages the Subscriptions for all other operators to make sure the desired versions of those operators are available for installation by the OLM. Provides the OpenShift Data Foundation external plugin for the OpenShift Console. Provides an API to integrate storage solutions with the OpenShift Console. 3.1.1. Components The odf-operator has a dependency on the ocs-operator package. It also manages the Subscription of the mcg-operator . In addition, the odf-operator bundle defines a second Deployment for the OpenShift Data Foundation external plugin for the OpenShift Console. This defines an nginx -based Pod that serves the necessary files to register and integrate OpenShift Data Foundation dashboards directly into the OpenShift Container Platform Console. 3.1.2. Design diagram This diagram illustrates how odf-operator is integrated with the OpenShift Container Platform. Figure 3.1. OpenShift Data Foundation Operator 3.1.3. Responsibilites The odf-operator defines the following CRD: StorageSystem The StorageSystem CRD represents an underlying storage system that provides data storage and services for OpenShift Container Platform. It triggers the operator to ensure the existence of a Subscription for a given Kind of storage system. 3.1.4. Resources The ocs-operator creates the following CRs in response to the spec of a given StorageSystem. Operator Lifecycle Manager Resources Creates a Subscription for the operator which defines and reconciles the given StorageSystem's Kind. 3.1.5. Limitation The odf-operator does not provide any data storage or services itself. It exists as an integration and management layer for other storage systems. 3.1.6. High availability High availability is not a primary requirement for the odf-operator Pod similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.1.7. Relevant config files The odf-operator comes with a ConfigMap of variables that can be used to modify the behavior of the operator. 3.1.8. 
Relevant log files To get an understanding of the OpenShift Data Foundation and troubleshoot issues, you can look at the following: Operator Pod logs StorageSystem status Underlying storage system CRD statuses Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageSystem status and events The StorageSystem CR stores the reconciliation details in the status of the CR and has associated events. The spec of the StorageSystem contains the name, namespace, and Kind of the actual storage system's CRD, which the administrator can use to find further information on the status of the storage system. 3.1.9. Lifecycle The odf-operator is required to be present as long as the OpenShift Data Foundation bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Data Foundation CSV. At least one instance of the pod should be in Ready state. The operator operands such as CRDs should not affect the lifecycle of the operator. The creation and deletion of StorageSystems is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate application programming interface (API) calls. 3.2. OpenShift Container Storage operator The ocs-operator can be described as a "meta" operator for OpenShift Data Foundation, that is, an operator meant to influence other operators and serves as a configuration gateway for the features provided by the other operators. It does not directly manage the other operators. The ocs-operator has the following primary functions: Creates Custom Resources (CRs) that trigger the other operators to reconcile against them. Abstracts the Ceph and Multicloud Object Gateway configurations and limits them to known best practices that are validated and supported by Red Hat. Creates and reconciles the resources required to deploy containerized Ceph and NooBaa according to the support policies. 3.2.1. Components The ocs-operator does not have any dependent components. However, the operator has a dependency on the existence of all the custom resource definitions (CRDs) from other operators, which are defined in the ClusterServiceVersion (CSV). 3.2.2. Design diagram This diagram illustrates how OpenShift Container Storage is integrated with the OpenShift Container Platform. Figure 3.2. OpenShift Container Storage Operator 3.2.3. Responsibilities The two ocs-operator CRDs are: OCSInitialization StorageCluster OCSInitialization is a singleton CRD used for encapsulating operations that apply at the operator level. The operator takes care of ensuring that one instance always exists. The CR triggers the following: Performs initialization tasks required for OpenShift Container Storage. If needed, these tasks can be triggered to run again by deleting the OCSInitialization CRD. Ensures that the required Security Context Constraints (SCCs) for OpenShift Container Storage are present. Manages the deployment of the Ceph toolbox Pod, used for performing advanced troubleshooting and recovery operations. The StorageCluster CRD represents the system that provides the full functionality of OpenShift Container Storage. It triggers the operator to ensure the generation and reconciliation of Rook-Ceph and NooBaa CRDs. The ocs-operator algorithmically generates the CephCluster and NooBaa CRDs based on the configuration in the StorageCluster spec. 
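Note To make the shape of that spec concrete, the following is a minimal, illustrative sketch of a StorageCluster CR. The device set name, count, capacity, and backing storage class shown here are assumptions for illustration, not a supported reference configuration; see the OpenShift Data Foundation deployment documentation for validated values.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
    - name: ocs-deviceset              # hypothetical device set name
      count: 1
      replica: 3
      dataPVCTemplate:
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 512Gi           # assumed capacity per device
          storageClassName: gp3-csi    # assumed backing storage class for the platform
          volumeMode: Block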
The operator also creates additional CRs, such as CephBlockPools , Routes , and so on. These resources are required for enabling different features of OpenShift Container Storage. Currently, only one StorageCluster CR per OpenShift Container Platform cluster is supported. 3.2.4. Resources The ocs-operator creates the following CRs in response to the spec of the CRDs it defines. The configuration of some of these resources can be overridden, allowing for changes to the generated spec or not creating them altogether. General resources Events Creates various events when required in response to reconciliation. Persistent Volumes (PVs) PVs are not created directly by the operator. However, the operator keeps track of all the PVs created by the Ceph CSI drivers and ensures that the PVs have appropriate annotations for the supported features. Quickstarts Deploys various Quickstart CRs for the OpenShift Container Platform Console. Rook-Ceph resources CephBlockPool Define the default Ceph block pools. CephFilesystem Define the default Ceph filesystem. CephObjectStore Define the default Ceph object store. Route Define the route for the Ceph object store. StorageClass Define the default storage classes (for example, for CephBlockPool and CephFilesystem ). VolumeSnapshotClass Define the default volume snapshot classes for the corresponding storage classes. Multicloud Object Gateway resources NooBaa Define the default Multicloud Object Gateway system. Monitoring resources Metrics Exporter Service Metrics Exporter Service Monitor PrometheusRules 3.2.5. Limitation The ocs-operator neither deploys nor reconciles the other Pods of OpenShift Data Foundation. The ocs-operator CSV defines the top-level components such as operator Deployments, and the Operator Lifecycle Manager (OLM) reconciles the specified components. 3.2.6. High availability High availability is not a primary requirement for the ocs-operator Pod, similar to most of the other operators. In general, there are no operations that require or benefit from process distribution. OpenShift Container Platform quickly spins up a replacement Pod whenever the current Pod becomes unavailable or is deleted. 3.2.7. Relevant config files The ocs-operator configuration is entirely specified by the CSV and is not modifiable without a custom build of the CSV. 3.2.8. Relevant log files To get an understanding of OpenShift Container Storage and troubleshoot issues, you can look at the following: Operator Pod logs StorageCluster status and events OCSInitialization status Operator Pod logs Each operator provides standard Pod logs that include information about reconciliation and errors encountered. These logs often have information about successful reconciliation which can be filtered out and ignored. StorageCluster status and events The StorageCluster CR stores the reconciliation details in the status of the CR and has associated events. Status contains a section of the expected container images. It shows the container images that it expects to be present in the pods from other operators and the images that it currently detects. This helps to determine whether the OpenShift Container Storage upgrade is complete. OCSInitialization status This status shows whether the initialization tasks are completed successfully. 3.2.9. Lifecycle The ocs-operator is required to be present as long as the OpenShift Container Storage bundle remains installed. This is managed as part of OLM's reconciliation of the OpenShift Container Storage CSV. At least one instance of the pod should be in Ready state.
The operator operands such as CRDs should not affect the lifecycle of the operator. An OCSInitialization CR should always exist. The operator creates one if it does not exist. The creation and deletion of StorageClusters is an operation outside the operator's control and must be initiated by the administrator or automated with the appropriate API calls. 3.3. Rook-Ceph operator Rook-Ceph operator is the Rook operator for Ceph in the OpenShift Data Foundation. Rook enables Ceph storage systems to run on the OpenShift Container Platform. The Rook-Ceph operator is a simple container that automatically bootstraps the storage clusters and monitors the storage daemons to ensure the storage clusters are healthy. 3.3.1. Components The Rook-Ceph operator manages a number of components as part of the OpenShift Data Foundation deployment. Ceph-CSI Driver The operator creates and updates the CSI driver, including a provisioner for each of the two drivers, RADOS block device (RBD) and Ceph filesystem (CephFS) and a volume plugin daemonset for each of the two drivers. Ceph daemons Mons The monitors (mons) provide the core metadata store for Ceph. OSDs The object storage daemons (OSDs) store the data on underlying devices. Mgr The manager (mgr) collects metrics and provides other internal functions for Ceph. RGW The RADOS Gateway (RGW) provides the S3 endpoint to the object store. MDS The metadata server (MDS) provides CephFS shared volumes. 3.3.2. Design diagram The following image illustrates how Ceph Rook integrates with OpenShift Container Platform. Figure 3.3. Rook-Ceph Operator With Ceph running in the OpenShift Container Platform cluster, OpenShift Container Platform applications can mount block devices and filesystems managed by Rook-Ceph, or can use the S3/Swift API for object storage. 3.3.3. Responsibilities The Rook-Ceph operator is a container that bootstraps and monitors the storage cluster. It performs the following functions: Automates the configuration of storage components Starts, monitors, and manages the Ceph monitor pods and Ceph OSD daemons to provide the RADOS storage cluster Initializes the pods and other artifacts to run the services to manage: CRDs for pools Object stores (S3/Swift) Filesystems Monitors the Ceph mons and OSDs to ensure that the storage remains available and healthy Deploys and manages Ceph mons placement while adjusting the mon configuration based on cluster size Watches the desired state changes requested by the API service and applies the changes Initializes the Ceph-CSI drivers that are needed for consuming the storage Automatically configures the Ceph-CSI driver to mount the storage to pods Rook-Ceph Operator architecture The Rook-Ceph operator image includes all required tools to manage the cluster. There is no change to the data path. However, the operator does not expose all Ceph configurations. Many of the Ceph features like placement groups and crush maps are hidden from the users and are provided with a better user experience in terms of physical resources, pools, volumes, filesystems, and buckets. 3.3.4. Resources Rook-Ceph operator adds owner references to all the resources it creates in the openshift-storage namespace. When the cluster is uninstalled, the owner references ensure that the resources are all cleaned up. This includes OpenShift Container Platform resources such as configmaps , secrets , services , deployments , daemonsets , and so on. 
The Rook-Ceph operator watches CRs to configure the settings determined by OpenShift Data Foundation, which includes CephCluster , CephObjectStore , CephFilesystem , and CephBlockPool . 3.3.5. Lifecycle Rook-Ceph operator manages the lifecycle of the following pods in the Ceph cluster: Rook operator A single pod that owns the reconcile of the cluster. RBD CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . CephFS CSI Driver Two provisioner pods, managed by a single deployment. One plugin pod per node, managed by a daemonset . Monitors (mons) Three mon pods, each with its own deployment. Stretch clusters Contain five mon pods, one in the arbiter zone and two in each of the other two data zones. Manager (mgr) There is a single mgr pod for the cluster. Stretch clusters There are two mgr pods (starting with OpenShift Data Foundation 4.8), one in each of the two non-arbiter zones. Object storage daemons (OSDs) At least three OSDs are created initially in the cluster. More OSDs are added when the cluster is expanded. Metadata server (MDS) The CephFS metadata server has a single pod. RADOS gateway (RGW) The Ceph RGW daemon has a single pod. 3.4. MCG operator The Multicloud Object Gateway (MCG) operator is an operator for OpenShift Data Foundation along with the OpenShift Data Foundation operator and the Rook-Ceph operator. The MCG operator is available upstream as a standalone operator. The MCG operator performs the following primary functions: Controls and reconciles the Multicloud Object Gateway (MCG) component within OpenShift Data Foundation. Manages new user resources such as object bucket claims, bucket classes, and backing stores. Creates the default out-of-the-box resources. A few configurations and information are passed to the MCG operator through the OpenShift Data Foundation operator. 3.4.1. Components The MCG operator does not have sub-components. However, it consists of a reconcile loop for the different resources that are controlled by it. The MCG operator has a command-line interface (CLI) and is available as a part of OpenShift Data Foundation. It enables the creation, deletion, and querying of various resources. This CLI adds a layer of input sanitation and status validation before the configurations are applied unlike applying a YAML file directly. 3.4.2. Responsibilities and resources The MCG operator reconciles and is responsible for the custom resource definitions (CRDs) and OpenShift Container Platform entities. Backing store Namespace store Bucket class Object bucket claims (OBCs) NooBaa, pod stateful sets CRD Prometheus Rules and Service Monitoring Horizontal pod autoscaler (HPA) Backing store A resource that the customer has connected to the MCG component. This resource provides MCG the ability to save the data of the provisioned buckets on top of it. A default backing store is created as part of the deployment depending on the platform that the OpenShift Container Platform is running on. For example, when OpenShift Container Platform or OpenShift Data Foundation is deployed on Amazon Web Services (AWS), it results in a default backing store which is an AWS::S3 bucket. Similarly, for Microsoft Azure, the default backing store is a blob container and so on. The default backing stores are created using CRDs for the cloud credential operator, which comes with OpenShift Container Platform. There is no limit on the amount of the backing stores that can be added to MCG. 
The backing stores are used in the bucket class CRD to define the different policies of the bucket. Refer the documentation of the specific OpenShift Data Foundation version to identify the types of services or resources supported as backing stores. Namespace store Resources that are used in namespace buckets. No default is created during deployment. Bucketclass A default or initial policy for a newly provisioned bucket. The following policies are set in a bucketclass: Placement policy Indicates the backing stores to be attached to the bucket and used to write the data of the bucket. This policy is used for data buckets and for cache policies to indicate the local cache placement. There are two modes of placement policy: Spread. Strips the data across the defined backing stores Mirror. Creates a full replica on each backing store Namespace policy A policy for the namespace buckets that defines the resources that are being used for aggregation and the resource used for the write target. Cache Policy This is a policy for the bucket and sets the hub (the source of truth) and the time to live (TTL) for the cache items. A default bucket class is created during deployment and it is set with a placement policy that uses the default backing store. There is no limit to the number of bucket class that can be added. Refer to the documentation of the specific OpenShift Data Foundation version to identify the types of policies that are supported. Object bucket claims (OBCs) CRDs that enable provisioning of S3 buckets. With MCG, OBCs receive an optional bucket class to note the initial configuration of the bucket. If a bucket class is not provided, the default bucket class is used. NooBaa, pod stateful sets CRD An internal CRD that controls the different pods of the NooBaa deployment such as the DB pod, the core pod, and the endpoints. This CRD must not be changed as it is internal. This operator reconciles the following entities: DB pod SCC Role Binding and Service Account to allow SSO single sign-on between OpenShift Container Platform and NooBaa user interfaces Route for S3 access Certificates that are taken and signed by the OpenShift Container Platform and are set on the S3 route Prometheus rules and service monitoring These CRDs set up scraping points for Prometheus and alert rules that are supported by MCG. Horizontal pod autoscaler (HPA) It is Integrated with the MCG endpoints. The endpoint pods scale up and down according to CPU pressure (amount of S3 traffic). 3.4.3. High availability As an operator, the only high availability provided is that the OpenShift Container Platform reschedules a failed pod. 3.4.4. Relevant log files To troubleshoot issues with the NooBaa operator, you can look at the following: Operator pod logs, which are also available through the must-gather. Different CRDs or entities and their statuses that are available through the must-gather. 3.4.5. Lifecycle The MCG operator runs and reconciles after OpenShift Data Foundation is deployed and until it is uninstalled.
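Note As an illustration of the resources described above, the following is a minimal sketch of an ObjectBucketClaim of the kind the MCG operator provisions. The claim name and namespace are placeholders, and the storage class name is the one typically created for MCG buckets; verify the exact name in your deployment before applying it.
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: example-obc                    # placeholder claim name
  namespace: openshift-storage         # placeholder namespace
spec:
  generateBucketName: example-bucket   # prefix for the generated bucket name
  storageClassName: openshift-storage.noobaa.io   # typical MCG bucket class; confirm in your cluster
When the claim is bound, the bucket coordinates and credentials are made available to applications through a ConfigMap and a Secret associated with the claim.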
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/red_hat_openshift_data_foundation_architecture/openshift_data_foundation_operators
Chapter 3. Bug fixes
Chapter 3. Bug fixes In this release of Red Hat Trusted Profile Analyzer (RHTPA), we fixed the following bugs. In addition to these fixes, we list the descriptions of previously known issues found in earlier versions that we fixed. Fixed an inconsistency when a CVE has many CVSS scores Before this update, vulnerabilities with many Common Vulnerability Scoring System (CVSS) scores were inconsistently displayed when applying a filter. This was happening because the first CVSS score ordered the initial list of vulnerabilities, but the second score reordered the same list when applying a filter, giving an inconsistent list of vulnerabilities. With this release, we fixed this order inconsistency by always applying the highest score when ordering the list of vulnerabilities, even when applying a filter. This gives consistency to the vulnerabilities list. Changed the strategy type for deploying the spog-api and the collectorist-api in OpenShift Before this update, the default strategy type for deploying the spog-api and the collectorist-api in OpenShift was a rolling strategy. Using the rolling strategy when deploying these two APIs mounts a volume with a ReadWriteOnce policy. This caused the pods to fail when redeploying the RHTPA application, because the rolling strategy does not scale down, and the volume is in use by the existing pods. With this release, we changed the default strategy from rolling to recreate for the spog-api and the collectorist-api pods. Vulnerability count mismatch Before this update, there was a vulnerability count mismatch between the Common Vulnerabilities and Exposures (CVE) panel and the Software Bill of Materials (SBOM) dashboard. With this release, we fixed the vulnerability count mismatch between the CVE panel and the SBOM dashboard. Duplicate SBOMs displayed in the RHTPA console We fixed a bug when retrieving data from the Graph for Understanding Artifact Composition (GUAC) engine by implementing proper identification for packages that use a hash within software bill of materials (SBOM) documents. This fix prevents duplicate entries from being displayed when they refer to the same SBOM. Errors with cyclical dependencies within SBOM documents Some software bill of materials (SBOM) documents contain cyclical dependencies for packages, which was causing errors with the expected data. We fixed a bug with the Graph for Understanding Artifact Composition (GUAC) engine, so the graph is properly traversed from a package to the product it belongs to. With this update, the package details page reports the correct product association. SBOM data does not load properly when uploading a large SBOM Before this update, when uploading a large software bill of materials (SBOM) document, for example an SBOM that includes 50,000 packages, the RHTPA dashboard did not load properly. With this release, we fixed an issue with Keycloak's access token expiring before the SBOM could finish uploading its data. Uploading large SBOM documents now works as expected, and they display properly in the RHTPA dashboard.
null
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/release_notes/bug-fixes
Chapter 21. Configuring time synchronization by using the timesync RHEL System Role
Chapter 21. Configuring time synchronization by using the timesync RHEL System Role With the timesync RHEL System Role, you can manage time synchronization on multiple target machines on RHEL using Red Hat Ansible Automation Platform. 21.1. The timesync RHEL System Role You can manage time synchronization on multiple target machines using the timesync RHEL System Role. The timesync role installs and configures an NTP or PTP implementation to operate as an NTP client or PTP replica in order to synchronize the system clock with NTP servers or grandmasters in PTP domains. Note that using the timesync role also facilitates the migration to chrony , because you can use the same playbook on all versions of Red Hat Enterprise Linux starting with RHEL 6 regardless of whether the system uses ntp or chrony to implement the NTP protocol. 21.2. Applying the timesync System Role for a single pool of servers The following example shows how to apply the timesync role in a situation with just one pool of servers. Warning The timesync role replaces the configuration of the given or detected provider service on the managed host. Previous settings are lost, even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file which lists the systems on which you want to deploy the timesync System Role. Procedure Create a new playbook.yml file with the following content: --- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: 2.rhel.pool.ntp.org pool: yes iburst: yes roles: - rhel-system-roles.timesync Optional: Verify playbook syntax. Run the playbook on your inventory file: 21.3. Applying the timesync System Role on client servers You can use the timesync role to enable Network Time Security (NTS) on NTP clients. Network Time Security (NTS) is an authentication mechanism specified for Network Time Protocol (NTP). It verifies that NTP packets exchanged between the server and client are not altered. Warning The timesync role replaces the configuration of the given or detected provider service on the managed host. Previous settings are lost even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined. Prerequisites You do not have to have Red Hat Ansible Automation Platform installed on the systems on which you want to deploy the timesync solution. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file which lists the systems on which you want to deploy the timesync System Role. The chrony NTP provider version is 4.0 or later. Procedure Create a playbook.yml file with the following content: --- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de iburst: yes nts: yes roles: - rhel-system-roles.timesync ptbtime1.ptb.de is an example of a public server. You may want to use a different public server or your own server. Optional: Verify playbook syntax. Run the playbook on your inventory file: Verification Perform a test on the client machine: Check that the number of reported cookies is larger than zero. Additional resources chrony.conf(5) man page 21.4.
timesync System Roles variables You can pass the following variable to the timesync role: timesync_ntp_servers : Role variable settings Description hostname: host.example.com Hostname or address of the server minpoll: number Minimum polling interval. Default: 6 maxpoll: number Maximum polling interval. Default: 10 iburst: yes Flag enabling fast initial synchronization. Default: no pool: yes Flag indicating that each resolved address of the hostname is a separate NTP server. Default: no nts: yes Flag to enable Network Time Security (NTS). Default: no. Supported only with chrony >= 4.0. Additional resources For a detailed reference on timesync role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/timesync directory.
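Note To tie these variables together, the following is a sketch of a playbook that combines a server pool with an NTS-enabled server and explicit polling intervals. The hostnames are examples only, and the nts option still requires the chrony NTP provider version 4.0 or later.
---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org   # example pool; each resolved address is a separate server
        pool: yes
        iburst: yes
        minpoll: 4
        maxpoll: 8
      - hostname: ptbtime1.ptb.de       # example NTS-capable public server
        iburst: yes
        nts: yes
  roles:
    - rhel-system-roles.timesync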
[ "--- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: 2.rhel.pool.ntp.org pool: yes iburst: yes roles: - rhel-system-roles.timesync", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory_file /path/to/file/playbook.yml", "--- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de iburst: yes nts: yes roles: - rhel-system-roles.timesync", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory_file /path/to/file/playbook.yml", "chronyc -N authdata Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ===================================================================== ptbtime1.ptb.de NTS 1 15 256 157 0 0 8 100" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/configuring-time-synchronization-by-using-the-timesync-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
Chapter 5. Checking DNS records using IdM Healthcheck
Chapter 5. Checking DNS records using IdM Healthcheck You can identify issues with DNS records in Identity Management (IdM) using the Healthcheck tool. Prerequisites The DNS records Healthcheck tool is only available on RHEL 8.2 or newer. 5.1. DNS records healthcheck test The Healthcheck tool includes a test for checking that the expected DNS records required for autodiscovery are resolvable. To list all tests, run the ipa-healthcheck with the --list-sources option: You can find the DNS records check test under the ipahealthcheck.ipa.idns source. IPADNSSystemRecordsCheck This test checks the DNS records from the ipa dns-update-system-records --dry-run command using the first resolver specified in the /etc/resolv.conf file. The records are tested on the IPA server. 5.2. Screening DNS records using the healthcheck tool Follow this procedure to run a standalone manual test of DNS records on an Identity Management (IdM) server using the Healthcheck tool. The Healthcheck tool includes many tests. Results can be narrowed down by including only the DNS records tests by adding the --source ipahealthcheck.ipa.idns option. Prerequisites You must perform Healthcheck tests as the root user. Procedure To run the DNS records check, enter: If the record is resolvable, the test returns SUCCESS as a result: The test returns a WARNING when, for example, the number of records does not match the expected number: Additional resources See man ipa-healthcheck .
[ "ipa-healthcheck --list-sources", "ipa-healthcheck --source ipahealthcheck.ipa.idns", "{ \"source\": \"ipahealthcheck.ipa.idns\", \"check\": \"IPADNSSystemRecordsCheck\", \"result\": \"SUCCESS\", \"uuid\": \"eb7a3b68-f6b2-4631-af01-798cac0eb018\", \"when\": \"20200415143339Z\", \"duration\": \"0.210471\", \"kw\": { \"key\": \"_ldap._tcp.idm.example.com.:server1.idm.example.com.\" } }", "{ \"source\": \"ipahealthcheck.ipa.idns\", \"check\": \"IPADNSSystemRecordsCheck\", \"result\": \"WARNING\", \"uuid\": \"972b7782-1616-48e0-bd5c-49a80c257895\", \"when\": \"20200409100614Z\", \"duration\": \"0.203049\", \"kw\": { \"msg\": \"Got {count} ipa-ca A records, expected {expected}\", \"count\": 2, \"expected\": 1 } }" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/using_idm_healthcheck_to_monitor_your_idm_environment/checking-dns-records-using-idm-healthcheck_using-idm-healthcheck-to-monitor-your-idm-environment
Chapter 2. Migrating Camel Routes from Fuse 7 to Camel
Chapter 2. Migrating Camel Routes from Fuse 7 to Camel Note You can define Camel routes in Red Hat build of Apache Camel for Quarkus applications using Java DSL, XML IO DSL, or YAML. 2.1. Java DSL route migration example To migrate a Java DSL route definition from your Fuse application to CEQ, you can copy your existing route definition directly to your Red Hat build of Apache Camel for Quarkus application and add the necessary dependencies to your Red Hat build of Apache Camel for Quarkus pom.xml file. In this example, we will migrate a content-based route definition from a Fuse 7 application to a new CEQ application by copying the Java DSL route to a file named Routes.java in your CEQ application. Procedure Using the code.quarkus.redhat.com website, select the extensions required for this example: camel-quarkus-file camel-quarkus-xpath Navigate to the directory where you extracted the generated project files from the step: USD cd <directory_name> Create a file named Routes.java in the src/main/java/org/acme/ subfolder. Add the route definition from your Fuse application to the Routes.java , similar to the following example: package org.acme; import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { // Add your Java DSL route definition here public void configure() { from("file:work/cbr/input") .log("Receiving order USD{file:name}") .choice() .when().xpath("//order/customer/country[text() = 'UK']") .log("Sending order USD{file:name} to the UK") .to("file:work/cbr/output/uk") .when().xpath("//order/customer/country[text() = 'US']") .log("Sending order USD{file:name} to the US") .to("file:work/cbr/output/uk") .otherwise() .log("Sending order USD{file:name} to another country") .to("file:work/cbr/output/others"); } } Compile your CEQ application. mvn clean compile quarkus:dev Note This command compiles the project, starts your application, and lets the Quarkus tooling watch for changes in your workspace. Any modifications in your project will automatically take effect in the running application. 2.2. Blueprint XML DSL route migration To migrate a Blueprint XML route definition from your Fuse application to CEQ, use the camel-quarkus-xml-io-dsl extension and copy your Fuse application route definition directly to your CEQ application. You will then need to add the necessary dependencies to the CEQ pom.xml file and update your CEQ configuration in the application.properties file. Note CEQ supports Camel 3, whereas Fuse 7 supports Camel 2. For more information relating to upgrading Camel when you migrate your Red Hat Fuse 7 application to CEQ, see Migrating from Camel 2 to Camel 3 . For more information about using beans in Camel Quarkus, see the CDI and the Camel Bean Component section in the Developing Applications with Red Hat build of Apache Camel for Quarkus guide. 2.2.1. XML-IO-DSL limitations You can use the camel-quarkus-xml-io-dsl extension to assist with migrating a Blueprint XML route definition to CEQ. The camel-quarkus-xml-io-dsl extension only supports the following <camelContext> sub-elements: routeTemplates templatedRoutes rests routes routeConfigurations Note As Blueprint XML supports other bean definitions that are not supported by the camel-quarkus-xml-io-dsl extension, you may need to rewrite other bean definitions that are included in your Blueprint XML route definition. You must define every element (XML IO DSL) in a separate file. 
For example, this is a simplified example of a Blueprint XML route definition: <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <restConfiguration contextPath="/camel" /> <rest path="/books"> <get uri="/"> <to ..../> </get> </rest> <route> <from ..../> </route> </camelContext> </blueprint> You can migrate this Blueprint XML route definition to CEQ using XML IO DSL as defined in the following files: src/main/resources/routes/camel-rests.xml <rests xmlns="http://camel.apache.org/schema/spring"> <rest path="/books"> <get path="/"> <to ..../> </get> </rest> </rests> src/main/resources/routes/camel-routes.xml <routes xmlns="http://camel.apache.org/schema/spring"> <route> <from ..../> </route> </routes> You must use Java DSL to define other elements which are not supported, such as <restConfiguration> . For example, using a route builder defined in a camel-rests.xml file as follows: src/main/resources/routes/camel-rests.xml import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { public void configure() { restConfiguration() .contextPath("/camel"); } } 2.2.2. Blueprint XML DSL route migration example Note {Link For more information about using the XML IO DSL extension, see the XML IO DSL documentation in the Red Hat build of Apache Camel for Quarkus Extensions. In this example, you are migrating a content-based route definition from a Fuse application to a new CEQ application by copying the Blueprint XML route definition to a file named camel-routes.xml in your CEQ application. Procedure Using the code.quarkus.redhat.com website, select the following extensions for this example: camel-quarkus-xml-io-dsl camel-quarkus-file camel-quarkus-xpath Select Generate your application to confirm your choices and display the overlay screen with the download link for the archive that contains your generated project. Select Download the ZIP to save the archive with the generated project files to your machine. Extract the contents of the archive. Navigate to the directory where you extracted the generated project files from the step: USD cd <directory_name> Create a file named camel-routes.xml in the src/main/resources/routes/ directory. 
Copy the <route> element and sub-elements from the following blueprint-example.xml example to the camel-routes.xml file: blueprint-example.xml <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"> <camelContext id="cbr-example-context" xmlns="http://camel.apache.org/schema/blueprint"> <route id="cbr-route"> <from id="_from1" uri="file:work/cbr/input"/> <log id="_log1" message="Receiving order USD{file:name}"/> <choice id="_choice1"> <when id="_when1"> <xpath id="_xpath1">/order/customer/country = 'UK'</xpath> <log id="_log2" message="Sending order USD{file:name} to the UK"/> <to id="_to1" uri="file:work/cbr/output/uk"/> </when> <when id="_when2"> <xpath id="_xpath2">/order/customer/country = 'US'</xpath> <log id="_log3" message="Sending order USD{file:name} to the US"/> <to id="_to2" uri="file:work/cbr/output/us"/> </when> <otherwise id="_otherwise1"> <log id="_log4" message="Sending order USD{file:name} to another country"/> <to id="_to3" uri="file:work/cbr/output/others"/> </otherwise> </choice> <log id="_log5" message="Done processing USD{file:name}"/> </route> </camelContext> </blueprint> camel-routes.xml <route id="cbr-route"> <from id="_from1" uri="file:work/cbr/input"/> <log id="_log1" message="Receiving order USD{file:name}"/> <choice id="_choice1"> <when id="_when1"> <xpath id="_xpath1">/order/customer/country = 'UK'</xpath> <log id="_log2" message="Sending order USD{file:name} to the UK"/> <to id="_to1" uri="file:work/cbr/output/uk"/> </when> <when id="_when2"> <xpath id="_xpath2">/order/customer/country = 'US'</xpath> <log id="_log3" message="Sending order USD{file:name} to the US"/> <to id="_to2" uri="file:work/cbr/output/us"/> </when> <otherwise id="_otherwise1"> <log id="_log4" message="Sending order USD{file:name} to another country"/> <to id="_to3" uri="file:work/cbr/output/others"/> </otherwise> </choice> <log id="_log5" message="Done processing USD{file:name}"/> </route> Modify application.properties # Camel # camel.context.name = camel-quarkus-xml-io-dsl-example camel.main.routes-include-pattern = file:src/main/resources/routes/camel-routes.xml Compile your CEQ application. mvn clean compile quarkus:dev Note This command compiles the project, starts your application, and lets the Quarkus tooling watch for changes in your workspace. Any modifications in your project will automatically take effect in the running application.
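Note As mentioned at the start of this chapter, CEQ can also load routes written in YAML. As a rough, untested sketch under that assumption, the content-based route above could be expressed in the YAML DSL as follows; it assumes the camel-quarkus-yaml-dsl extension is added, that camel.main.routes-include-pattern points at the file (for example file:src/main/resources/routes/camel-routes.yaml ), and the exact DSL keys should be verified against your Camel version.
- route:
    id: cbr-route
    from:
      uri: file:work/cbr/input
      steps:
        - log: "Receiving order ${file:name}"
        - choice:
            when:
              - xpath: "/order/customer/country = 'UK'"
                steps:
                  - log: "Sending order ${file:name} to the UK"
                  - to: file:work/cbr/output/uk
              - xpath: "/order/customer/country = 'US'"
                steps:
                  - log: "Sending order ${file:name} to the US"
                  - to: file:work/cbr/output/us
            otherwise:
              steps:
                - log: "Sending order ${file:name} to another country"
                - to: file:work/cbr/output/others
        - log: "Done processing ${file:name}"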
[ "cd <directory_name>", "package org.acme; import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { // Add your Java DSL route definition here public void configure() { from(\"file:work/cbr/input\") .log(\"Receiving order USD{file:name}\") .choice() .when().xpath(\"//order/customer/country[text() = 'UK']\") .log(\"Sending order USD{file:name} to the UK\") .to(\"file:work/cbr/output/uk\") .when().xpath(\"//order/customer/country[text() = 'US']\") .log(\"Sending order USD{file:name} to the US\") .to(\"file:work/cbr/output/uk\") .otherwise() .log(\"Sending order USD{file:name} to another country\") .to(\"file:work/cbr/output/others\"); } }", "mvn clean compile quarkus:dev", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <restConfiguration contextPath=\"/camel\" /> <rest path=\"/books\"> <get uri=\"/\"> <to ..../> </get> </rest> <route> <from ..../> </route> </camelContext> </blueprint>", "<rests xmlns=\"http://camel.apache.org/schema/spring\"> <rest path=\"/books\"> <get path=\"/\"> <to ..../> </get> </rest> </rests>", "<routes xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from ..../> </route> </routes>", "import org.apache.camel.builder.RouteBuilder; public class Routes extends RouteBuilder { public void configure() { restConfiguration() .contextPath(\"/camel\"); } }", "cd <directory_name>", "<blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\"> <camelContext id=\"cbr-example-context\" xmlns=\"http://camel.apache.org/schema/blueprint\"> <route id=\"cbr-route\"> <from id=\"_from1\" uri=\"file:work/cbr/input\"/> <log id=\"_log1\" message=\"Receiving order USD{file:name}\"/> <choice id=\"_choice1\"> <when id=\"_when1\"> <xpath id=\"_xpath1\">/order/customer/country = 'UK'</xpath> <log id=\"_log2\" message=\"Sending order USD{file:name} to the UK\"/> <to id=\"_to1\" uri=\"file:work/cbr/output/uk\"/> </when> <when id=\"_when2\"> <xpath id=\"_xpath2\">/order/customer/country = 'US'</xpath> <log id=\"_log3\" message=\"Sending order USD{file:name} to the US\"/> <to id=\"_to2\" uri=\"file:work/cbr/output/us\"/> </when> <otherwise id=\"_otherwise1\"> <log id=\"_log4\" message=\"Sending order USD{file:name} to another country\"/> <to id=\"_to3\" uri=\"file:work/cbr/output/others\"/> </otherwise> </choice> <log id=\"_log5\" message=\"Done processing USD{file:name}\"/> </route> </camelContext> </blueprint>", "<route id=\"cbr-route\"> <from id=\"_from1\" uri=\"file:work/cbr/input\"/> <log id=\"_log1\" message=\"Receiving order USD{file:name}\"/> <choice id=\"_choice1\"> <when id=\"_when1\"> <xpath id=\"_xpath1\">/order/customer/country = 'UK'</xpath> <log id=\"_log2\" message=\"Sending order USD{file:name} to the UK\"/> <to id=\"_to1\" uri=\"file:work/cbr/output/uk\"/> </when> <when id=\"_when2\"> <xpath id=\"_xpath2\">/order/customer/country = 'US'</xpath> <log id=\"_log3\" message=\"Sending order USD{file:name} to the US\"/> <to id=\"_to2\" uri=\"file:work/cbr/output/us\"/> </when> <otherwise id=\"_otherwise1\"> <log id=\"_log4\" message=\"Sending order USD{file:name} to another country\"/> <to id=\"_to3\" uri=\"file:work/cbr/output/others\"/> </otherwise> </choice> <log id=\"_log5\" message=\"Done processing USD{file:name}\"/> </route>", "Camel # camel.context.name = camel-quarkus-xml-io-dsl-example camel.main.routes-include-pattern = file:src/main/resources/routes/camel-routes.xml", "mvn clean compile quarkus:dev" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/migrating_fuse_7_applications_to_red_hat_build_of_apache_camel_for_quarkus/migrating_camel_routes_from_fuse_7_to_camel
Sandboxed Containers Support for OpenShift
Sandboxed Containers Support for OpenShift OpenShift Container Platform 4.10 OpenShift sandboxed containers guide Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/sandboxed_containers_support_for_openshift/index
Installing JBoss EAP by using the Red Hat Ansible Certified Content Collection
Installing JBoss EAP by using the Red Hat Ansible Certified Content Collection Red Hat JBoss Enterprise Application Platform 7.4 Automating deployments of JBoss EAP 7.4 with the Red Hat Ansible Certified Content Collection Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/installing_jboss_eap_by_using_the_red_hat_ansible_certified_content_collection/index
Chapter 85. CSimple
Chapter 85. CSimple The CSimple language is compiled Simple language. 85.1. Different between CSimple and Simple The simple language is a dynamic expression language which is runtime parsed into a set of Camel Expressions or Predicates. The csimple language is parsed into regular Java source code and compiled together with all the other source code, or compiled once during bootstrap via the camel-csimple-joor module. The simple language is generally very lightweight and fast, however for some use-cases with dynamic method calls via OGNL paths, then the simple language does runtime introspection and reflection calls. This has an overhead on performance, and was one of the reasons why csimple was created. The csimple language requires to be typesafe and method calls via OGNL paths requires to know the type during parsing. This means for csimple languages expressions you would need to provide the class type in the script, whereas simple introspects this at runtime. In other words the simple language is using duck typing (if it looks like a duck, and quacks like a duck, then it is a duck) and csimple is using Java type (typesafety). If there is a type error then simple will report this at runtime, and with csimple there will be a Java compilation error. 85.1.1. Additional CSimple functions The csimple language includes some additional functions to support common use-cases working with Collection , Map or array types. The following functions bodyAsIndex , headerAsIndex , and exchangePropertyAsIndex is used for these use-cases as they are typed. Function Type Description bodyAsIndex( type , index ) Type To be used for collecting the body from an existing Collection , Map or array (lookup by the index) and then converting the body to the given type determined by its classname. The converted body can be null. mandatoryBodyAsIndex( type , index ) Type To be used for collecting the body from an existing Collection , Map or array (lookup by the index) and then converting the body to the given type determined by its classname. Expects the body to be not null. headerAsIndex( key , type , index ) Type To be used for collecting a header from an existing Collection , Map or array (lookup by the index) and then converting the header value to the given type determined by its classname. The converted header can be null. mandatoryHeaderAsIndex( key , type , index ) Type To be used for collecting a header from an existing Collection , Map or array (lookup by the index) and then converting the header value to the given type determined by its classname. Expects the header to be not null. exchangePropertyAsIndex( key , type , index ) Type To be used for collecting an exchange property from an existing Collection , Map or array (lookup by the index) and then converting the exchange property to the given type determined by its classname. The converted exchange property can be null. mandatoryExchangePropertyAsIndex( key , type , index ) Type To be used for collecting an exchange property from an existing Collection , Map or array (lookup by the index) and then converting the exchange property to the given type determined by its classname. Expects the exchange property to be not null. For example given the following simple expression: This script has no type information, and the simple language will resolve this at runtime, by introspecting the message body and if it's a collection based then lookup the first element, and then invoke a method named getName via reflection. 
In csimple (compiled) we want to precompile this, and therefore the end user must provide type information with the bodyAsIndex function: 85.2. Compilation The csimple language is parsed into regular Java source code and compiled together with all the other source code, or it can be compiled once during bootstrap via the camel-csimple-joor module. There are two ways to compile csimple: using the camel-csimple-maven-plugin, which generates source code at build time, or using camel-csimple-joor, which performs runtime in-memory compilation during bootstrap of Camel. 85.2.1. Using camel-csimple-maven-plugin The camel-csimple-maven-plugin Maven plugin discovers all the csimple scripts in the source code and then automatically generates source code in the src/generated/java folder, which then gets compiled together with all the other sources. The Maven plugin scans the source code of .java and .xml files (Java and XML DSL). The scanner is limited to detecting certain code patterns, and it may miss some csimple scripts if they are used in unusual or rare ways. The runtime compilation using camel-csimple-joor does not have this limitation. The benefit is that all the csimple scripts are compiled using the regular Java compiler, so everything is included out of the box as .class files in the application JAR file, and no additional dependencies are required at runtime. To use camel-csimple-maven-plugin you need to add it to your pom.xml file as shown: <plugins> <!-- generate source code for csimple languages --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-maven-plugin</artifactId> <version>USD{camel.version}</version> <executions> <execution> <id>generate</id> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> <!-- include source code generated to maven sources paths --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>3.1.0</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>add-source</goal> <goal>add-resource</goal> </goals> <configuration> <sources> <source>src/generated/java</source> </sources> <resources> <resource> <directory>src/generated/resources</directory> </resource> </resources> </configuration> </execution> </executions> </plugin> </plugins> And then you must also add the build-helper-maven-plugin Maven plugin to include src/generated in the list of source folders for the Java compiler, to ensure the generated source code is compiled and included in the application JAR file. See the camel-example-csimple example at Camel Examples which uses the Maven plugin. 85.2.2. Using camel-csimple-joor The jOOR library integrates with the Java compiler and performs runtime compilation of Java code. The supported runtimes when using camel-csimple-joor are Java standalone, Spring Boot, Camel Quarkus and other microservices runtimes. It is not supported in OSGi, Camel Karaf or any kind of Java Application Server runtime. jOOR does not support runtime compilation with Spring Boot using fat jar packaging ( https://github.com/jOOQ/jOOR/issues/69 ); it works with an exploded classpath. To use camel-csimple-joor you just add it as a dependency on the classpath: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-joor</artifactId> <version>{CamelSBProjectVersion}</version> </dependency> There is no need to add Maven plugins to the pom.xml file.
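The following is a minimal Java DSL sketch, not taken from this guide, showing the kind of csimple script that either compilation approach picks up; the com.foo.MyUser class, its getName() method, and the direct:users endpoint are assumptions for illustration only.
import org.apache.camel.builder.RouteBuilder;

public class UserRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:users")
            // The target class is stated explicitly, so the generated Java code can
            // call getName() directly instead of using runtime reflection.
            .transform(csimple("Hello ${bodyAsIndex(com.foo.MyUser, 0).name}"))
            .log("${body}");
    }
}
With the Maven plugin this script is detected at build time and turned into generated source under src/generated/java; with camel-csimple-joor it is compiled in memory when Camel starts.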
See the camel-example-csimple-joor example at Camel Examples which uses the jOOR compiler. 85.3. CSimple Language options The CSimple language supports 2 options, which are listed below. Name Default Java Type Description resultType String Sets the class name of the result type (type from output). trim Boolean Whether to trim the value to remove leading and trailing whitespaces and line breaks. 85.4. Limitations Currently, the csimple language does not support: nested functions (aka functions inside functions) the null safe operator ( ? ). For example the following scripts cannot compile: Hello USD{bean:greeter(USD{body}, USD{header.counter})} USD{bodyAs(MyUser)?.address?.zip} > 10000 85.5. Auto imports The csimple language will automatically import from: 85.6. Configuration file You can configure the csimple language in the camel-csimple.properties file which is loaded from the root classpath. For example you can add additional imports in the camel-csimple.properties file by adding: You can also add aliases (key=value) where an alias will be used as a shorthand replacement in the code. Which allows to use echo() in the csimple language script such as: from("direct:hello") .transform(csimple("Hello echo()")) .log("You said USD{body}"); The echo() alias will be replaced with its value resulting in a script as: .transform(csimple("Hello USD{bodyAs(String)} USD{bodyAs(String)}")) 85.7. See Also See the Simple language as csimple has the same set of functions as simple language. 85.8. Spring Boot Auto-Configuration When using csimple with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency> The component supports 147 options, which are listed below. Name Description Default Type camel.cloud.consul.service-discovery.acl-token Sets the ACL token to be used with Consul. String camel.cloud.consul.service-discovery.block-seconds The seconds to wait for a watch event, default 10 seconds. 10 Integer camel.cloud.consul.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.consul.service-discovery.connect-timeout-millis Connect timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.datacenter The data center. String camel.cloud.consul.service-discovery.enabled Enable the component. true Boolean camel.cloud.consul.service-discovery.password Sets the password to be used for basic authentication. String camel.cloud.consul.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.consul.service-discovery.read-timeout-millis Read timeout for OkHttpClient. Long camel.cloud.consul.service-discovery.url The Consul agent URL. String camel.cloud.consul.service-discovery.user-name Sets the username to be used for basic authentication. String camel.cloud.consul.service-discovery.write-timeout-millis Write timeout for OkHttpClient. Long camel.cloud.dns.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.dns.service-discovery.domain The domain name;. String camel.cloud.dns.service-discovery.enabled Enable the component. true Boolean camel.cloud.dns.service-discovery.properties Set client properties to use. 
These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.dns.service-discovery.proto The transport protocol of the desired service. _tcp String camel.cloud.etcd.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.etcd.service-discovery.enabled Enable the component. true Boolean camel.cloud.etcd.service-discovery.password The password to use for basic authentication. String camel.cloud.etcd.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.etcd.service-discovery.service-path The path to look for for service discovery. /services/ String camel.cloud.etcd.service-discovery.timeout To set the maximum time an action could take to complete. Long camel.cloud.etcd.service-discovery.type To set the discovery type, valid values are on-demand and watch. on-demand String camel.cloud.etcd.service-discovery.uris The URIs the client can connect to. String camel.cloud.etcd.service-discovery.user-name The user name to use for basic authentication. String camel.cloud.kubernetes.service-discovery.api-version Sets the API version when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-data Sets the Certificate Authority data when using client lookup. String camel.cloud.kubernetes.service-discovery.ca-cert-file Sets the Certificate Authority data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-data Sets the Client Certificate data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-cert-file Sets the Client Certificate data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-algo Sets the Client Keystore algorithm, such as RSA when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-data Sets the Client Keystore data when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-file Sets the Client Keystore data that are loaded from the file when using client lookup. String camel.cloud.kubernetes.service-discovery.client-key-passphrase Sets the Client Keystore passphrase when using client lookup. String camel.cloud.kubernetes.service-discovery.configurations Define additional configuration definitions. Map camel.cloud.kubernetes.service-discovery.dns-domain Sets the DNS domain to use for DNS lookup. String camel.cloud.kubernetes.service-discovery.enabled Enable the component. true Boolean camel.cloud.kubernetes.service-discovery.lookup How to perform service lookup. Possible values: client, dns, environment. When using client, then the client queries the kubernetes master to obtain a list of active pods that provides the service, and then random (or round robin) select a pod. When using dns the service name is resolved as name.namespace.svc.dnsDomain. When using dnssrv the service name is resolved with SRV query for . ... svc... When using environment then environment variables are used to lookup the service. By default environment is used. environment String camel.cloud.kubernetes.service-discovery.master-url Sets the URL to the master when using client lookup. 
String camel.cloud.kubernetes.service-discovery.namespace Sets the namespace to use. Will by default use namespace from the ENV variable KUBERNETES_MASTER. String camel.cloud.kubernetes.service-discovery.oauth-token Sets the OAUTH token for authentication (instead of username/password) when using client lookup. String camel.cloud.kubernetes.service-discovery.password Sets the password for authentication when using client lookup. String camel.cloud.kubernetes.service-discovery.port-name Sets the Port Name to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.port-protocol Sets the Port Protocol to use for DNS/DNSSRV lookup. String camel.cloud.kubernetes.service-discovery.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.kubernetes.service-discovery.trust-certs Sets whether to turn on trust certificate check when using client lookup. false Boolean camel.cloud.kubernetes.service-discovery.username Sets the username for authentication when using client lookup. String camel.cloud.ribbon.load-balancer.client-name Sets the Ribbon client name. String camel.cloud.ribbon.load-balancer.configurations Define additional configuration definitions. Map camel.cloud.ribbon.load-balancer.enabled Enable the component. true Boolean camel.cloud.ribbon.load-balancer.namespace The namespace. String camel.cloud.ribbon.load-balancer.password The password. String camel.cloud.ribbon.load-balancer.properties Set client properties to use. These properties are specific to what service call implementation are in use. For example if using ribbon, then the client properties are define in com.netflix.client.config.CommonClientConfigKey. Map camel.cloud.ribbon.load-balancer.username The username. String camel.hystrix.allow-maximum-size-to-diverge-from-core-size Allows the configuration for maximumSize to take effect. That value can then be equal to, or higher, than coreSize. false Boolean camel.hystrix.circuit-breaker-enabled Whether to use a HystrixCircuitBreaker or not. If false no circuit-breaker logic will be used and all requests permitted. This is similar in effect to circuitBreakerForceClosed() except that continues tracking metrics and knowing whether it should be open/closed, this property results in not even instantiating a circuit-breaker. true Boolean camel.hystrix.circuit-breaker-error-threshold-percentage Error percentage threshold (as whole number such as 50) at which point the circuit breaker will trip open and reject requests. It will stay tripped for the duration defined in circuitBreakerSleepWindowInMilliseconds; The error percentage this is compared against comes from HystrixCommandMetrics.getHealthCounts(). 50 Integer camel.hystrix.circuit-breaker-force-closed If true the HystrixCircuitBreaker#allowRequest() will always return true to allow requests regardless of the error percentage from HystrixCommandMetrics.getHealthCounts(). The circuitBreakerForceOpen() property takes precedence so if it set to true this property does nothing. false Boolean camel.hystrix.circuit-breaker-force-open If true the HystrixCircuitBreaker.allowRequest() will always return false, causing the circuit to be open (tripped) and reject all requests. This property takes precedence over circuitBreakerForceClosed();. 
false Boolean camel.hystrix.circuit-breaker-request-volume-threshold Minimum number of requests in the metricsRollingStatisticalWindowInMilliseconds() that must exist before the HystrixCircuitBreaker will trip. If below this number the circuit will not trip regardless of error percentage. 20 Integer camel.hystrix.circuit-breaker-sleep-window-in-milliseconds The time in milliseconds after a HystrixCircuitBreaker trips open that it should wait before trying requests again. 5000 Integer camel.hystrix.configurations Define additional configuration definitions. Map camel.hystrix.core-pool-size Core thread-pool size that gets passed to java.util.concurrent.ThreadPoolExecutor#setCorePoolSize(int). 10 Integer camel.hystrix.enabled Enable the component. true Boolean camel.hystrix.execution-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.run(). Requests beyond the concurrent limit will be rejected. Applicable only when executionIsolationStrategy == SEMAPHORE. 20 Integer camel.hystrix.execution-isolation-strategy What isolation strategy HystrixCommand.run() will be executed with. If THREAD then it will be executed on a separate thread and concurrent requests limited by the number of threads in the thread-pool. If SEMAPHORE then it will be executed on the calling thread and concurrent requests limited by the semaphore count. THREAD String camel.hystrix.execution-isolation-thread-interrupt-on-timeout Whether the execution thread should attempt an interrupt (using Future#cancel ) when a thread times out. Applicable only when executionIsolationStrategy() == THREAD. true Boolean camel.hystrix.execution-timeout-enabled Whether the timeout mechanism is enabled for this command. true Boolean camel.hystrix.execution-timeout-in-milliseconds Time in milliseconds at which point the command will timeout and halt execution. If executionIsolationThreadInterruptOnTimeout == true and the command is thread-isolated, the executing thread will be interrupted. If the command is semaphore-isolated and a HystrixObservableCommand, that command will get unsubscribed. 1000 Integer camel.hystrix.fallback-enabled Whether HystrixCommand.getFallback() should be attempted when failure occurs. true Boolean camel.hystrix.fallback-isolation-semaphore-max-concurrent-requests Number of concurrent requests permitted to HystrixCommand.getFallback(). Requests beyond the concurrent limit will fail-fast and not attempt retrieving a fallback. 10 Integer camel.hystrix.group-key Sets the group key to use. The default value is CamelHystrix. CamelHystrix String camel.hystrix.keep-alive-time Keep-alive time in minutes that gets passed to ThreadPoolExecutor#setKeepAliveTime(long,TimeUnit). 1 Integer camel.hystrix.max-queue-size Max queue size that gets passed to BlockingQueue in HystrixConcurrencyStrategy.getBlockingQueue(int) This should only affect the instantiation of a threadpool - it is not eliglible to change a queue size on the fly. For that, use queueSizeRejectionThreshold(). -1 Integer camel.hystrix.maximum-size Maximum thread-pool size that gets passed to ThreadPoolExecutor#setMaximumPoolSize(int) . This is the maximum amount of concurrency that can be supported without starting to reject HystrixCommands. Please note that this setting only takes effect if you also set allowMaximumSizeToDivergeFromCoreSize. 
10 Integer camel.hystrix.metrics-health-snapshot-interval-in-milliseconds Time in milliseconds to wait between allowing health snapshots to be taken that calculate success and error percentages and affect HystrixCircuitBreaker.isOpen() status. On high-volume circuits the continual calculation of error percentage can become CPU intensive thus this controls how often it is calculated. 500 Integer camel.hystrix.metrics-rolling-percentile-bucket-size Maximum number of values stored in each bucket of the rolling percentile. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-percentile-enabled Whether percentile metrics should be captured using HystrixRollingPercentile inside HystrixCommandMetrics. true Boolean camel.hystrix.metrics-rolling-percentile-window-buckets Number of buckets the rolling percentile window is broken into. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 6 Integer camel.hystrix.metrics-rolling-percentile-window-in-milliseconds Duration of percentile rolling window in milliseconds. This is passed into HystrixRollingPercentile inside HystrixCommandMetrics. 10000 Integer camel.hystrix.metrics-rolling-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside HystrixCommandMetrics. 10 Integer camel.hystrix.metrics-rolling-statistical-window-in-milliseconds This property sets the duration of the statistical rolling window, in milliseconds. This is how long metrics are kept for the thread pool. The window is divided into buckets and rolls by those increments. 10000 Integer camel.hystrix.queue-size-rejection-threshold Queue size rejection threshold is an artificial max size at which rejections will occur even if maxQueueSize has not been reached. This is done because the maxQueueSize of a BlockingQueue can not be dynamically changed and we want to support dynamically changing the queue size that affects rejections. This is used by HystrixCommand when queuing a thread for execution. 5 Integer camel.hystrix.request-log-enabled Whether HystrixCommand execution and events should be logged to HystrixRequestLog. true Boolean camel.hystrix.thread-pool-key Sets the thread pool key to use. Will by default use the same value as groupKey has been configured to use. CamelHystrix String camel.hystrix.thread-pool-rolling-number-statistical-window-buckets Number of buckets the rolling statistical window is broken into. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10 Integer camel.hystrix.thread-pool-rolling-number-statistical-window-in-milliseconds Duration of statistical rolling window in milliseconds. This is passed into HystrixRollingNumber inside each HystrixThreadPoolMetrics instance. 10000 Integer camel.language.constant.enabled Whether to enable auto configuration of the constant language. This is enabled by default. Boolean camel.language.constant.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.csimple.enabled Whether to enable auto configuration of the csimple language. This is enabled by default. Boolean camel.language.csimple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.exchangeproperty.enabled Whether to enable auto configuration of the exchangeProperty language. This is enabled by default. 
Boolean camel.language.exchangeproperty.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.file.enabled Whether to enable auto configuration of the file language. This is enabled by default. Boolean camel.language.file.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.header.enabled Whether to enable auto configuration of the header language. This is enabled by default. Boolean camel.language.header.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.ref.enabled Whether to enable auto configuration of the ref language. This is enabled by default. Boolean camel.language.ref.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.simple.enabled Whether to enable auto configuration of the simple language. This is enabled by default. Boolean camel.language.simple.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.language.tokenize.enabled Whether to enable auto configuration of the tokenize language. This is enabled by default. Boolean camel.language.tokenize.group-delimiter Sets the delimiter to use when grouping. If this has not been set then token will be used as the delimiter. String camel.language.tokenize.trim Whether to trim the value to remove leading and trailing whitespaces and line breaks. true Boolean camel.resilience4j.automatic-transition-from-open-to-half-open-enabled Enables automatic transition from OPEN to HALF_OPEN state once the waitDurationInOpenState has passed. false Boolean camel.resilience4j.circuit-breaker-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreaker instance to lookup and use from the registry. When using this, then any other circuit breaker options are not in use. String camel.resilience4j.config-ref Refers to an existing io.github.resilience4j.circuitbreaker.CircuitBreakerConfig instance to lookup and use from the registry. String camel.resilience4j.configurations Define additional configuration definitions. Map camel.resilience4j.enabled Enable the component. true Boolean camel.resilience4j.failure-rate-threshold Configures the failure rate threshold in percentage. If the failure rate is equal or greater than the threshold the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 50 percentage. Float camel.resilience4j.minimum-number-of-calls Configures the minimum number of calls which are required (per sliding window period) before the CircuitBreaker can calculate the error rate. For example, if minimumNumberOfCalls is 10, then at least 10 calls must be recorded, before the failure rate can be calculated. If only 9 calls have been recorded the CircuitBreaker will not transition to open even if all 9 calls have failed. Default minimumNumberOfCalls is 100. 100 Integer camel.resilience4j.permitted-number-of-calls-in-half-open-state Configures the number of permitted calls when the CircuitBreaker is half open. The size must be greater than 0. Default size is 10. 10 Integer camel.resilience4j.sliding-window-size Configures the size of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. slidingWindowSize configures the size of the sliding window. 
Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. The slidingWindowSize must be greater than 0. The minimumNumberOfCalls must be greater than 0. If the slidingWindowType is COUNT_BASED, the minimumNumberOfCalls cannot be greater than slidingWindowSize . If the slidingWindowType is TIME_BASED, you can pick whatever you want. Default slidingWindowSize is 100. 100 Integer camel.resilience4j.sliding-window-type Configures the type of the sliding window which is used to record the outcome of calls when the CircuitBreaker is closed. Sliding window can either be count-based or time-based. If slidingWindowType is COUNT_BASED, the last slidingWindowSize calls are recorded and aggregated. If slidingWindowType is TIME_BASED, the calls of the last slidingWindowSize seconds are recorded and aggregated. Default slidingWindowType is COUNT_BASED. COUNT_BASED String camel.resilience4j.slow-call-duration-threshold Configures the duration threshold (seconds) above which calls are considered as slow and increase the slow calls percentage. Default value is 60 seconds. 60 Integer camel.resilience4j.slow-call-rate-threshold Configures a threshold in percentage. The CircuitBreaker considers a call as slow when the call duration is greater than slowCallDurationThreshold Duration. When the percentage of slow calls is equal or greater the threshold, the CircuitBreaker transitions to open and starts short-circuiting calls. The threshold must be greater than 0 and not greater than 100. Default value is 100 percentage which means that all recorded calls must be slower than slowCallDurationThreshold. Float camel.resilience4j.wait-duration-in-open-state Configures the wait duration (in seconds) which specifies how long the CircuitBreaker should stay open, before it switches to half open. Default value is 60 seconds. 60 Integer camel.resilience4j.writable-stack-trace-enabled Enables writable stack traces. When set to false, Exception.getStackTrace returns a zero length array. This may be used to reduce log spam when the circuit breaker is open as the cause of the exceptions is already known (the circuit breaker is short-circuiting calls). true Boolean camel.rest.api-component The name of the Camel component to use as the REST API (such as swagger) If no API Component has been explicit configured, then Camel will lookup if there is a Camel component responsible for servicing and generating the REST API documentation, or if a org.apache.camel.spi.RestApiProcessorFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.api-context-path Sets a leading API context-path the REST API services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. String camel.rest.api-context-route-id Sets the route id to use for the route that services the REST API. The route will by default use an auto assigned route id. String camel.rest.api-host To use an specific hostname for the API documentation (eg swagger) This can be used to override the generated host with this configured hostname. String camel.rest.api-property Allows to configure as many additional properties for the api documentation (swagger). For example set property api.title to my cool stuff. 
Map camel.rest.api-vendor-extension Whether vendor extension is enabled in the Rest APIs. If enabled then Camel will include additional information as vendor extension (eg keys starting with x-) such as route ids, class names etc. Not all 3rd party API gateways and tools supports vendor-extensions when importing your API docs. false Boolean camel.rest.binding-mode Sets the binding mode to use. The default value is off. RestBindingMode camel.rest.client-request-validation Whether to enable validation of the client request to check whether the Content-Type and Accept headers from the client is supported by the Rest-DSL configuration of its consumes/produces settings. This can be turned on, to enable this check. In case of validation error, then HTTP Status codes 415 or 406 is returned. The default value is false. false Boolean camel.rest.component The Camel Rest component to use for the REST transport (consumer), such as netty-http, jetty, servlet, undertow. If no component has been explicit configured, then Camel will lookup if there is a Camel component that integrates with the Rest DSL, or if a org.apache.camel.spi.RestConsumerFactory is registered in the registry. If either one is found, then that is being used. String camel.rest.component-property Allows to configure as many additional properties for the rest component in use. Map camel.rest.consumer-property Allows to configure as many additional properties for the rest consumer in use. Map camel.rest.context-path Sets a leading context-path the REST services will be using. This can be used when using components such as camel-servlet where the deployed web application is deployed using a context-path. Or for components such as camel-jetty or camel-netty-http that includes a HTTP server. String camel.rest.cors-headers Allows to configure custom CORS headers. Map camel.rest.data-format-property Allows to configure as many additional properties for the data formats in use. For example set property prettyPrint to true to have json outputted in pretty mode. The properties can be prefixed to denote the option is only for either JSON or XML and for either the IN or the OUT. The prefixes are: json.in. json.out. xml.in. xml.out. For example a key with value xml.out.mustBeJAXBElement is only for the XML data format for the outgoing. A key without a prefix is a common key for all situations. Map camel.rest.enable-cors Whether to enable CORS headers in the HTTP response. The default value is false. false Boolean camel.rest.endpoint-property Allows to configure as many additional properties for the rest endpoint in use. Map camel.rest.host The hostname to use for exposing the REST service. String camel.rest.host-name-resolver If no hostname has been explicit configured, then this resolver is used to compute the hostname the REST service will be using. RestHostNameResolver camel.rest.json-data-format Name of specific json data format to use. By default json-jackson will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.port The port number to use for exposing the REST service. Notice if you use servlet component then the port number configured here does not apply, as the port number in use is the actual port number the servlet component is using. eg if using Apache Tomcat its the tomcat http port, if using Apache Karaf its the HTTP service in Karaf that uses port 8181 by default etc. 
Though in those situations setting the port number here, allows tooling and JMX to know the port number, so its recommended to set the port number to the number that the servlet engine uses. String camel.rest.producer-api-doc Sets the location of the api document (swagger api) the REST producer will use to validate the REST uri and query parameters are valid accordingly to the api document. This requires adding camel-swagger-java to the classpath, and any miss configuration will let Camel fail on startup and report the error(s). The location of the api document is loaded from classpath by default, but you can use file: or http: to refer to resources to load from file or http url. String camel.rest.producer-component Sets the name of the Camel component to use as the REST producer. String camel.rest.scheme The scheme to use for exposing the REST service. Usually http or https is supported. The default value is http. String camel.rest.skip-binding-on-error-code Whether to skip binding on output if there is a custom HTTP error code header. This allows to build custom error messages that do not bind to json / xml etc, as success messages otherwise will do. false Boolean camel.rest.use-x-forward-headers Whether to use X-Forward headers for Host and related setting. The default value is true. true Boolean camel.rest.xml-data-format Name of specific XML data format to use. By default jaxb will be used. Important: This option is only for setting a custom name of the data format, not to refer to an existing data format instance. String camel.rest.api-context-id-pattern Deprecated Sets an CamelContext id pattern to only allow Rest APIs from rest services within CamelContext's which name matches the pattern. The pattern name refers to the CamelContext name, to match on the current CamelContext only. For any other value, the pattern uses the rules from PatternHelper#matchPattern(String,String). String camel.rest.api-context-listing Deprecated Sets whether listing of all available CamelContext's with REST services in the JVM is enabled. If enabled it allows to discover these contexts, if false then only the current CamelContext is in use. false Boolean
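As a small illustration only (values are examples, not recommendations), the csimple-related options from the table above could be set in a Spring Boot application.properties file like this:
# Keep auto configuration of the csimple language enabled (the default).
camel.language.csimple.enabled=true
# Trim leading and trailing whitespace and line breaks from csimple scripts.
camel.language.csimple.trim=true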
[ "Hello USD\\{body[0].name}", "Hello USD\\{bodyAsIndex(com.foo.MyUser, 0).name}", "<plugins> <!-- generate source code for csimple languages --> <plugin> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-maven-plugin</artifactId> <version>USD{camel.version}</version> <executions> <execution> <id>generate</id> <goals> <goal>generate</goal> </goals> </execution> </executions> </plugin> <!-- include source code generated to maven sources paths --> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>build-helper-maven-plugin</artifactId> <version>3.1.0</version> <executions> <execution> <phase>generate-sources</phase> <goals> <goal>add-source</goal> <goal>add-resource</goal> </goals> <configuration> <sources> <source>src/generated/java</source> </sources> <resources> <resource> <directory>src/generated/resources</directory> </resource> </resources> </configuration> </execution> </executions> </plugin> </plugins>", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-csimple-joor</artifactId> <version>{CamelSBProjectVersion}</version> </dependency>", "Hello USD{bean:greeter(USD{body}, USD{header.counter})}", "USD{bodyAs(MyUser)?.address?.zip} > 10000", "import java.util.*; import java.util.concurrent.*; import java.util.stream.*; import org.apache.camel.*; import org.apache.camel.util.*;", "import com.foo.MyUser; import com.bar.*; import static com.foo.MyHelper.*;", "echo()=USD{bodyAs(String)} USD{bodyAs(String)}", "from(\"direct:hello\") .transform(csimple(\"Hello echo()\")) .log(\"You said USD{body}\");", ".transform(csimple(\"Hello USD{bodyAs(String)} USD{bodyAs(String)}\"))", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-core-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-csimple-language-starter
5.5.2. Adding a Member to a Running DLM Cluster
5.5.2. Adding a Member to a Running DLM Cluster The procedure for adding a member to a running DLM cluster depends on whether the cluster contains only two nodes or more than two nodes. To add a member to a running DLM cluster, follow the steps in one of the following sections according to the number of nodes in the cluster: For clusters with only two nodes - Section 5.5.2.1, "Adding a Member to a Running DLM Cluster That Contains Only Two Nodes" For clusters with more than two nodes - Section 5.5.2.2, "Adding a Member to a Running DLM Cluster That Contains More Than Two Nodes" 5.5.2.1. Adding a Member to a Running DLM Cluster That Contains Only Two Nodes To add a member to an existing DLM cluster that is currently in operation, and contains only two nodes, follow these steps: Add the node and configure fencing for it as in Section 5.5.1, "Adding a Member to a New Cluster" . Click Send to Cluster to propagate the updated configuration to other running nodes in the cluster. Use the scp command to send the updated /etc/cluster/cluster.conf file from one of the existing cluster nodes to the new node. At system-config-cluster , in the Cluster Status Tool tab, disable each service listed under Services . Stop the cluster software on the two running nodes by running the following commands at each node in this order: service rgmanager stop , if the cluster is running high-availability services ( rgmanager ) service gfs stop , if you are using Red Hat GFS service clvmd stop , if CLVM has been used to create clustered volumes service fenced stop service cman stop service ccsd stop Start cluster software on all cluster nodes (including the added one) by running the following commands in this order: service ccsd start service cman start service fenced start service clvmd start , if CLVM has been used to create clustered volumes service gfs start , if you are using Red Hat GFS service rgmanager start , if the cluster is running high-availability services ( rgmanager ) Start system-config-cluster (refer to Section 5.2, "Starting the Cluster Configuration Tool " ). At the Cluster Configuration Tool tab, verify that the configuration is correct. At the Cluster Status Tool tab verify that the nodes and services are running as expected. Note Make sure to configure other parameters that may be affected by changes in this section. Refer to Section 5.1, "Configuration Tasks" .
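As a convenience, the stop/start ordering above can be summarized in the following shell sketch, run as root on each node; skip the rgmanager, gfs, and clvmd lines if those services are not used on your cluster:
# Stop cluster software on the two running nodes, in this order:
service rgmanager stop   # only if high-availability services are running
service gfs stop         # only if Red Hat GFS is used
service clvmd stop       # only if CLVM manages clustered volumes
service fenced stop
service cman stop
service ccsd stop
# Start cluster software on all nodes, including the new member, in this order:
service ccsd start
service cman start
service fenced start
service clvmd start      # only if CLVM manages clustered volumes
service gfs start        # only if Red Hat GFS is used
service rgmanager start  # only if high-availability services are used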
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s2-add-member-running-dlm-ca
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:8127 RHSA-2024:8128 RHSA-2024:8129 Revised on 2024-10-18 15:09:13 UTC
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_red_hat_build_of_openjdk_21.0.5/openjdk-2105-advisory_openjdk
Chapter 21. Configuring time synchronization by using the timesync RHEL System Role
Chapter 21. Configuring time synchronization by using the timesync RHEL System Role With the timesync RHEL System Role, you can manage time synchronization on multiple target machines on RHEL using Red Hat Ansible Automation Platform. 21.1. The timesync RHEL System Role You can manage time synchronization on multiple target machines using the timesync RHEL System Role. The timesync role installs and configures an NTP or PTP implementation to operate as an NTP client or PTP replica in order to synchronize the system clock with NTP servers or grandmasters in PTP domains. Note that using the timesync role also facilitates the migration to chrony , because you can use the same playbook on all versions of Red Hat Enterprise Linux starting with RHEL 6 regardless of whether the system uses ntp or chrony to implement the NTP protocol. 21.2. Applying the timesync System Role for a single pool of servers The following example shows how to apply the timesync role in a situation with just one pool of servers. Warning The timesync role replaces the configuration of the given or detected provider service on the managed host. settings are lost, even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined. Prerequisites The Ansible Core package is installed on the control machine. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file which lists the systems on which you want to deploy timesync System Role. Procedure Create a new playbook.yml file with the following content: --- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: 2.rhel.pool.ntp.org pool: yes iburst: yes roles: - rhel-system-roles.timesync Optional: Verify playbook syntax. Run the playbook on your inventory file: 21.3. Applying the timesync System Role on client servers You can use the timesync role to enable Network Time Security (NTS) on NTP clients. Network Time Security (NTS) is an authentication mechanism specified for Network Time Protocol (NTP). It verifies that NTP packets exchanged between the server and client are not altered. Warning The timesync role replaces the configuration of the given or detected provider service on the managed host. settings are lost even if they are not specified in the role variables. The only preserved setting is the choice of provider if the timesync_ntp_provider variable is not defined. Prerequisites You do not have to have Red Hat Ansible Automation Platform installed on the systems on which you want to deploy the timesync solution. You have the rhel-system-roles package installed on the system from which you want to run the playbook. You have an inventory file which lists the systems on which you want to deploy the timesync System Role. The chrony NTP provider version is 4.0 or later. Procedure Create a playbook.yml file with the following content: --- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de iburst: yes nts: yes roles: - rhel-system-roles.timesync ptbtime1.ptb.de is an example of public server. You may want to use a different public server or your own server. Optional: Verify playbook syntax. Run the playbook on your inventory file: Verification Perform a test on the client machine: Check that the number of reported cookies is larger than zero. Additional resources chrony.conf(5) man page 21.4. 
timesync System Roles variables You can pass the following variable to the timesync role: timesync_ntp_servers : Role variable settings Description hostname: host.example.com Hostname or address of the server minpoll: number Minimum polling interval. Default: 6 maxpoll: number Maximum polling interval. Default: 10 iburst: yes Flag enabling fast initial synchronization. Default: no pool: yes Flag indicating that each resolved address of the hostname is a separate NTP server. Default: no nts: yes Flag to enable Network Time Security (NTS). Default: no. Supported only with chrony >= 4.0. Additional resources For a detailed reference on timesync role variables, install the rhel-system-roles package, and see the README.md or README.html files in the /usr/share/doc/rhel-system-roles/timesync directory.
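As an illustration only (the host group is from the earlier examples and the server hostname is a placeholder), several of these variables can be combined for a single server entry as follows:
---
- hosts: timesync-test
  vars:
    timesync_ntp_servers:
      - hostname: 2.rhel.pool.ntp.org   # placeholder pool hostname
        pool: yes                        # treat each resolved address as a separate NTP server
        iburst: yes                      # fast initial synchronization
        minpoll: 4                       # minimum polling interval (2^4 seconds)
        maxpoll: 8                       # maximum polling interval (2^8 seconds)
  roles:
    - rhel-system-roles.timesync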
[ "--- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: 2.rhel.pool.ntp.org pool: yes iburst: yes roles: - rhel-system-roles.timesync", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory_file /path/to/file/playbook.yml", "--- - hosts: timesync-test vars: timesync_ntp_servers: - hostname: ptbtime1.ptb.de iburst: yes nts: yes roles: - rhel-system-roles.timesync", "ansible-playbook --syntax-check playbook.yml", "ansible-playbook -i inventory_file /path/to/file/playbook.yml", "chronyc -N authdata Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen ===================================================================== ptbtime1.ptb.de NTS 1 15 256 157 0 0 8 100" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/automating_system_administration_by_using_rhel_system_roles_in_rhel_7.9/configuring-time-synchronization-by-using-the-timesync-rhel-system-role_automating-system-administration-by-using-rhel-system-roles
B.90. subversion
B.90. subversion B.90.1. RHSA-2011:0258 - Moderate: subversion security update Updated subversion packages that fix three security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. Subversion (SVN) is a concurrent version control system which enables one or more users to collaborate in developing and maintaining a hierarchy of files and directories while keeping a history of all changes. The mod_dav_svn module is used with the Apache HTTP Server to allow access to Subversion repositories via HTTP. CVE-2010-3315 An access restriction bypass flaw was found in the mod_dav_svn module. If the SVNPathAuthz directive was set to "short_circuit", certain access rules were not enforced, possibly allowing sensitive repository data to be leaked to remote users. Note that SVNPathAuthz is set to "On" by default. CVE-2010-4644 A server-side memory leak was found in the Subversion server. If a malicious, remote user performed "svn blame" or "svn log" operations on certain repository files, it could cause the Subversion server to consume a large amount of system memory. CVE-2010-4539 A NULL pointer dereference flaw was found in the way the mod_dav_svn module processed certain requests. If a malicious, remote user issued a certain type of request to display a collection of Subversion repositories on a host that has the SVNListParentPath directive enabled, it could cause the httpd process serving the request to crash. Note that SVNListParentPath is not enabled by default. All Subversion users should upgrade to these updated packages, which contain backported patches to correct these issues. After installing the updated packages, the Subversion server must be restarted for the update to take effect: restart httpd if you are using mod_dav_svn, or restart svnserve if it is used.
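For example, on a Red Hat Enterprise Linux 6 system the restart could look like the following; only the relevant command applies, and it assumes svnserve is managed by its packaged init script:
# If repositories are served through the Apache HTTP Server (mod_dav_svn):
service httpd restart
# If the standalone svnserve daemon is used instead:
service svnserve restart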
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/subversion
Chapter 5. KafkaClusterSpec schema reference
Chapter 5. KafkaClusterSpec schema reference Used in: KafkaSpec Full list of KafkaClusterSpec schema properties Configures a Kafka cluster. 5.1. listeners Use the listeners property to configure listeners to provide access to Kafka brokers. Example configuration of a plain (unencrypted) listener without authentication apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # ... listeners: - name: plain port: 9092 type: internal tls: false # ... zookeeper: # ... 5.2. config Use the config properties to configure Kafka broker options as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka documentation . However, Streams for Apache Kafka takes care of configuring and managing options related to the following, which cannot be changed: Security (encryption, authentication, and authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity Properties with the following prefixes cannot be set: advertised. authorizer. broker. controller cruise.control.metrics.reporter.bootstrap. cruise.control.metrics.topic host.name inter.broker.listener.name listener. listeners. log.dir password. port process.roles sasl. security. servers,node.id ssl. super.user zookeeper.clientCnxnSocket zookeeper.connect zookeeper.set.acl zookeeper.ssl If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka, including the following exceptions to the options configured by Streams for Apache Kafka: Any ssl configuration for supported TLS versions and cipher suites Configuration for the zookeeper.connection.timeout.ms property to set the maximum time allowed for establishing a ZooKeeper connection Cruise Control metrics properties: cruise.control.metrics.topic.num.partitions cruise.control.metrics.topic.replication.factor cruise.control.metrics.topic.retention.ms cruise.control.metrics.topic.auto.create.retries cruise.control.metrics.topic.auto.create.timeout.ms cruise.control.metrics.topic.min.insync.replicas Controller properties: controller.quorum.election.backoff.max.ms controller.quorum.election.timeout.ms controller.quorum.fetch.timeout.ms Example Kafka broker configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 zookeeper.connection.timeout.ms: 6000 # ... 5.3. brokerRackInitImage When rack awareness is enabled, Kafka broker pods use init container to collect the labels from the OpenShift cluster nodes. The container image used for this container can be configured using the brokerRackInitImage property. When the brokerRackInitImage field is missing, the following images are used in order of priority: Container image specified in STRIMZI_DEFAULT_KAFKA_INIT_IMAGE environment variable in the Cluster Operator configuration. 
registry.redhat.io/amq-streams/strimzi-rhel9-operator:2.7.0 container image. Example brokerRackInitImage configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest # ... Note Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container registry used by Streams for Apache Kafka. In this case, you should either copy the Streams for Apache Kafka images or build them from the source. If the configured image is not compatible with Streams for Apache Kafka images, it might not work properly. 5.4. logging Kafka has its own configurable loggers, which include the following: log4j.logger.org.I0Itec.zkclient.ZkClient log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger Kafka uses the Apache log4j logger implementation. Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.valueFrom.configMapKeyRef.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Both logging.valueFrom.configMapKeyRef.name and logging.valueFrom.configMapKeyRef.key properties are mandatory. A ConfigMap using the exact logging configuration specified is created with the custom resource when the Cluster Operator is running, then recreated after each reconciliation. If you do not specify a custom ConfigMap, default logging settings are used. If a specific logger value is not set, upper-level logger settings are inherited for that logger. For more information about log levels, see Apache logging services . Here we see examples of inline and external logging. The inline logging specifies the root logger level. You can also set log levels for specific classes or loggers by adding them to the loggers property. Inline logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... kafka: # ... logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG # ... Note Setting a log level to DEBUG may result in a large amount of log output and may have performance implications. External logging apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # ... logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties # ... Any available loggers that are not configured have their level set to OFF . If Kafka was deployed using the Cluster Operator, changes to Kafka logging levels are applied dynamically. If you use external logging, a rolling update is triggered when logging appenders are changed. 
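A sketch of the custom ConfigMap that the external logging example refers to might look as follows; the name and key match that example, while the log4j.properties content itself is illustrative only:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customConfigMap
data:
  kafka-log4j.properties: |
    # Illustrative log4j configuration; adjust appenders, loggers and levels as needed.
    log4j.rootLogger=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %m (%c) [%t]%n
    log4j.logger.kafka.request.logger=DEBUG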
Garbage collector (GC) Garbage collector logging can also be enabled (or disabled) using the jvmOptions property . 5.5. KafkaClusterSpec schema properties Property Property type Description version string The Kafka broker version. Defaults to the latest version. Consult the user documentation to understand the process required to upgrade or downgrade the version. metadataVersion string Added in Streams for Apache Kafka 2.7. The KRaft metadata version used by the Kafka cluster. This property is ignored when running in ZooKeeper mode. If the property is not set, it defaults to the metadata version that corresponds to the version property. replicas integer The number of pods in the cluster. This property is required when node pools are not used. image string The container image used for Kafka pods. If the property is not set, the default Kafka image version is determined based on the version configuration. The image names are specifically mapped to corresponding versions in the Cluster Operator configuration. Changing the Kafka image version does not automatically update the image versions for other components, such as Kafka Exporter. listeners GenericKafkaListener array Configures listeners of Kafka brokers. config map Kafka broker config properties with the following prefixes cannot be set: listeners, advertised., broker., listener., host.name, port, inter.broker.listener.name, sasl., ssl., security., password., log.dir, zookeeper.connect, zookeeper.set.acl, zookeeper.ssl, zookeeper.clientCnxnSocket, authorizer., super.user, cruise.control.metrics.topic, cruise.control.metrics.reporter.bootstrap.servers, node.id, process.roles, controller., metadata.log.dir, zookeeper.metadata.migration.enable (with the exception of: zookeeper.connection.timeout.ms, sasl.server.max.receive.size, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols, ssl.secure.random.implementation, cruise.control.metrics.topic.num.partitions, cruise.control.metrics.topic.replication.factor, cruise.control.metrics.topic.retention.ms, cruise.control.metrics.topic.auto.create.retries, cruise.control.metrics.topic.auto.create.timeout.ms, cruise.control.metrics.topic.min.insync.replicas, controller.quorum.election.backoff.max.ms, controller.quorum.election.timeout.ms, controller.quorum.fetch.timeout.ms). storage EphemeralStorage , PersistentClaimStorage , JbodStorage Storage configuration (disk). Cannot be updated. This property is required when node pools are not used. authorization KafkaAuthorizationSimple , KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom Authorization configuration for Kafka brokers. rack Rack Configuration of the broker.rack broker config. brokerRackInitImage string The image of the init container used for initializing the broker.rack . livenessProbe Probe Pod liveness checking. readinessProbe Probe Pod readiness checking. jvmOptions JvmOptions JVM Options for pods. jmxOptions KafkaJmxOptions JMX Options for Kafka brokers. resources ResourceRequirements CPU and memory resources to reserve. metricsConfig JmxPrometheusExporterMetrics Metrics configuration. logging InlineLogging , ExternalLogging Logging configuration for Kafka. template KafkaClusterTemplate Template for Kafka cluster resources. The template allows users to specify how the OpenShift resources are generated. tieredStorage TieredStorageCustom Configure the tiered storage feature for Kafka brokers.
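As a minimal sketch (values illustrative), the jvmOptions-based garbage collector logging mentioned at the start of this section can be switched on like this:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      gcLoggingEnabled: true   # enable GC logging for the Kafka broker JVMs
    # ...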
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 zookeeper.connection.timeout.ms: 6000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaClusterSpec-reference
Chapter 29. Uninstalling the integrated IdM DNS service from an IdM server
Chapter 29. Uninstalling the integrated IdM DNS service from an IdM server If you have more than one server with integrated DNS in an Identity Management (IdM) deployment, you might decide to remove the integrated DNS service from one of the servers. To do this, you must first decommission the IdM server completely before re-installing IdM on it, this time without the integrated DNS. Note While you can add the DNS role to an IdM server, IdM does not provide a method to remove only the DNS role from an IdM server: the ipa-dns-install command does not have an --uninstall option. Prerequisites You have integrated DNS installed on an IdM server. This is not the last integrated DNS service in your IdM topology. Procedure Identify the redundant DNS service and follow the procedure in Uninstalling an IdM server on the IdM replica that hosts this service. On the same host, follow the procedure in either Without integrated DNS, with an integrated CA as the root CA or Without integrated DNS, with an external CA as the root CA , depending on your use case.
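Before decommissioning anything, it helps to confirm which servers currently provide the DNS role, so that the service you remove really is redundant. A minimal sketch, assuming an IdM deployment where the server-role plugin is available and that you have a valid admin ticket; the commands only read information:

kinit admin
# List the IdM servers that currently hold the DNS server role
ipa server-role-find --role "DNS server"
# List all servers in the topology for comparison
ipa server-find

If only one server is returned for the DNS server role, do not proceed, because that would remove the last integrated DNS service in the topology.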
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/installing_identity_management/uninstalling-the-integrated-idm-dns-service-from-an-idm-server_installing-identity-management
Chapter 2. Understanding networking
Chapter 2. Understanding networking Cluster Administrators have several options for exposing applications that run inside a cluster to external traffic and securing network connections: Service types, such as node ports or load balancers API resources, such as Ingress and Route By default, Kubernetes allocates each pod an internal IP address for applications running within the pod. Pods and their containers can network, but clients outside the cluster do not have networking access. When you expose your application to external traffic, giving each pod its own IP address means that pods can be treated like physical hosts or virtual machines in terms of port allocation, networking, naming, service discovery, load balancing, application configuration, and migration. Note Some cloud platforms offer metadata APIs that listen on the 169.254.169.254 IP address, a link-local IP address in the IPv4 169.254.0.0/16 CIDR block. This CIDR block is not reachable from the pod network. Pods that need access to these IP addresses must be given host network access by setting the spec.hostNetwork field in the pod spec to true . If you allow a pod host network access, you grant the pod privileged access to the underlying network infrastructure. 2.1. OpenShift Container Platform DNS If you are running multiple services, such as front-end and back-end services for use with multiple pods, environment variables are created for user names, service IPs, and more so the front-end pods can communicate with the back-end services. If the service is deleted and recreated, a new IP address can be assigned to the service, and requires the front-end pods to be recreated to pick up the updated values for the service IP environment variable. Additionally, the back-end service must be created before any of the front-end pods to ensure that the service IP is generated properly, and that it can be provided to the front-end pods as an environment variable. For this reason, OpenShift Container Platform has a built-in DNS so that the services can be reached by the service DNS as well as the service IP/port. 2.2. OpenShift Container Platform Ingress Operator When you create your OpenShift Container Platform cluster, pods and services running on the cluster are each allocated their own IP addresses. The IP addresses are accessible to other pods and services running nearby but are not accessible to outside clients. The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Ingress Operator makes it possible for external clients to access your service by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources. Configurations within the Ingress Controller, such as the ability to define endpointPublishingStrategy type and internal load balancing, provide ways to publish Ingress Controller endpoints. 2.2.1. Comparing routes and Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on HAProxy , which is an open source load balancer solution. 
The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In OpenShift Container Platform, routes are generated to meet the conditions specified by the Ingress resource. 2.3. Glossary of common terms for OpenShift Container Platform networking This glossary defines common terms that are used in the networking content. authentication To control access to an OpenShift Container Platform cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an OpenShift Container Platform cluster, you must authenticate to the OpenShift Container Platform API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the OpenShift Container Platform API. AWS Load Balancer Operator The AWS Load Balancer (ALB) Operator deploys and manages an instance of the aws-load-balancer-controller . Cluster Network Operator The Cluster Network Operator (CNO) deploys and manages the cluster network components in an OpenShift Container Platform cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. custom resource (CR) A CR is extension of the Kubernetes API. You can create custom resources. DNS Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. DNS Operator The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. This enables DNS-based Kubernetes Service discovery in OpenShift Container Platform. deployment A Kubernetes resource object that maintains the life cycle of an application. domain Domain is a DNS name serviced by the Ingress Controller. egress The process of data sharing externally through a network's outbound traffic from a pod. External DNS Operator The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to OpenShift Container Platform. HTTP-based route An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. Ingress The Kubernetes Ingress resource in OpenShift Container Platform implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. Ingress Controller The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an OpenShift Container Platform cluster. 
installer-provisioned infrastructure The installation program deploys and configures the infrastructure that the cluster runs on. kubelet A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. Kubernetes NMState Operator The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the OpenShift Container Platform cluster's nodes with NMState. kube-proxy Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. load balancers OpenShift Container Platform uses load balancers for communicating from outside the cluster with services running in the cluster. MetalLB Operator As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type LoadBalancer is added to the cluster, MetalLB can add an external IP address for the service. multicast With IP multicast, data is broadcast to many IP addresses simultaneously. namespaces A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. networking Network information of a OpenShift Container Platform cluster. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. OpenShift Container Platform Ingress Operator The Ingress Operator implements the IngressController API and is the component responsible for enabling external access to OpenShift Container Platform services. pod One or more containers with shared resources, such as volume and IP addresses, running in your OpenShift Container Platform cluster. A pod is the smallest compute unit defined, deployed, and managed. PTP Operator The PTP Operator creates and manages the linuxptp services. route The OpenShift Container Platform route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. scaling Increasing or decreasing the resource capacity. service Exposes a running application on a set of pods. Single Root I/O Virtualization (SR-IOV) Network Operator The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. software-defined networking (SDN) OpenShift Container Platform uses a software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the OpenShift Container Platform cluster. Stream Control Transmission Protocol (SCTP) SCTP is a reliable message based protocol that runs on top of an IP network. taint Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. toleration You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. web console A user interface (UI) to manage OpenShift Container Platform.
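To tie the route and Ingress concepts above to concrete commands, here is a hedged sketch of exposing a service with a route from the CLI. The service name frontend and port 8443 are placeholders, not values from this chapter:

# Create an unsecured HTTP route for an assumed service named "frontend"
oc expose service frontend
# Create a TLS passthrough route instead, so the pod terminates TLS itself
oc create route passthrough frontend-tls --service=frontend --port=8443
# Inspect the generated routes and their host names
oc get routes

Which variant you choose depends on where TLS should terminate; the passthrough form matches the TLS passthrough capability described in the comparison of routes and Ingress.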
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/networking/understanding-networking
7.4. Block I/O Tuning Techniques
7.4. Block I/O Tuning Techniques This section describes more techniques for tuning block I/O performance in virtualized environments. 7.4.1. Disk I/O Throttling When several virtual machines are running simultaneously, they can interfere with system performance by using excessive disk I/O. Disk I/O throttling in KVM provides the ability to set a limit on disk I/O requests sent from virtual machines to the host machine. This can prevent a virtual machine from over-utilizing shared resources and impacting the performance of other virtual machines. Disk I/O throttling can be useful in various situations, for example when guest virtual machines belonging to different customers are running on the same host, or when quality of service guarantees are given for different guests. Disk I/O throttling can also be used to simulate slower disks. I/O throttling can be applied independently to each block device attached to a guest and supports limits on throughput and I/O operations. Use the virsh blkdeviotune command to set I/O limits for a virtual machine: Device specifies a unique target name ( <target dev='name'/> ) or source file ( <source file='name'/> ) for one of the disk devices attached to the virtual machine. Use the virsh domblklist command for a list of disk device names. Optional parameters include: total-bytes-sec The total throughput limit in bytes per second. read-bytes-sec The read throughput limit in bytes per second. write-bytes-sec The write throughput limit in bytes per second. total-iops-sec The total I/O operations limit per second. read-iops-sec The read I/O operations limit per second. write-iops-sec The write I/O operations limit per second. For example, to throttle vda on virtual_machine to 1000 I/O operations per second and 50 MB per second throughput, run this command: 7.4.2. Multi-Queue virtio-scsi Multi-queue virtio-scsi provides improved storage performance and scalability in the virtio-scsi driver. It enables each virtual CPU to have a separate queue and interrupt to use without affecting other vCPUs. 7.4.2.1. Configuring Multi-Queue virtio-scsi Multi-queue virtio-scsi is disabled by default on Red Hat Enterprise Linux 7. To enable multi-queue virtio-scsi support in the guest, add the following to the guest XML configuration, where N is the total number of vCPU queues:
[ "virsh blkdeviotune virtual_machine device --parameter limit", "virsh blkdeviotune virtual_machine vda --total-iops-sec 1000 --total-bytes-sec 52428800", "<controller type='scsi' index='0' model='virtio-scsi'> <driver queues=' N ' /> </controller>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-blockio-techniques
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_data_grid_command_line_interface/rhdg-downloads_datagrid
19.3.2. Sendmail
19.3.2. Sendmail Sendmail's core purpose, like other MTAs, is to safely transfer email among hosts, usually using the SMTP protocol. However, Sendmail is highly configurable, allowing control over almost every aspect of how email is handled, including the protocol used. Many system administrators elect to use Sendmail as their MTA due to its power and scalability. 19.3.2.1. Purpose and Limitations It is important to be aware of what Sendmail is and what it can do, as opposed to what it is not. In these days of monolithic applications that fulfill multiple roles, Sendmail may seem like the only application needed to run an email server within an organization. Technically, this is true, as Sendmail can spool mail to each users' directory and deliver outbound mail for users. However, most users actually require much more than simple email delivery. Users usually want to interact with their email using an MUA, that uses POP or IMAP , to download their messages to their local machine. Or, they may prefer a Web interface to gain access to their mailbox. These other applications can work in conjunction with Sendmail, but they actually exist for different reasons and can operate separately from one another. It is beyond the scope of this section to go into all that Sendmail should or could be configured to do. With literally hundreds of different options and rule sets, entire volumes have been dedicated to helping explain everything that can be done and how to fix things that go wrong. See the Section 19.6, "Additional Resources" for a list of Sendmail resources. This section reviews the files installed with Sendmail by default and reviews basic configuration changes, including how to stop unwanted email (spam) and how to extend Sendmail with the Lightweight Directory Access Protocol (LDAP) . 19.3.2.2. The Default Sendmail Installation In order to use Sendmail, first ensure the sendmail package is installed on your system by running, as root : In order to configure Sendmail, ensure the sendmail-cf package is installed on your system by running, as root : For more information on installing packages with Yum, see Section 8.2.4, "Installing Packages" . Before using Sendmail, the default MTA has to be switched from Postfix. For more information how to switch the default MTA see Section 19.3, "Mail Transport Agents" . The Sendmail executable is /usr/sbin/sendmail . Sendmail's lengthy and detailed configuration file is /etc/mail/sendmail.cf . Avoid editing the sendmail.cf file directly. To make configuration changes to Sendmail, edit the /etc/mail/sendmail.mc file, back up the original /etc/mail/sendmail.cf file, and use the following alternatives to generate a new configuration file: Use the included makefile in /etc/mail/ to create a new /etc/mail/sendmail.cf configuration file: All other generated files in /etc/mail (db files) will be regenerated if needed. The old makemap commands are still usable. The make command is automatically used whenever you start or restart the sendmail service. Alternatively you may use the m4 macro processor to create a new /etc/mail/sendmail.cf . The m4 macro processor is not installed by default. Before using it to create /etc/mail/sendmail.cf , install the m4 package as root: More information on configuring Sendmail can be found in Section 19.3.2.3, "Common Sendmail Configuration Changes" . Various Sendmail configuration files are installed in the /etc/mail/ directory including: access - Specifies which systems can use Sendmail for outbound email. 
domaintable - Specifies domain name mapping. local-host-names - Specifies aliases for the host. mailertable - Specifies instructions that override routing for particular domains. virtusertable - Specifies a domain-specific form of aliasing, allowing multiple virtual domains to be hosted on one machine. Several of the configuration files in /etc/mail/ , such as access , domaintable , mailertable and virtusertable , must actually store their information in database files before Sendmail can use any configuration changes. To include any changes made to these configurations in their database files, run the following command, as root : where <name> represents the name of the configuration file to be updated. You may also restart the sendmail service for the changes to take effect by running: For example, to have all emails addressed to the example.com domain delivered to [email protected] , add the following line to the virtusertable file: To finalize the change, the virtusertable.db file must be updated: Sendmail will create an updated virtusertable.db file containing the new configuration. 19.3.2.3. Common Sendmail Configuration Changes When altering the Sendmail configuration file, it is best not to edit an existing file, but to generate an entirely new /etc/mail/sendmail.cf file. Warning Before replacing or making any changes to the sendmail.cf file, create a backup copy. To add the desired functionality to Sendmail, edit the /etc/mail/sendmail.mc file as root. Once you are finished, restart the sendmail service and, if the m4 package is installed, the m4 macro processor will automatically generate a new sendmail.cf configuration file: Important The default sendmail.cf file does not allow Sendmail to accept network connections from any host other than the local computer. To configure Sendmail as a server for other clients, edit the /etc/mail/sendmail.mc file, and either change the address specified in the Addr= option of the DAEMON_OPTIONS directive from 127.0.0.1 to the IP address of an active network device or comment out the DAEMON_OPTIONS directive all together by placing dnl at the beginning of the line. When finished, regenerate /etc/mail/sendmail.cf by restarting the service The default configuration in Red Hat Enterprise Linux works for most SMTP -only sites. However, it does not work for UUCP ( UNIX-to-UNIX Copy Protocol ) sites. If using UUCP mail transfers, the /etc/mail/sendmail.mc file must be reconfigured and a new /etc/mail/sendmail.cf file must be generated. Consult the /usr/share/sendmail-cf/README file before editing any files in the directories under the /usr/share/sendmail-cf directory, as they can affect the future configuration of the /etc/mail/sendmail.cf file. 19.3.2.4. Masquerading One common Sendmail configuration is to have a single machine act as a mail gateway for all machines on the network. For example, a company may want to have a machine called mail.example.com that handles all of their email and assigns a consistent return address to all outgoing mail. In this situation, the Sendmail server must masquerade the machine names on the company network so that their return address is [email protected] instead of [email protected] . To do this, add the following lines to /etc/mail/sendmail.mc : After generating a new sendmail.cf file using the m4 macro processor, this configuration makes all mail from inside the network appear as if it were sent from example.com . 19.3.2.5. 
Stopping Spam Email spam can be defined as unnecessary and unwanted email received by a user who never requested the communication. It is a disruptive, costly, and widespread abuse of Internet communication standards. Sendmail makes it relatively easy to block new spamming techniques being employed to send junk email. It even blocks many of the more usual spamming methods by default. Main anti-spam features available in Sendmail are header checks, relaying denial (default from version 8.9), access database, and sender information checks. For example, forwarding of SMTP messages, also called relaying, has been disabled by default since Sendmail version 8.9. Before this change occurred, Sendmail directed the mail host ( x.edu ) to accept messages from one party ( y.com ) and sent them to a different party ( z.net ). Now, however, Sendmail must be configured to permit any domain to relay mail through the server. To configure relay domains, edit the /etc/mail/relay-domains file and restart Sendmail. However, users can also be sent spam from servers on the Internet. In these instances, Sendmail's access control features available through the /etc/mail/access file can be used to prevent connections from unwanted hosts. The following example illustrates how this file can be used to both block and specifically allow access to the Sendmail server: This example shows that any email sent from badspammer.com is blocked with a 550 RFC-821 compliant error code, with a message sent back. Email sent from the tux.badspammer.com sub-domain is accepted. The last line shows that any email sent from the 10.0.*.* network can be relayed through the mail server. Because the /etc/mail/access.db file is a database, use the makemap command to update any changes. Do this using the following command as root: Message header analysis allows you to reject mail based on header contents. SMTP servers store information about an email's journey in the message header. As the message travels from one MTA to another, each puts in a Received header above all the other Received headers. It is important to note that this information may be altered by spammers. The above examples only represent a small part of what Sendmail can do in terms of allowing or blocking access. See the /usr/share/sendmail-cf/README file for more information and examples. Since Sendmail calls the Procmail MDA when delivering mail, it is also possible to use a spam filtering program, such as SpamAssassin, to identify and file spam for users. See Section 19.4.2.6, "Spam Filters" for more information about using SpamAssassin. 19.3.2.6. Using Sendmail with LDAP Using LDAP is a very quick and powerful way to find specific information about a particular user from a much larger group. For example, an LDAP server can be used to look up a particular email address from a common corporate directory by the user's last name. In this kind of implementation, LDAP is largely separate from Sendmail, with LDAP storing the hierarchical user information and Sendmail only being given the result of LDAP queries in pre-addressed email messages. However, Sendmail supports a much greater integration with LDAP, where it uses LDAP to replace separately maintained files, such as /etc/aliases and /etc/mail/virtusertables, on different mail servers that work together to support a medium- to enterprise-level organization.
In short, LDAP abstracts the mail routing level from Sendmail and its separate configuration files to a powerful LDAP cluster that can be leveraged by many different applications. The current version of Sendmail contains support for LDAP. To extend the Sendmail server using LDAP, first get an LDAP server, such as OpenLDAP, running and properly configured. Then edit the /etc/mail/sendmail.mc file to include the following: Note This is only for a very basic configuration of Sendmail with LDAP. The configuration can differ greatly from this depending on the implementation of LDAP, especially when configuring several Sendmail machines to use a common LDAP server. Consult /usr/share/sendmail-cf/README for detailed LDAP routing configuration instructions and examples. Next, recreate the /etc/mail/sendmail.cf file by running the m4 macro processor and again restarting Sendmail. See Section 19.3.2.3, "Common Sendmail Configuration Changes" for instructions. For more information on LDAP, see Section 20.1, "OpenLDAP".
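As a recap of the configuration workflow described in this section, the following hedged sketch backs up the generated configuration, regenerates it from sendmail.mc, rebuilds the access database, and restarts the service. Run the commands as root and adjust the edits to your site; the editor choice is arbitrary:

cp /etc/mail/sendmail.cf /etc/mail/sendmail.cf.bak   # back up the current generated file
vi /etc/mail/sendmail.mc                             # make changes in the .mc file, not in sendmail.cf
make all -C /etc/mail/                               # regenerate sendmail.cf and the db files
makemap hash /etc/mail/access < /etc/mail/access     # rebuild the access database after editing it
service sendmail restart                             # restart Sendmail so the changes take effect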
[ "~]# yum install sendmail", "~]# yum install sendmail-cf", "~]# make all -C /etc/mail/", "~]# yum install m4", "~]# makemap hash /etc/mail/ <name> < /etc/mail/ <name>", "~]# service sendmail restart", "@example.com [email protected]", "~]# makemap hash /etc/mail/virtusertable < /etc/mail/virtusertable", "~]# service sendmail restart", "~]# service sendmail restart", "FEATURE(always_add_domain)dnl FEATURE(`masquerade_entire_domain')dnl FEATURE(`masquerade_envelope')dnl FEATURE(`allmasquerade')dnl MASQUERADE_AS(`example.com.')dnl MASQUERADE_DOMAIN(`example.com.')dnl MASQUERADE_AS(example.com)dnl", "~]# service sendmail restart", "badspammer.com ERROR:550 \"Go away and do not spam us anymore\" tux.badspammer.com OK 10.0 RELAY", "~]# makemap hash /etc/mail/access < /etc/mail/access", "LDAPROUTE_DOMAIN(' yourdomain.com ')dnl FEATURE('ldap_routing')dnl" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-email-mta-sendmail
14.12.6. Uploading and Downloading Storage Volumes
14.12.6. Uploading and Downloading Storage Volumes This section describes how to upload and download information to and from storage volumes. 14.12.6.1. Uploading contents to a storage volume The vol-upload --pool pool-or-uuid --offset bytes --length bytes vol-name-or-key-or-path local-file command uploads the contents of the specified local-file to a storage volume. The command requires --pool pool-or-uuid, which is the name or UUID of the storage pool the volume is in. It also requires vol-name-or-key-or-path, which is the name, key, or path of the volume to upload the data to. The --offset option is the position in the storage volume at which to start writing the data. --length length dictates an upper limit for the amount of data to be uploaded. An error will occur if the local-file is greater than the specified --length.
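As a hedged illustration of the synopsis above, the following command uploads part of a local file into a volume. The pool name default, the volume name vol1.img, and the file path are placeholders, not values from this guide:

# Upload the first 10 MiB of a local image into the volume "vol1.img"
# in the storage pool "default", starting at the beginning of the volume
virsh vol-upload --pool default --offset 0 --length 10485760 vol1.img /tmp/local-file.img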
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-storage_volume_commands-uploading_and_downloading_storage_volumes
Chapter 13. Scanning pods for vulnerabilities
Chapter 13. Scanning pods for vulnerabilities Using the Red Hat Quay Container Security Operator, you can access vulnerability scan results from the OpenShift Container Platform web console for container images used in active pods on the cluster. The Red Hat Quay Container Security Operator: Watches containers associated with pods on all or specified namespaces Queries the container registry where the containers came from for vulnerability information, provided an image's registry is running image scanning (such as Quay.io or a Red Hat Quay registry with Clair scanning) Exposes vulnerabilities via the ImageManifestVuln object in the Kubernetes API Using the instructions here, the Red Hat Quay Container Security Operator is installed in the openshift-operators namespace, so it is available to all namespaces on your OpenShift Container Platform cluster. 13.1. Running the Red Hat Quay Container Security Operator You can start the Red Hat Quay Container Security Operator from the OpenShift Container Platform web console by selecting and installing that Operator from the Operator Hub, as described here. Prerequisites Have administrator privileges to the OpenShift Container Platform cluster Have containers that come from a Red Hat Quay or Quay.io registry running on your cluster Procedure Navigate to Operators OperatorHub and select Security . Select the Container Security Operator, then select Install to go to the Create Operator Subscription page. Check the settings. All namespaces and automatic approval strategy are selected, by default. Select Install . The Container Security Operator appears after a few moments on the Installed Operators screen. Optional: You can add custom certificates to the Red Hat Quay Container Security Operator. In this example, create a certificate named quay.crt in the current directory. Then run the following command to add the cert to the Red Hat Quay Container Security Operator: USD oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators If you added a custom certificate, restart the Operator pod for the new certs to take effect. Open the OpenShift Dashboard ( Home Overview ). A link to Quay Image Security appears under the status section, with a listing of the number of vulnerabilities found so far. Select the link to see a Quay Image Security breakdown , as shown in the following figure: You can do one of two things at this point to follow up on any detected vulnerabilities: Select the link to the vulnerability. You are taken to the container registry that the container came from, where you can see information about the vulnerability. The following figure shows an example of detected vulnerabilities from a Quay.io registry: Select the namespaces link to go to the ImageManifestVuln screen, where you can see the name of the selected image and all namespaces where that image is running. The following figure indicates that a particular vulnerable image is running in the quay-enterprise namespace: At this point, you know what images are vulnerable, what you need to do to fix those vulnerabilities, and every namespace that the image was run in. So you can: Alert anyone running the image that they need to correct the vulnerability Stop the images from running by deleting the deployment or other object that started the pod that the image is in Note that if you do delete the pod, it may take several minutes for the vulnerability to reset on the dashboard. 13.2. 
Querying image vulnerabilities from the CLI Using the oc command, you can display information about vulnerabilities detected by the Red Hat Quay Container Security Operator. Prerequisites Be running the Red Hat Quay Container Security Operator on your OpenShift Container Platform instance Procedure To query for detected container image vulnerabilities, type: USD oc get vuln --all-namespaces Example output NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s To display details for a particular vulnerability, provide the vulnerability name and its namespace to the oc describe command. This example shows an active container whose image includes an RPM package with a vulnerability: USD oc describe vuln --namespace mynamespace sha256.ac50e3752... Example output Name: sha256.ac50e3752... Namespace: quay-enterprise ... Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries...
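Following on from the queries above, these hedged examples narrow the listing to a single namespace and stop a vulnerable image by deleting the deployment that runs it. The namespace quay-enterprise and the deployment name are placeholders:

# List detected vulnerabilities in one namespace only
oc get vuln -n quay-enterprise
# Remove the deployment that runs the vulnerable image (example name)
oc delete deployment my-vulnerable-app -n quay-enterprise

As noted above, it can take several minutes for the dashboard to reflect the change after the pod is gone.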
[ "oc create secret generic container-security-operator-extra-certs --from-file=quay.crt -n openshift-operators", "oc get vuln --all-namespaces", "NAMESPACE NAME AGE default sha256.ca90... 6m56s skynet sha256.ca90... 9m37s", "oc describe vuln --namespace mynamespace sha256.ac50e3752", "Name: sha256.ac50e3752 Namespace: quay-enterprise Spec: Features: Name: nss-util Namespace Name: centos:7 Version: 3.44.0-3.el7 Versionformat: rpm Vulnerabilities: Description: Network Security Services (NSS) is a set of libraries" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/security_and_compliance/pod-vulnerability-scan
3.5. The Metadata Daemon (lvmetad)
3.5. The Metadata Daemon (lvmetad) LVM can optionally use a central metadata cache, implemented through a daemon ( lvmetad ) and a udev rule. The metadata daemon has two main purposes: it improves performance of LVM commands and it allows udev to automatically activate logical volumes or entire volume groups as they become available to the system. LVM is configured to make use of the daemon when the global/use_lvmetad variable is set to 1 in the lvm.conf configuration file. This is the default value. For information on the lvm.conf configuration file, see Appendix B, The LVM Configuration Files . Note The lvmetad daemon is not currently supported across the nodes of a cluster, and requires that the locking type be local file-based locking. When you use the lvmconf --enable-cluster/--disable-cluster command, the lvm.conf file is configured appropriately, including the use_lvmetad setting (which should be 0 for locking_type=3 ). Note, however, that in a Pacemaker cluster, the ocf:heartbeat:clvm resource agent itself sets these parameters as part of the start procedure. If you change the value of use_lvmetad from 1 to 0, you must reboot or stop the lvmetad service manually with the following command. Normally, each LVM command issues a disk scan to find all relevant physical volumes and to read volume group metadata. However, if the metadata daemon is running and enabled, this expensive scan can be skipped. Instead, the lvmetad daemon scans each device only once, when it becomes available, using udev rules. This can save a significant amount of I/O and reduce the time required to complete LVM operations, particularly on systems with many disks. When a new volume group is made available at runtime (for example, through hotplug or iSCSI), its logical volumes must be activated in order to be used. When the lvmetad daemon is enabled, the activation/auto_activation_volume_list option in the lvm.conf configuration file can be used to configure a list of volume groups or logical volumes (or both) that should be automatically activated. Without the lvmetad daemon, a manual activation is necessary. Note When the lvmetad daemon is running, the filter = setting in the /etc/lvm/lvm.conf file does not apply when you execute the pvscan --cache device command. To filter devices, you need to use the global_filter = setting. Devices that fail the global filter are not opened by LVM and are never scanned. You may need to use a global filter, for example, when you use LVM devices in VMs and you do not want the contents of the devices in the VMs to be scanned by the physical host.
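The following is a small sketch of the checks described above; the device path in the filter comment is an example only:

# Confirm whether the metadata daemon is enabled in lvm.conf
grep -E '^[[:space:]]*use_lvmetad' /etc/lvm/lvm.conf
# Example global_filter entry (illustrative): reject /dev/sdb so LVM never scans it
#   devices { global_filter = [ "r|^/dev/sdb|" ] }
# After changing use_lvmetad from 1 to 0, stop the daemon manually
systemctl stop lvm2-lvmetad.service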
[ "systemctl stop lvm2-lvmetad.service" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/metadatadaemon
Chapter 32. Security
Chapter 32. Security Configurations that depend on chrooting in user-non-searchable paths now work properly In Red Hat Enterprise Linux 7.3, the chroot process in the OpenSSH tool had been changed to help harden the SELinux system policy, and root UID was dropped before performing chroot . Consequently, existing configurations that depend on chrooting in user-non-searchable paths stopped working. With this update of the openssh packages, the change has been reverted. Additionally, the problem has been fixed in the SELinux system policy by allowing confined users to use OpenSSH chroot if the administrator enables the selinuxuser_use_ssh_chroot boolean. The described configurations now work in the same way as in Red Hat Enterprise Linux 7.2. (BZ# 1418062 ) firewalld now supports all ICMP types Previously, the Internet Control Message Protocol (ICMP) type list was not complete. As a consequence, some ICMP types such as packet-too-big could not be blocked or allowed. With this update, support for additional ICMP types has been added, and the firewalld service daemon now allows to handle all ICMP types. (BZ# 1401978 ) docker.pp replaced with container.pp in selinux-policy Prior to this update, the container.te file in the container-selinux package contained Docker interfaces, which point to the equivalent container interfaces, and also the docker.if file. Consequently, when compiling the container.te file, the compiler warned about duplicate interfaces. With this update, the docker.pp file in the selinux-policy package has been replaced with the container.pp file, and the warning no longer occurs in the described scenario. (BZ# 1386916 ) Recently-added kernel classes and permission defined in selinux-policy Previously, several new classes and permissions had been added to the kernel. As a consequence, these classes and permissions that were not defined in the system policy caused SELinux denials or warnings. With this update, all recently-added kernel classes and permissions have been defined in the selinux-policy package, and the denials and warnings no longer occur. (BZ#1368057) nss now properly handles PKCS#12 files Previously, when using the pk12util tool to list certificates in a PKCS#12 file with strong ciphers using PKCS#5 v2.0 format, there was no output. Additionally, when using pk12util to list certificates in a PKCS#12 file with the SHA-2 Message Authentication Code (MAC), a MAC error was reported, but no certificates were printed. With this update, importing and exporting PKCS#12 files has been changed to be compatible with the OpenSSL handling, and PKCS#12 files are now processed properly in the described scenarios. (BZ# 1220573 ) OpenSCAP now produces only useful messages and warnings Previously, default scan output settings have been changed, and debug messages were also printed to standard output. As a consequence, the OpenSCAP output was full of errors and warnings. The output was hard to read and the SCAP Workbench was unable to handle those messages, too. With this update, the change of default output setting has been reverted, and OpenSCAP now produces useful output. (BZ# 1447341 ) AIDE now logs in the syslog format With this update, the AIDE detection system with the syslog_format option logs in the rsyslog -compatible format. Multiline logs cause problems while parsing on the remote rsyslog server. With the new syslog_format option, AIDE is now able to log with every change logged as a single line. 
(BZ#1377215) Installations with the OpenSCAP security-hardening profile now proceed Prior to this update, typos in the scap-security-guide package caused the Anaconda installation program to exit and restart a machine. Consequently, it was not possible to select any of the security-hardened profiles such as Criminal Justice Information Services (CJIS) during the Red Hat Enterprise Linux 7.4 installation process. The typos have been fixed, and installations with the OpenSCAP security-hardening profile now proceed. (BZ#1450731) OpenSCAP and SSG are now able to scan RHV-H systems correctly Previously, using the OpenSCAP and SCAP Security Guide (SSG) tools to scan a Red Hat Enterprise Linux system working as a Red Hat Virtualization Host (RHV-H) returned Not Applicable results. With this update, OpenSCAP and SSG correctly identify RHV-H as Red Hat Enterprise Linux, which enables OpenSCAP and SSG to scan RHV-H systems properly. (BZ# 1420038 ) OpenSCAP now handles also uncompressed XML files in a CVE OVAL feed Previously, the OpenSCAP tool was able to handle only compressed CVE OVAL files from a feed. As a consequence, the CVE OVAL feed provided by Red Hat cannot be used as a base for vulnerability scanning. With this update, OpenSCAP supports not only ZIP and BZIP2 files but also uncompressed XML files in a CVE OVAL feed, and the CVE OVAL-based scanning works properly without additional steps. (BZ# 1440192 )
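To show how two of the fixes above are used in practice, here is a hedged sketch; whether these exact settings are appropriate depends on your environment:

# Allow confined users to use OpenSSH chroot, as described above (persistent)
setsebool -P selinuxuser_use_ssh_chroot 1
# Block an ICMP type that firewalld can now handle, such as packet-too-big
firewall-cmd --add-icmp-block=packet-too-big
firewall-cmd --runtime-to-permanent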
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/bug_fixes_security
Chapter 11. Notifications overview
Chapter 11. Notifications overview Red Hat Quay supports adding notifications to a repository for various events that occur in the repository's lifecycle. 11.1. Notification actions Notifications are added to the Events and Notifications section of the Repository Settings page. They are also added to the Notifications window, which can be found by clicking the bell icon in the navigation pane of Red Hat Quay. Red Hat Quay notifications can be setup to be sent to a User , Team , or the entire organization . Notifications can be delivered by one of the following methods. E-mail notifications E-mails are sent to specified addresses that describe the specified event. E-mail addresses must be verified on a per-repository basis. Webhook POST notifications An HTTP POST call is made to the specified URL with the event's data. For more information about event data, see "Repository events description". When the URL is HTTPS, the call has an SSL client certificate set from Red Hat Quay. Verification of this certificate proves that the call originated from Red Hat Quay. Responses with the status code in the 2xx range are considered successful. Responses with any other status code are considered failures and result in a retry of the webhook notification. Flowdock notifications Posts a message to Flowdock. Hipchat notifications Posts a message to HipChat. Slack notifications Posts a message to Slack. 11.2. Creating notifications by using the UI Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. Procedure Navigate to a repository on Red Hat Quay. In the navigation pane, click Settings . In the Events and Notifications category, click Create Notification to add a new notification for a repository event. The Create notification popup box appears. On the Create repository popup box, click the When this event occurs box to select an event. You can select a notification for the following types of events: Push to Repository Image build failed Image build queued Image build started Image build success Image build cancelled Image expiry trigger After you have selected the event type, select the notification method. The following methods are supported: Quay Notification E-mail Notification Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on the method that you choose, you must include additional information. For example, if you select E-mail , you are required to include an e-mail address and an optional notification title. After selecting an event and notification method, click Create Notification . 11.2.1. Creating an image expiration notification Image expiration event triggers can be configured to notify users through email, Slack, webhooks, and so on, and can be configured at the repository level. Triggers can be set for images expiring in any amount of days, and can work in conjunction with the auto-pruning feature. Image expiration notifications can be set by using the Red Hat Quay v2 UI or by using the createRepoNotification API endpoint. Prerequisites FEATURE_GARBAGE_COLLECTION: true is set in your config.yaml file. Optional. FEATURE_AUTO_PRUNE: true is set in your config.yaml file. Procedure On the Red Hat Quay v2 UI, click Repositories . Select the name of a repository. Click Settings Events and notifications . Click Create notification . The Create notification popup box appears. Click the Select event... box, then click Image expiry trigger . 
In the When the image is due to expiry in days box, enter the number of days before the image's expiration when you want to receive an alert. For example, use 1 for 1 day. In the Select method... box, click one of the following: E-mail Webhook POST Flowdock Team Notification HipChat Room Notification Slack Notification Depending on which method you chose, include the necessary data. For example, if you chose Webhook POST, include the Webhook URL. Optional. Provide a POST JSON body template. Optional. Provide a Title for your notification. Click Submit. You are returned to the Events and notifications page, and the notification now appears. Optional. You can set the NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES variable in your config.yaml file. With this field set, notifications are sent automatically if there are any expiring images. By default, this is set to 300, or 5 hours; however, it can be adjusted as warranted. NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1 1 By default, this field is set to 300, or 5 hours. Verification Click the menu kebab and select Test Notification. The following message is returned: Test Notification Queued A test version of this notification has been queued and should appear shortly Depending on which method you chose, check your e-mail, webhook address, Slack channel, and so on. The information sent should look similar to the following example: { "repository": "sample_org/busybox", "namespace": "sample_org", "name": "busybox", "docker_url": "quay-server.example.com/sample_org/busybox", "homepage": "http://quay-server.example.com/repository/sample_org/busybox", "tags": [ "latest", "v1" ], "expiring_in": "1 days" } 11.3. Creating notifications by using the API Use the following procedure to add notifications. Prerequisites You have created a repository. You have administrative privileges for the repository. You have Created an OAuth access token. You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following POST /api/v1/repository/{repository}/notification command to create a notification on your repository: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ --data '{ "event": "<event>", "method": "<method>", "config": { "<config_key>": "<config_value>" }, "eventConfig": { "<eventConfig_key>": "<eventConfig_value>" } }' \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/ This command does not return output in the CLI.
Instead, you can enter the following GET /api/v1/repository/{repository}/notification/{uuid} command to obtain information about the repository notification: {"uuid": "240662ea-597b-499d-98bb-2b57e73408d6", "title": null, "event": "repo_push", "method": "quay_notification", "config": {"target": {"name": "quayadmin", "kind": "user", "is_robot": false, "avatar": {"name": "quayadmin", "hash": "b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc", "color": "#17becf", "kind": "user"}}}, "event_config": {}, "number_of_failures": 0} You can test your repository notification by entering the following POST /api/v1/repository/{repository}/notification/{uuid}/test command: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test Example output {} You can reset repository notification failures to 0 by entering the following POST /api/v1/repository/{repository}/notification/{uuid} command: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid> Enter the following DELETE /api/v1/repository/{repository}/notification/{uuid} command to delete a repository notification: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid> This command does not return output in the CLI. Instead, you can enter the following GET /api/v1/repository/{repository}/notification/ command to retrieve a list of all notifications: USD curl -X GET -H "Authorization: Bearer <bearer_token>" -H "Accept: application/json" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification Example output {"notifications": []} 11.4. Repository events description The following sections detail repository events. Repository Push A successful push of one or more images was made to the repository: Dockerfile Build Queued The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build started The following example is a response from a Dockerfile Build that has been queued into the Build system. Note Responses can differ based on the use of optional attributes. Dockerfile Build successfully completed The following example is a response from a Dockerfile Build that has been successfully completed by the Build system. Note This event occurs simultaneously with a Repository Push event for the built image or images. Dockerfile Build failed The following example is a response from a Dockerfile Build that has failed. Dockerfile Build cancelled The following example is a response from a Dockerfile Build that has been cancelled. Vulnerability detected The following example is a response from a Dockerfile Build has detected a vulnerability in the repository.
[ "NOTIFICATION_TASK_RUN_MINIMUM_INTERVAL_MINUTES: 300 1", "Test Notification Queued A test version of this notification has been queued and should appear shortly", "{ \"repository\": \"sample_org/busybox\", \"namespace\": \"sample_org\", \"name\": \"busybox\", \"docker_url\": \"quay-server.example.com/sample_org/busybox\", \"homepage\": \"http://quay-server.example.com/repository/sample_org/busybox\", \"tags\": [ \"latest\", \"v1\" ], \"expiring_in\": \"1 days\" }", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" --data '{ \"event\": \"<event>\", \"method\": \"<method>\", \"config\": { \"<config_key>\": \"<config_value>\" }, \"eventConfig\": { \"<eventConfig_key>\": \"<eventConfig_value>\" } }' https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/", "{\"uuid\": \"240662ea-597b-499d-98bb-2b57e73408d6\", \"title\": null, \"event\": \"repo_push\", \"method\": \"quay_notification\", \"config\": {\"target\": {\"name\": \"quayadmin\", \"kind\": \"user\", \"is_robot\": false, \"avatar\": {\"name\": \"quayadmin\", \"hash\": \"b28d563a6dc76b4431fc7b0524bbff6b810387dac86d9303874871839859c7cc\", \"color\": \"#17becf\", \"kind\": \"user\"}}}, \"event_config\": {}, \"number_of_failures\": 0}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>/test", "{}", "curl -X POST -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<repository>/notification/<uuid>", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification/<uuid>", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" -H \"Accept: application/json\" https://<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>/notification", "{\"notifications\": []}", "{ \"name\": \"repository\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"updated_tags\": [ \"latest\" ] }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"repo\": \"test\", \"trigger_metadata\": { \"default_branch\": \"master\", \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional }, \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" } } }, \"is_manual\": false, \"manual_user\": null, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\" }", "{ \"build_id\": \"a8cc247a-a662-4fee-8dcb-7d7e822b71ba\", \"trigger_kind\": 
\"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"50bc599\", \"trigger_metadata\": { //Optional \"commit\": \"50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/50bc5996d4587fd4b2d8edc4af652d4cec293c42\", \"date\": \"2019-03-06T14:10:14+11:00\", \"message\": \"test build\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/a8cc247a-a662-4fee-8dcb-7d7e822b71ba\" }", "{ \"build_id\": \"296ec063-5f86-4706-a469-f0a400bf9df2\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"b7f7d2b\", \"image_id\": \"sha256:0339f178f26ae24930e9ad32751d6839015109eabdf1c25b3b0f2abf8934f6cb\", \"trigger_metadata\": { \"commit\": \"b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": \"https://github.com/dgangaia/test/commit/b7f7d2b948aacbe844ee465122a85a9368b2b735\", \"date\": \"2019-03-06T12:48:24+11:00\", \"message\": \"adding 5\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/296ec063-5f86-4706-a469-f0a400bf9df2\", \"manifest_digests\": [ \"quay.io/dgangaia/test@sha256:2a7af5265344cc3704d5d47c4604b1efcbd227a7a6a6ff73d6e4e08a27fd7d99\", \"quay.io/dgangaia/test@sha256:569e7db1a867069835e8e97d50c96eccafde65f08ea3e0d5debaf16e2545d9d1\" ] }", "{ \"build_id\": \"5346a21d-3434-4764-85be-5be1296f293c\", \"trigger_kind\": \"github\", //Optional \"name\": \"test\", \"repository\": \"dgangaia/test\", \"docker_url\": \"quay.io/dgangaia/test\", \"error_message\": \"Could not find or parse Dockerfile: unknown instruction: GIT\", \"namespace\": \"dgangaia\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", //Optional \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"6ae9a86\", \"trigger_metadata\": { //Optional \"commit\": \"6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { //Optional \"url\": 
\"https://github.com/dgangaia/test/commit/6ae9a86930fc73dd07b02e4c5bf63ee60be180ad\", \"date\": \"2019-03-06T14:18:16+11:00\", \"message\": \"failed build test\", \"committer\": { //Optional \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", //Optional \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" //Optional }, \"author\": { //Optional \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", //Optional \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" //Optional } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/5346a21d-3434-4764-85be-5be1296f293c\" }", "{ \"build_id\": \"cbd534c5-f1c0-4816-b4e3-55446b851e70\", \"trigger_kind\": \"github\", \"name\": \"test\", \"repository\": \"dgangaia/test\", \"namespace\": \"dgangaia\", \"docker_url\": \"quay.io/dgangaia/test\", \"trigger_id\": \"38b6e180-9521-4ff7-9844-acf371340b9e\", \"docker_tags\": [ \"master\", \"latest\" ], \"build_name\": \"cbce83c\", \"trigger_metadata\": { \"commit\": \"cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"ref\": \"refs/heads/master\", \"default_branch\": \"master\", \"git_url\": \"[email protected]:dgangaia/test.git\", \"commit_info\": { \"url\": \"https://github.com/dgangaia/test/commit/cbce83c04bfb59734fc42a83aab738704ba7ec41\", \"date\": \"2019-03-06T14:27:53+11:00\", \"message\": \"testing cancel build\", \"committer\": { \"username\": \"web-flow\", \"url\": \"https://github.com/web-flow\", \"avatar_url\": \"https://avatars3.githubusercontent.com/u/19864447?v=4\" }, \"author\": { \"username\": \"dgangaia\", \"url\": \"https://github.com/dgangaia\", \"avatar_url\": \"https://avatars1.githubusercontent.com/u/43594254?v=4\" } } }, \"homepage\": \"https://quay.io/repository/dgangaia/test/build/cbd534c5-f1c0-4816-b4e3-55446b851e70\" }", "{ \"repository\": \"dgangaia/repository\", \"namespace\": \"dgangaia\", \"name\": \"repository\", \"docker_url\": \"quay.io/dgangaia/repository\", \"homepage\": \"https://quay.io/repository/dgangaia/repository\", \"tags\": [\"latest\", \"othertag\"], \"vulnerability\": { \"id\": \"CVE-1234-5678\", \"description\": \"This is a bad vulnerability\", \"link\": \"http://url/to/vuln/info\", \"priority\": \"Critical\", \"has_fix\": true } }" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/repository-notifications
Chapter 2. Enabling applications that are connected to OpenShift AI
Chapter 2. Enabling applications that are connected to OpenShift AI You must enable SaaS-based applications before using them with Red Hat OpenShift AI. On-cluster applications are enabled automatically. Typically, you can install or enable applications connected to OpenShift AI by using one of the following methods: Enabling the application from the Explore page on the OpenShift AI dashboard, as documented in the following procedure. Installing the Operator for the application from OperatorHub. OperatorHub is a web console for cluster administrators to discover and select Operators to install on their cluster. It is deployed by default in OpenShift ( Installing from OperatorHub using the web console ). Note Deployments containing Operators installed from OperatorHub may not be fully supported by Red Hat. Installing the Operator for the application from Red Hat Marketplace ( Install Operators ). Installing the application as an Add-on to your OpenShift Dedicated cluster ( Adding Operators to an OpenShift Dedicated cluster ) or ROSA cluster ( Adding Operators to a ROSA cluster ). For some applications (such as Jupyter), the API endpoint is available on the tile for the application on the Enabled page of OpenShift AI. Certain applications cannot be accessed directly from their tiles; for example, OpenVINO provides notebook images for use in Jupyter and does not provide an endpoint link from its tile. Additionally, it may be useful to store these endpoint URLs as environment variables for easy reference in a notebook environment. Some independent software vendor (ISV) applications must be installed in specific namespaces. In these cases, the tile for the application in the OpenShift AI dashboard specifies the required namespace. To help you get started quickly, you can access the application's learning resources and documentation on the Resources page, or on the Enabled page by clicking the relevant link on the tile for the application. Prerequisites You have logged in to Red Hat OpenShift AI. Your administrator has installed or configured the application on your OpenShift cluster. Procedure On the OpenShift AI home page, click Explore . On the Explore page, find the tile for the application that you want to enable. Click Enable on the application tile. If prompted, enter the application's service key and then click Connect . Click Enable to confirm that you want to enable the application. Verification The application that you enabled appears on the Enabled page. The API endpoint is displayed on the tile for the application on the Enabled page.
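For example, to keep an endpoint available to scripts and notebook sessions, you could export it as an environment variable before starting your work; this is only a sketch, and the variable name and URL below are hypothetical placeholders rather than values provided by OpenShift AI:
export MODEL_API_ENDPOINT="https://example-app.apps.mycluster.example.com/api"   # hypothetical endpoint copied from the application tile
echo "$MODEL_API_ENDPOINT"                                                        # confirm the value is available in the current session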
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/working_with_connected_applications/enabling-applications-connected_connected-apps
probe::nfsd.dispatch
probe::nfsd.dispatch Name probe::nfsd.dispatch - NFS server receives an operation from client Synopsis nfsd.dispatch Values xid transmission id version nfs version proto transfer protocol proc procedure number client_ip the ip address of client prog program number
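As a minimal illustration of using this probe (a sketch only; exact field availability and formatting depend on your kernel and tapset version), you can print the numeric values from a one-line SystemTap script:
stap -e 'probe nfsd.dispatch { printf("nfsd dispatch: prog=%d vers=%d proc=%d xid=%d\n", prog, version, proc, xid) }'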
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-dispatch
37.3. Additional Resources
37.3. Additional Resources For more information on kernel modules and their utilities, refer to the following resources. 37.3.1. Installed Documentation lsmod man page - description and explanation of its output. insmod man page - description and list of command line options. modprobe man page - description and list of command line options. rmmod man page - description and list of command line options. modinfo man page - description and list of command line options. /usr/share/doc/kernel-doc- <version> /Documentation/kbuild/modules.txt - how to compile and use kernel modules.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/kernel_modules-additional_resources
4.3.2. Adding Physical Volumes to a Volume Group
4.3.2. Adding Physical Volumes to a Volume Group To add additional physical volumes to an existing volume group, use the vgextend command. The vgextend command increases a volume group's capacity by adding one or more free physical volumes. The following command adds the physical volume /dev/sdf1 to the volume group vg1 .
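A minimal end-to-end sketch, reusing the device name from the example above and assuming the partition has not yet been initialized as a physical volume:
pvcreate /dev/sdf1        # initialize the partition as an LVM physical volume
vgextend vg1 /dev/sdf1    # add the free physical volume to the volume group
vgs vg1                   # verify the increased size and physical volume count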
[ "vgextend vg1 /dev/sdf1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_grow
3.14. Software Collection Kernel Module Support
3.14. Software Collection Kernel Module Support Because Linux kernel modules are normally tied to a particular version of the Linux kernel, you must be careful when you package kernel modules into a Software Collection. This is because the package management system on Red Hat Enterprise Linux does not automatically update or install an updated version of the kernel module if an updated version of the Linux kernel is installed. To make packaging the kernel modules into the Software Collection easier, see the following recommendations. Ensure that: the name of your kernel module package includes the kernel version, the tag Requires , which can be found in your kernel module spec file, includes the kernel version and revision (in the format kernel- version - revision ).
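As a rough, hypothetical spec-file fragment illustrating these recommendations (the module name, kernel version, and revision below are placeholders, not values taken from this guide):
Name:     %{?scl_prefix}kmod-examplemod-3.10.0-1160.el7
Requires: kernel-3.10.0-1160.el7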
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-software_collection_kernel_module_support
3.3. Confined and Unconfined Users
3.3. Confined and Unconfined Users Each Linux user is mapped to an SELinux user using SELinux policy. This allows Linux users to inherit the restrictions on SELinux users. This Linux user mapping is seen by running the semanage login -l command as root: In Red Hat Enterprise Linux, Linux users are mapped to the SELinux __default__ login by default, which is mapped to the SELinux unconfined_u user. The following line defines the default mapping: The following procedure demonstrates how to add a new Linux user to the system and how to map that user to the SELinux unconfined_u user. It assumes that the root user is running unconfined, as it does by default in Red Hat Enterprise Linux: Procedure 3.4. Mapping a New Linux User to the SELinux unconfined_u User As root, enter the following command to create a new Linux user named newuser : To assign a password to the Linux newuser user. Enter the following command as root: Log out of your current session, and log in as the Linux newuser user. When you log in, the pam_selinux PAM module automatically maps the Linux user to an SELinux user (in this case, unconfined_u ), and sets up the resulting SELinux context. The Linux user's shell is then launched with this context. Enter the following command to view the context of a Linux user: Note If you no longer need the newuser user on your system, log out of the Linux newuser 's session, log in with your account, and run the userdel -r newuser command as root. It will remove newuser along with their home directory. Confined and unconfined Linux users are subject to executable and writable memory checks, and are also restricted by MCS or MLS. To list the available SELinux users, enter the following command: Note that the seinfo command is provided by the setools-console package, which is not installed by default. If an unconfined Linux user executes an application that SELinux policy defines as one that can transition from the unconfined_t domain to its own confined domain, the unconfined Linux user is still subject to the restrictions of that confined domain. The security benefit of this is that, even though a Linux user is running unconfined, the application remains confined. Therefore, the exploitation of a flaw in the application can be limited by the policy. Similarly, we can apply these checks to confined users. Each confined Linux user is restricted by a confined user domain. The SELinux policy can also define a transition from a confined user domain to its own target confined domain. In such a case, confined Linux users are subject to the restrictions of that target confined domain. The main point is that special privileges are associated with the confined users according to their role. In the table below, you can see examples of basic confined domains for Linux users in Red Hat Enterprise Linux: Table 3.1. SELinux User Capabilities User Role Domain X Window System su or sudo Execute in home directory and /tmp (default) Networking sysadm_u sysadm_r sysadm_t yes su and sudo yes yes staff_u staff_r staff_t yes only sudo yes yes user_u user_r user_t yes no yes yes guest_u guest_r guest_t no no yes no xguest_u xguest_r xguest_t yes no yes Firefox only Linux users in the user_t , guest_t , and xguest_t domains can only run set user ID (setuid) applications if SELinux policy permits it (for example, passwd ). These users cannot run the su and sudo setuid applications, and therefore cannot use these applications to become root. 
Linux users in the sysadm_t , staff_t , user_t , and xguest_t domains can log in using the X Window System and a terminal. By default, Linux users in the staff_t , user_t , guest_t , and xguest_t domains can execute applications in their home directories and /tmp . To prevent them from executing applications, which inherit users' permissions, in directories they have write access to, set the guest_exec_content and xguest_exec_content booleans to off (a brief setsebool sketch appears later in this section). This helps prevent flawed or malicious applications from modifying users' files. See Section 6.6, "Booleans for Users Executing Applications" for information about allowing and preventing users from executing applications in their home directories and /tmp . The only network access Linux users in the xguest_t domain have is Firefox connecting to web pages. Note that system_u is a special user identity for system processes and objects. It must never be associated with a Linux user. Also, unconfined_u and root are unconfined users. For these reasons, they are not included in the aforementioned table of SELinux user capabilities. Alongside the already mentioned SELinux users, there are special roles that can be mapped to those users. These roles determine what SELinux allows the user to do: webadm_r can only administrate SELinux types related to the Apache HTTP Server. See Section 13.2, "Types" for further information. dbadm_r can only administrate SELinux types related to the MariaDB database and the PostgreSQL database management system. See Section 20.2, "Types" and Section 21.2, "Types" for further information. logadm_r can only administrate SELinux types related to the syslog and auditlog processes. secadm_r can only administrate SELinux. auditadm_r can only administrate processes related to the audit subsystem. To list all available roles, enter the following command: As mentioned before, the seinfo command is provided by the setools-console package, which is not installed by default. 3.3.1. The sudo Transition and SELinux Roles In certain cases, confined users need to perform administrative tasks that require root privileges. To do so, such a confined user has to gain a confined administrator SELinux role using the sudo command. The sudo command is used to give trusted users administrative access. When users precede an administrative command with sudo , they are prompted for their own password. Then, when they have been authenticated and assuming that the command is permitted, the administrative command is executed as if they were the root user. As shown in Table 3.1, "SELinux User Capabilities" , only the staff_u and sysadm_u SELinux confined users are permitted to use sudo by default. When such users execute a command with sudo , their role can be changed based on the rules specified in the /etc/sudoers configuration file or in a respective file in the /etc/sudoers.d/ directory if such a file exists. For more information about sudo , see the Gaining Privileges section in the Red Hat Enterprise Linux 7 System Administrator's Guide . Procedure 3.5. Configuring the sudo Transition This procedure shows how to set up sudo to transition a newly-created SELinux_user_u confined user from a default_role_r to an administrator_r administrator role. Note To configure a confined administrator role for an already existing SELinux user, skip the first two steps. Create a new SELinux user and specify the default SELinux role and a supplementary confined administrator role for this user: Set up the default SELinux policy context file.
For example, to have the same SELinux rules as the staff_u SELinux user, copy the staff_u context file: Map the newly-created SELinux user to an existing Linux user: Create a new configuration file with the same name as your Linux user in the /etc/sudoers.d/ directory and add the following string to it: Use the restorecon utility to relabel the linux_user home directory: Log in to the system as the newly-created Linux user and check that the user is labeled with the default SELinux role: Run sudo to change the user's SELinux context to the supplementary SELinux role as specified in /etc/sudoers.d/ linux_user . The -i option used with sudo causes an interactive shell to be executed: To better understand the placeholders, such as default_role_r or administrator_r , see the following example. Example 3.1. Configuring the sudo Transition This example creates a new SELinux user confined_u with the default assigned role staff_r and with sudo configured to change the role of confined_u from staff_r to webadm_r . Enter all the following commands as the root user in the sysadm_r or unconfined_r role. Log in to the system as the newly-created Linux user and check that the user is labeled with the default SELinux role:
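Returning to the guest_exec_content and xguest_exec_content booleans mentioned earlier in this section, a minimal sketch of disabling them persistently and confirming the change:
setsebool -P guest_exec_content off     # -P makes the change persist across reboots
setsebool -P xguest_exec_content off
getsebool -a | grep exec_content        # verify that both booleans now report off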
[ "~]# semanage login -l Login Name SELinux User MLS/MCS Range Service __default__ unconfined_u s0-s0:c0.c1023 * root unconfined_u s0-s0:c0.c1023 * system_u system_u s0-s0:c0.c1023 *", "__default__ unconfined_u s0-s0:c0.c1023", "~]# useradd newuser", "~]# passwd newuser Changing password for user newuser. New UNIX password: Enter a password Retype new UNIX password: Enter the same password again passwd: all authentication tokens updated successfully.", "[newuser@localhost ~]USD id -Z unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023", "~]USD seinfo -u Users: 8 sysadm_u system_u xguest_u root guest_u staff_u user_u unconfined_u", "~]USD seinfo -r", "~]# semanage user -a -r s0-s0:c0.c1023 -R \" default_role_r administrator_r \" SELinux_user_u", "~]# cp /etc/selinux/targeted/contexts/users/staff_u /etc/selinux/targeted/contexts/users/ SELinux_user_u", "semanage login -a -s SELinux_user_u -rs0:c0.c1023 linux_user", "~]# echo \" linux_user ALL=(ALL) TYPE= administrator_t ROLE= administrator_r /bin/bash \" > /etc/sudoers.d/ linux_user", "~]# restorecon -FR -v /home/ linux_user", "~]USD id -Z SELinux_user_u : default_role_r : SELinux_user_t :s0:c0.c1023", "~]USD sudo -i ~]# id -Z SELinux_user_u : administrator_r : administrator_t :s0:c0.c1023", "~]# semanage user -a -r s0-s0:c0.c1023 -R \"staff_r webadm_r\" confined_u ~]# cp /etc/selinux/targeted/contexts/users/staff_u /etc/selinux/targeted/contexts/users/confined_u ~]# semanage login -a -s confined_u -rs0:c0.c1023 linux_user ~]# restorecon -FR -v /home/linux_user ~]# echo \" linux_user ALL=(ALL) ROLE=webadm_r TYPE=webadm_t /bin/bash \" > /etc/sudoers.d/linux_user", "~]USD id -Z confined_u:staff_r:staff_t:s0:c0.c1023 ~]USD sudo -i ~]# id -Z confined_u:webadm_r:webadm_t:s0:c0.c1023" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/sect-security-enhanced_linux-targeted_policy-confined_and_unconfined_users
13.5. Resizing a Partition with fdisk
13.5. Resizing a Partition with fdisk The fdisk utility allows you to create and manipulate GPT, MBR, Sun, SGI, and BSD partition tables. On disks with a GUID Partition Table (GPT), using the parted utility is recommended, as fdisk GPT support is in an experimental phase. Before resizing a partition, back up the data stored on the file system and test the procedure, as the only way to change a partition size using fdisk is by deleting and recreating the partition. Important The partition you are resizing must be the last partition on a particular disk. Red Hat only supports extending and resizing LVM partitions. Procedure 13.4. Resizing a Partition The following procedure is provided only for reference. To resize a partition using fdisk : Unmount the device: Run fdisk disk_name . For example: Use the p option to determine the line number of the partition to be deleted. Use the d option to delete a partition. If there is more than one partition available, fdisk prompts you to provide a number of the partition to delete: Use the n option to create a partition and follow the prompts. Allow enough space for any future resizing. The fdisk default behavior (press Enter ) is to use all space on the device. You can specify the end of the partition by sectors, or specify a human-readable size by using + <size> <suffix> , for example +500M, or +10G. Red Hat recommends using the human-readable size specification if you do not want to use all free space, as fdisk aligns the end of the partition with the physical sectors. If you specify the size by providing an exact number (in sectors), fdisk does not align the end of the partition. Set the partition type to LVM: Write the changes with the w option when you are sure the changes are correct, as errors can cause instability with the selected partition. Run e2fsck on the device to check for consistency: Mount the device: For more information, see the fdisk (8) manual page.
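Because the resize works by deleting and recreating the partition, it can also be worth saving the current partition table so that it can be restored if something goes wrong; a minimal sketch using sfdisk (the backup file name is arbitrary):
sfdisk --dump /dev/vda > vda-partition-table.backup    # save the current layout to a file
sfdisk /dev/vda < vda-partition-table.backup           # restore the saved layout later, if needed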
[ "umount /dev/vda", "fdisk /dev/vda Welcome to fdisk (util-linux 2.23.2). Changes will remain in memory only, until you decide to write them. Be careful before using the write command. Command (m for help):", "Command (m for help): p Disk /dev/vda: 16.1 GB, 16106127360 bytes, 31457280 sectors Units = sectors of 1 * 512 = 512 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk label type: dos Disk identifier: 0x0006d09a Device Boot Start End Blocks Id System /dev/vda1 * 2048 1026047 512000 83 Linux /dev/vda2 1026048 31457279 15215616 8e Linux LVM", "Command (m for help): d Partition number (1,2, default 2): 2 Partition 2 is deleted", "Command (m for help): n Partition type: p primary (1 primary, 0 extended, 3 free) e extended Select (default p): *Enter* Using default response p Partition number (2-4, default 2): *Enter* First sector (1026048-31457279, default 1026048): *Enter* Using default value 1026048 Last sector, +sectors or +size{K,M,G} (1026048-31457279, default 31457279): +500M Partition 2 of type Linux and of size 500 MiB is set", "Command (m for help): t Partition number (1,2, default 2): *Enter* Hex code (type L to list all codes): 8e Changed type of partition 'Linux' to 'Linux LVM'", "e2fsck /dev/vda e2fsck 1.41.12 (17-May-2010) Pass 1:Checking inodes, blocks, and sizes Pass 2:Checking directory structure Pass 3:Checking directory connectivity Pass 4:Checking reference counts Pass 5:Checking group summary information ext4-1:11/131072 files (0.0% non-contiguous),27050/524128 blocks", "mount /dev/vda" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/s2-disk-storage-parted-resize-part
Chapter 8. Migrating virtual machines from OpenStack
Chapter 8. Migrating virtual machines from OpenStack 8.1. Adding an OpenStack source provider You can add an OpenStack source provider by using the Red Hat OpenShift web console. Important When you migrate an image-based VM from an OpenStack provider, a snapshot is created for the image that is attached to the source VM and the data from the snapshot is copied over to the target VM. This means that the target VM will have the same state as that of the source VM at the time the snapshot was created. Procedure In the Red Hat OpenShift web console, click Migration Providers for virtualization . Click Create Provider . Click OpenStack . Specify the following fields: Provider resource name : Name of the source provider. URL : URL of the OpenStack Identity (Keystone) endpoint. For example, http://controller:5000/v3 . Authentication type : Choose one of the following methods of authentication and supply the information related to your choice. For example, if you choose Application credential ID as the authentication type, the Application credential ID and the Application credential secret fields become active, and you need to supply the ID and the secret. Application credential ID Application credential ID : OpenStack application credential ID Application credential secret : OpenStack application credential Secret Application credential name Application credential name : OpenStack application credential name Application credential secret : OpenStack application credential Secret Username : OpenStack username Domain : OpenStack domain name Token with user ID Token : OpenStack token User ID : OpenStack user ID Project ID : OpenStack project ID Token with user Name Token : OpenStack token Username : OpenStack username Project : OpenStack project Domain name : OpenStack domain name Password Username : OpenStack username Password : OpenStack password Project : OpenStack project Domain : OpenStack domain name Choose one of the following options for validating CA certificates: Use a custom CA certificate : Migrate after validating a custom CA certificate. Use the system CA certificate : Migrate after validating the system CA certificate. Skip certificate validation : Migrate without validating a CA certificate. To use a custom CA certificate, leave the Skip certificate validation switch toggled to left, and either drag the CA certificate to the text box or browse for it and click Select . To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty. To skip certificate validation, toggle the Skip certificate validation switch to the right. Optional: Ask MTV to fetch a custom CA certificate from the provider's API endpoint URL. Click Fetch certificate from URL . The Verify certificate window opens. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm . If not, click Cancel , and then, enter the correct certificate information manually. Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint. Click Create provider to add and save the provider. The provider appears in the list of providers. Optional: Add access to the UI of the provider: On the Providers page, click the provider. The Provider details page opens. Click the Edit icon under External UI web link . Enter the link and click Save . Note If you do not enter a link, MTV attempts to calculate the correct link. 
If MTV succeeds, the hyperlink of the field points to the calculated link. If MTV does not succeed, the field remains empty. 8.2. Adding an OpenShift Virtualization destination provider You can use a Red Hat OpenShift Virtualization provider as both a source provider and destination provider. Specifically, the host cluster that is automatically added as an OpenShift Virtualization provider can be used as both a source provider and a destination provider. You can also add another OpenShift Virtualization destination provider to the Red Hat OpenShift web console in addition to the default OpenShift Virtualization destination provider, which is the cluster where you installed MTV. You can migrate VMs from the cluster that MTV is deployed on to another cluster, or from a remote cluster to the cluster that MTV is deployed on. Prerequisites You must have an OpenShift Virtualization service account token with cluster-admin privileges. Procedure In the Red Hat OpenShift web console, click Migration Providers for virtualization . Click Create Provider . Click OpenShift Virtualization . Specify the following fields: Provider resource name : Name of the source provider URL : URL of the endpoint of the API server Service account bearer token : Token for a service account with cluster-admin privileges If both URL and Service account bearer token are left blank, the local OpenShift cluster is used. Choose one of the following options for validating CA certificates: Use a custom CA certificate : Migrate after validating a custom CA certificate. Use the system CA certificate : Migrate after validating the system CA certificate. Skip certificate validation : Migrate without validating a CA certificate. To use a custom CA certificate, leave the Skip certificate validation switch toggled to left, and either drag the CA certificate to the text box or browse for it and click Select . To use the system CA certificate, leave the Skip certificate validation switch toggled to the left, and leave the CA certificate text box empty. To skip certificate validation, toggle the Skip certificate validation switch to the right. Optional: Ask MTV to fetch a custom CA certificate from the provider's API endpoint URL. Click Fetch certificate from URL . The Verify certificate window opens. If the details are correct, select the I trust the authenticity of this certificate checkbox, and then, click Confirm . If not, click Cancel , and then, enter the correct certificate information manually. Once confirmed, the CA certificate will be used to validate subsequent communication with the API endpoint. Click Create provider to add and save the provider. The provider appears in the list of providers. 8.3. Selecting a migration network for an OpenShift Virtualization provider You can select a default migration network for an OpenShift Virtualization provider in the Red Hat OpenShift web console to improve performance. The default migration network is used to transfer disks to the namespaces in which it is configured. If you do not select a migration network, the default migration network is the pod network, which might not be optimal for disk transfer. Note You can override the default migration network of the provider by selecting a different network when you create a migration plan. Procedure In the Red Hat OpenShift web console, click Migration > Providers for virtualization . Click the OpenShift Virtualization provider whose migration network you want to change. When the Providers detail page opens: Click the Networks tab.
Click Set default transfer network . Select a default transfer network from the list and click Save . 8.4. Creating a migration plan Use the Red Hat OpenShift web console to create a migration plan. Specify the source provider, the virtual machines (VMs) you want to migrate, and other plan details. Warning Do not include virtual machines with guest-initiated storage connections, such as Internet Small Computer Systems Interface (iSCSI) connections or Network File System (NFS) mounts. These require either additional planning before migration or reconfiguration after migration. This prevents concurrent disk access to the storage the guest points to. Important A plan cannot contain more than 500 VMs or 500 disks. Procedure In the Red Hat OpenShift web console, click Plans for virtualization and then click Create Plan . The Create migration plan wizard opens to the Select source provider interface. Select the source provider of the VMs you want to migrate. The Select virtual machines interface opens. Select the VMs you want to migrate and click Next . The Create migration plan pane opens. It displays the source provider's name and suggestions for a target provider and namespace, a network map, and a storage map. Enter the Plan name . To change the Target provider , the Target namespace , or elements of the Network map or the Storage map , select an item from the relevant list. To add either a Network map or a Storage map , click the + sign and add a mapping. Click Create migration plan . MTV validates the migration plan, and the Plan details page opens, indicating whether the plan is ready for use or contains an error. The details of the plan are listed, and you can edit the items you filled in on the page. If you make any changes, MTV validates the plan again. Check the following items in the Settings section of the page: Transfer Network : The network used to transfer the VMs to OpenShift Virtualization. By default, this is the default transfer network of the provider. Verify that the transfer network is in the selected target namespace. To edit the transfer network, click the Edit icon, choose a different transfer network from the list in the window that opens, and click Save . You can configure an OpenShift network in the OpenShift web console by clicking Networking > NetworkAttachmentDefinitions . To learn more about the different types of networks OpenShift supports, see Additional Networks in OpenShift Container Platform . If you want to adjust the maximum transmission unit (MTU) of the OpenShift transfer network, you must also change the MTU of the VMware migration network. For more information, see Selecting a migration network for a VMware source provider . Target namespace : Destination namespace to be used by all the migrated VMs. By default, this is the current or active namespace. To edit the namespace, click the Edit icon, choose a different target namespace from the list in the window that opens, and click Save . If your plan is valid, you can do one of the following: Run the plan now by clicking Start migration . Run the plan later by selecting it on the Plans for virtualization page and following the procedure in Running a migration plan . 8.5. Running a migration plan You can run a migration plan and view its progress in the Red Hat OpenShift web console. Prerequisites A valid migration plan. Procedure In the Red Hat OpenShift web console, click Migration Plans for virtualization .
The Plans list displays the source and target providers, the number of virtual machines (VMs) being migrated, the status, the date that the migration started, and the description of each plan. Click Start beside a migration plan to start the migration. Click Start in the confirmation window that opens. The plan's Status changes to Running , and the migration's progress is displayed. Warm migration only: The precopy stage starts. Click Cutover to complete the migration. Optional: Click the links in the migration's Status to see its overall status and the status of each VM: The link on the left indicates whether the migration failed, succeeded, or is ongoing. It also reports the number of VMs whose migration succeeded, failed, or was canceled. The link on the right opens the Virtual Machines tab of the Plan Details page. For each VM, the tab displays the following data: The name of the VM The start and end times of the migration The amount of data copied A progress pipeline for the VM's migration Warning vMotion, including svMotion, and relocation must be disabled for VMs that are being imported to avoid data corruption. Optional: To view your migration's logs, either as it is running or after it is completed, perform the following actions: Click the Virtual Machines tab. Click the arrow ( > ) to the left of the virtual machine whose migration progress you want to check. The VM's details are displayed. In the Pods section, in the Pod links column, click the Logs link. The Logs tab opens. Note Logs are not always available. The following are common reasons for logs not being available: The migration is from OpenShift Virtualization to OpenShift Virtualization. In this case, virt-v2v is not involved, so no pod is required. No pod was created. The pod was deleted. The migration failed before running the pod. To see the raw logs, click the Raw link. To download the logs, click the Download link. 8.6. Migration plan options On the Plans for virtualization page of the Red Hat OpenShift web console, you can click the Options menu beside a migration plan to access the following options: Edit Plan : Edit the details of a migration plan. If the plan is running or has completed successfully, you cannot edit the following options: All properties on the Settings section of the Plan details page. For example, warm or cold migration, target namespace, and preserved static IPs. The plan's mapping on the Mappings tab. The hooks listed on the Hooks tab. Start migration : Active only if relevant. Restart migration : Restart a migration that was interrupted. Before choosing this option, make sure there are no error messages. If there are, you need to edit the plan. Cutover : Warm migrations only. Active only if relevant. Clicking Cutover opens the Cutover window, which supports the following options: Set cutover : Set the date and time for a cutover. Remove cutover : Cancel a scheduled cutover. Active only if relevant. Duplicate Plan : Create a new migration plan with the same virtual machines (VMs), parameters, mappings, and hooks as an existing plan. You can use this feature for the following tasks: Migrate VMs to a different namespace. Edit an archived migration plan. Edit a migration plan with a different status, for example, failed, canceled, running, critical, or ready. Archive Plan : Delete the logs, history, and metadata of a migration plan. The plan cannot be edited or restarted. It can only be viewed, duplicated, or deleted. Note Archive Plan is irreversible. 
However, you can duplicate an archived plan. Delete Plan : Permanently remove a migration plan. You cannot delete a running migration plan. Note Delete Plan is irreversible. Deleting a migration plan does not remove temporary resources. To remove temporary resources, archive the plan first before deleting it. 8.7. Canceling a migration You can cancel the migration of some or all virtual machines (VMs) while a migration plan is in progress by using the Red Hat OpenShift web console. Procedure In the Red Hat OpenShift web console, click Plans for virtualization . Click the name of a running migration plan to view the migration details. Select one or more VMs and click Cancel . Click Yes, cancel to confirm the cancellation. In the Migration details by VM list, the status of the canceled VMs is Canceled . The unmigrated and the migrated virtual machines are not affected. You can restart a canceled migration by clicking Restart beside the migration plan on the Migration plans page.
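If you prefer the CLI, you can also watch plan and migration progress with oc; this is a sketch that assumes MTV runs in the default openshift-mtv namespace, and the resource names may need to be fully qualified (for example, plans.forklift.konveyor.io) if other Operators define similarly named resources:
oc get plans -n openshift-mtv                      # list migration plans and whether they are ready
oc get migrations -n openshift-mtv                 # list migration runs and their status
oc describe plan <plan_name> -n openshift-mtv      # inspect the conditions reported for a specific plan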
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html/installing_and_using_the_migration_toolkit_for_virtualization/migrating-osp_ostack
Preface
Preface Important To function properly, GNOME requires your system to support 3D acceleration . This includes bare metal systems, as well as hypervisor solutions such as VMware . If GNOME does not start or performs poorly on your VMware virtual machine (VM), see Why does the GUI fail to start on my VMware virtual machine? (Red Hat Knowledgebase)
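A quick way to check whether 3D acceleration is actually available is to inspect the OpenGL renderer in use; this is only a rough check, and the glxinfo tool (typically provided by the glx-utils package) may need to be installed first:
glxinfo | grep -E "direct rendering|OpenGL renderer"    # a renderer such as llvmpipe indicates software rendering only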
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/pr01
Chapter 1. The LVM Logical Volume Manager
Chapter 1. The LVM Logical Volume Manager This chapter provides a summary of the features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. This chapter also provides a high-level overview of the components of the Logical Volume Manager (LVM). 1.1. New and Changed Features This section lists features of the LVM logical volume manager that are new since the initial release of Red Hat Enterprise Linux 7. 1.1.1. New and Changed Features for Red Hat Enterprise Linux 7.1 Red Hat Enterprise Linux 7.1 includes the following documentation and feature updates and changes. The documentation for thinly-provisioned volumes and thinly-provisioned snapshots has been clarified. Additional information about LVM thin provisioning is now provided in the lvmthin (7) man page. For general information on thinly-provisioned logical volumes, see Section 2.3.4, "Thinly-Provisioned Logical Volumes (Thin Volumes)" . For information on thinly-provisioned snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . This manual now documents the lvm dumpconfig command in Section B.2, "The lvmconfig Command" . Note that as of the Red Hat Enterprise Linux 7.2 release, this command was renamed lvmconfig , although the old format continues to work. This manual now documents LVM profiles in Section B.3, "LVM Profiles" . This manual now documents the lvm command in Section 3.6, "Displaying LVM Information with the lvm Command" . In the Red Hat Enterprise Linux 7.1 release, you can control activation of thin pool snapshots with the -k and -K options of the lvcreate and lvchange commands, as documented in Section 4.4.20, "Controlling Logical Volume Activation" . This manual documents the --force argument of the vgimport command. This allows you to import volume groups that are missing physical volumes and subsequently run the vgreduce --removemissing command. For information on the vgimport command, refer to Section 4.3.15, "Moving a Volume Group to Another System" . This manual documents the --mirrorsonly argument of the vgreduce command. This allows you to remove only the logical volumes that are mirror images from a physical volume that has failed. For information on using this option, refer to Section 4.3.15, "Moving a Volume Group to Another System" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.2. New and Changed Features for Red Hat Enterprise Linux 7.2 Red Hat Enterprise Linux 7.2 includes the following documentation and feature updates and changes. Many LVM processing commands now accept the -S or --select option to define selection criteria for those commands. LVM selection criteria are documented in the new appendix Appendix C, LVM Selection Criteria . This document provides basic procedures for creating cache logical volumes in Section 4.4.8, "Creating LVM Cache Logical Volumes" . The troubleshooting chapter of this document includes a new section, Section 6.7, "Duplicate PV Warnings for Multipathed Devices" . As of the Red Hat Enterprise Linux 7.2 release, the lvm dumpconfig command was renamed lvmconfig , although the old format continues to work. This change is reflected throughout this document. In addition, small technical corrections and clarifications have been made throughout the document. 1.1.3. New and Changed Features for Red Hat Enterprise Linux 7.3 Red Hat Enterprise Linux 7.3 includes the following documentation and feature updates and changes. LVM supports RAID0 segment types.
RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . You can report information about physical volumes, volume groups, logical volumes, physical volume segments, and logical volume segments all at once with the lvm fullreport command. For information on this command and its capabilities, see the lvm-fullreport (8) man page. LVM supports log reports, which contain a log of operations, messages, and per-object status with complete object identification collected during LVM command execution. For an example of an LVM log report, see Section 4.8.6, "Command Log Reporting (Red Hat Enterprise Linux 7.3 and later)" . For further information about the LVM log report, see the lvmreport (7) man page. You can use the --reportformat option of the LVM display commands to display the output in JSON format. For an example of output displayed in JSON format, see Section 4.8.5, "JSON Format Output (Red Hat Enterprise Linux 7.3 and later)" . You can now configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. For information on historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . In addition, small technical corrections and clarifications have been made throughout the document. 1.1.4. New and Changed Features for Red Hat Enterprise Linux 7.4 Red Hat Enterprise Linux 7.4 includes the following documentation and feature updates and changes. Red Hat Enterprise Linux 7.4 provides support for RAID takeover and RAID reshaping. For a summary of these features, see Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" and Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" .
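As a small illustration of the selection criteria and JSON reporting mentioned above (the size threshold is an arbitrary example):
lvs -S 'size>1g'            # report only logical volumes larger than 1 GiB
lvs --reportformat json     # print the logical volume report in JSON format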
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lvm_overview
13.4. Using virtual list view control to request a contiguous subset of a large search result
13.4. Using virtual list view control to request a contiguous subset of a large search result Directory Server supports the LDAP virtual list view control. This control enables an LDAP client to request a contiguous subset of a large search result. For example, you have stored an address book with 100.000 entries in Directory Server. By default, a query for all entries returns all entries at once. This is a resource and time-consuming operation, and clients often do not require the whole data set because, if the user scrolls through the results, only a partial set is visible. However, if the client uses the VLV control, the server only returns a subset and, for example, if the user scrolls in the client application, the server returns more entries. This reduces the load on the server, and the client does not need to store and process all data at once. VLV also improves the performance of server-sorted searches when all search parameters are fixed. Directory Server pre-computes the search results within the VLV index. Therefore, the VLV index is much more efficient than retrieving the results and sorting them afterwards. In Directory Server, the VLV control is always available. However, if you use it in a large directory, a VLV index, also called browsing index, can significantly improve the speed. Directory Server does not maintain VLV indexes for attributes, such as for standard indexes. The server generates VLV indexes dynamically based on attributes set in entries and the location of those entries in the directory tree. Unlike standard entries, VLV entries are special entries in the database. 13.4.1. How the VLV control works in ldapsearch commands Typically, you use the virtual list view (VLV) feature in LDAP client applications. However, for example for testing purposes, you can use the ldapsearch utility to request only partial results. To use the VLV feature in ldapsearch commands, specify the -E option for both the sss (server-side sorting) and vlv search extensions: The sss search extension has the following syntax: The vlv search extension has the following syntax: before sets the number of entries returned before the targeted one. after sets the number of entries returned after the targeted one. index , count , and value help to determine the target entry. If you set value , the target entry is the first one having its first sorting attribute starting with the value. Otherwise, you set count to 0 , and the target entry is determined by the index value (starting from 1). If the count value is higher than 0 , the target entry is determined by the ratio index * number of entries / count . Example 13.1. Output of an ldapsearch command with VLV search extension The following command searches in ou=People,dc=example,dc=com . The server then sorts the results by the cn attribute and returns the uid attributes of the 70th entry together with one entry before and two entries after the offset. For additional details, see the -E parameter description in the ldapsearch (1) man page. 13.4.2. Enabling unauthenticated users to use the VLV control By default, the access control instruction (ACI) in the oid=2.16.840.1.113730.3.4.9,cn=features,cn=config entry enables only authenticated users to use the VLV control. To enable also non-authenticated users to use the VLV control, update the ACI by changing userdn = "ldap:///all" to userdn = "ldap:///anyone" . 
Procedure Update the ACI in oid=2.16.840.1.113730.3.4.9,cn=features,cn=config : Verification Perform a query with the VLV control without specifying a bind user: This command requires that the server allows anonymous binds. If the command succeeds but returns no entries, run the query again with a bind user to ensure that the query works when using authentication. 13.4.3. Creating a VLV index using the command line to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called a browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites Your client applications use the VLV control. Client applications need to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Create the VLV search entry: This command uses the following options: --name sets the name of the search entry. This can be any name. --search-base sets the base DN for the VLV index. Directory Server creates the VLV index on this entry. --search-scope sets the scope of the search to run for entries in the VLV index. You can set this option to 0 (base search), 1 (one-level search), or 2 (subtree search). --search-filter sets the filter Directory Server applies when it creates the VLV index. Only entries that match this filter become part of the index. userRoot is the name of the database in which to create the entry. Create the index entry: This command uses the following options: --index-name sets the name of the index entry. This can be any name. --parent-name sets the name of the VLV search entry and must match the name you set in the previous step. --sort sets the attribute names and their sort order. Separate the attributes with spaces. --index-it causes Directory Server to automatically start an index task after the entry is created. dc=example,dc=com is the suffix of the database in which to create the entry. Verification Verify the successful creation of the VLV index in the /var/log/dirsrv/slapd-instance_name/errors file: Use the VLV control in an ldapsearch command to query only specific records from the directory: This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . For additional details, see the -E parameter description in the ldapsearch (1) man page. 13.4.4. Creating a VLV index using the web console to improve the speed of VLV queries Follow this procedure to create a virtual list view (VLV) index, also called a browsing index, for entries in ou=People,dc=example,dc=com that contain a mail attribute and have the objectClass attribute set to person . Prerequisites Your client applications use the VLV control. Client applications need to query a contiguous subset of a large search result. The directory contains a large number of entries. Procedure Open the Directory Server user interface in the web console. See Section 1.4, "Logging Into Directory Server Using the Web Console" . Navigate to Database Suffixes dc=example,dc=com VLV Indexes Click Create VLV Index and fill the fields: Figure 13.1. Creating a VLV Index Using the Web Console Enter the attribute names, and click Add Sort Index . Select Index VLV on Save . Click Save VLV Index .
Verification Navigate to Monitoring Logging Errors Log . Use the VLV control in an ldapsearch command to query only specific records from the directory: This example assumes you have entries consecutively named uid=user001 to at least uid=user072 in ou=People,dc=example,dc=com . For additional details, see the -E parameter description in the ldapsearch (1) man page.
[ "ldapsearch ... -E 'sss=attribute_list' -E 'vlv=query_options'", "[!]sss=[-]<attr[:OID]>[/[-]<attr[:OID]>...]", "[!]vlv=<before>/<after>(/<offset>/<count>|:<value>)", "ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com uid: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com uid: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com uid: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com uid: user072 search result search: 2 result: 0 Success control: 1.2.840.113556.1.4.474 false MIQAAAADCgEA sortResult: (0) Success control: 2.16.840.1.113730.3.4.10 false MIQAAAALAgFGAgMAnaQKAQA= vlvResult: pos=70 count=40356 context= (0) Success numResponses: 5 numEntries: 4 Press [before/after(/offset/count|:value)] Enter for the next window.", "ldapmodify -D \"cn=Directory Manager\" -W -H ldap://server.example.com -x dn: oid=2.16.840.1.113730.3.4.9,cn=features,cn=config changetype: modify replace: aci aci: (targetattr != \"aci\")(version 3.0; acl \"VLV Request Control\"; allow( read, search, compare, proxy ) userdn = \"ldap:///anyone\";)", "ldapsearch -H ldap://server.example.com -b \"ou=People,dc=example,dc=com\" -s one -x -E 'sss=cn' -E 'vlv=1/2/70/0' uid", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend vlv-index add-search --name \" VLV People \" --search-base \" ou=People,dc=example,dc=com \" --search-filter \" (&(objectClass=person)(mail=*)) \" --search-scope 2 userRoot", "dsconf -D \"cn=Directory Manager\" ldap://server.example.com backend vlv-index add-index --index-name \" VLV People - cn sn \" --parent-name \" VLV People \" --sort \" cn sn \" --index-it dc=example,dc=com", "[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). [26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.", "ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \" ou=People,dc=example,dc=com \" -s one -x -E ' sss=cn ' -E ' vlv=1/2/70/0 ' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072", "[26/Nov/2021:11:32:59.001988040 +0100] - INFO - bdb_db2index - userroot: Indexing VLV: VLV People - cn sn [26/Nov/2021:11:32:59.507092414 +0100] - INFO - bdb_db2index - userroot: Indexed 1000 entries (2%). [26/Nov/2021:11:33:21.450916820 +0100] - INFO - bdb_db2index - userroot: Indexed 40000 entries (98%). 
[26/Nov/2021:11:33:21.671564324 +0100] - INFO - bdb_db2index - userroot: Finished indexing.", "ldapsearch -D \"cn=Directory Manager\" -W -H ldap://server.example.com -b \" ou=People,dc=example,dc=com \" -s one -x -E ' sss=cn ' -E ' vlv=1/2/70/0 ' uid user069, People, example.com dn: uid=user069,ou=People,dc=example,dc=com cn: user069 user070, People, example.com dn: uid=user070,ou=People,dc=example,dc=com cn: user070 user071, People, example.com dn: uid=user071,ou=People,dc=example,dc=com cn: user071 user072, People, example.com dn: uid=user072,ou=People,dc=example,dc=com cn: user072" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/Creating_Indexes-Creating_VLV_Indexes
OperatorHub APIs
OperatorHub APIs OpenShift Container Platform 4.16 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operatorhub_apis/index
Chapter 25. Storing Authentication Secrets with Vaults
Chapter 25. Storing Authentication Secrets with Vaults A vault is a secure location for storing, retrieving, sharing, and recovering secrets. A secret is security-sensitive data that should only be accessible by a limited group of people or entities. For example, secrets include: passwords PINs private SSH keys Users and services can access the secrets stored in a vault from any machine enrolled in the Identity Management (IdM) domain. Note Vault is only available from the command line, not from the IdM web UI. Use cases for vaults include: Storing personal secrets of a user See Section 25.4, "Storing a User's Personal Secret" for details. Storing a secret for a service See Section 25.5, "Storing a Service Secret in a Vault" for details. Storing a common secret used by multiple users See Section 25.6, "Storing a Common Secret for Multiple Users" for details. Note that to use vaults, you must meet the conditions described in Section 25.2, "Prerequisites for Using Vaults" . 25.1. How Vaults Work 25.1.1. Vault Owners, Members, and Administrators IdM distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service who can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Section 25.1.2, "Standard, Symmetric, and Asymmetric Vaults" ). The administrator must meet these rules to: access secrets in symmetric and asymmetric vaults change or reset the vault password or key A vault administrator is any user with the Vault Administrators privilege. See Section 10.4, "Defining Role-Based Access Controls" for information on defining user privileges. Certain owner and member privileges depend on the type of the vault. See Section 25.1.2, "Standard, Symmetric, and Asymmetric Vaults" for details. Vault User The output of some commands, such as the ipa vault-show command, also displays Vault user for user vaults: The vault user represents the user in whose container the vault is located. For details on vault containers and user vaults, see Section 25.1.4, "The Different Types of Vault Containers" and Section 25.1.3, "User, Service, and Shared Vaults" . 25.1.2. Standard, Symmetric, and Asymmetric Vaults The following vault types are based on the level of security and access control: Standard vault Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vault Secrets in the vault are protected with a symmetric key. Vault members and vault owners can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vault Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can both archive and retrieve secrets. 25.1.3. User, Service, and Shared Vaults The following vault types are based on ownership: User vault: a private vault for a user Owner: a single user. Any user can own one or more user vaults. 
Service vault: a private vault for a service Owner: a single service. Any service can own one or more service vaults. Shared vault Owner: the vault administrator who created the vault. Other vault administrators also have full access to the vault. Shared vaults can be used by multiple users or services. 25.1.4. The Different Types of Vault Containers A vault container is a collection of vaults. IdM provides the following default vault containers: User container: a private container for a user This container stores: user vaults for a particular user. Service container: a private container for a service This container stores: service vaults for a particular service. Shared container This container stores: vaults that can be shared by multiple users or services. IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents.
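As a brief illustration of the workflow described above, the following hedged sketch creates a standard vault, archives a secret into it, and retrieves it again with the ipa command-line tool. The vault names and file paths are examples only; the commands assume an IdM-enrolled client, a valid Kerberos ticket for the vault owner, and that the prerequisites in Section 25.2 are met.

    # Obtain a ticket for the user who will own the vault (example user name)
    kinit user

    # Create a standard vault and archive a secret file into it
    ipa vault-add my_vault --type standard
    ipa vault-archive my_vault --in /tmp/secret.txt

    # Retrieve the secret later, from any machine enrolled in the IdM domain
    ipa vault-retrieve my_vault --out /tmp/secret.out

    # A symmetric vault is created the same way but is protected by a password,
    # which must also be supplied when archiving and retrieving
    ipa vault-add my_symmetric_vault --type symmetric --password-file /tmp/vault_password.txt

Sections 25.4 to 25.6 walk through these operations in more detail for personal, service, and shared vaults.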
[ "ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/vault
Chapter 8. GNU Debugger (GDB)
Chapter 8. GNU Debugger (GDB) The GNU Debugger , commonly abbreviated as GDB , is a command line tool that can be used to debug programs written in various programming languages. It allows you to inspect memory within the code being debugged, control the execution state of the code, detect the execution of particular sections of code, and much more. Red Hat Developer Toolset is distributed with GDB 11.2 . This version is more recent than the version included in Red Hat Enterprise Linux and the previous release of Red Hat Developer Toolset and provides some enhancements and numerous bug fixes. 8.1. Installing the GNU Debugger In Red Hat Developer Toolset, the GNU Debugger is provided by the devtoolset-12-gdb package and is automatically installed with devtoolset-12-toolchain as described in Section 1.5, "Installing Red Hat Developer Toolset" . 8.2. Preparing a Program for Debugging Compiling Programs with Debugging Information To compile a C program with debugging information that can be read by the GNU Debugger , make sure the gcc compiler is run with the -g option: Similarly, to compile a C++ program with debugging information: Example 8.1. Compiling a C Program With Debugging Information Consider a source file named fibonacci.c that has the following contents: #include <stdio.h> #include <limits.h> int main (int argc, char *argv[]) { unsigned long int a = 0; unsigned long int b = 1; unsigned long int sum; while (b < LONG_MAX) { printf("%ld ", b); sum = a + b; a = b; b = sum; } return 0; } Compile this program on the command line using GCC from Red Hat Developer Toolset with debugging information for the GNU Debugger : This creates a new binary file called fibonacci in the current working directory. Installing Debugging Information for Existing Packages To install debugging information for a package that is already installed on the system: Note that the yum-utils package must be installed for the debuginfo-install utility to be available on your system. Example 8.2. Installing Debugging Information for the glibc Package Install debugging information for the glibc package: 8.3. Running the GNU Debugger To run the GNU Debugger on a program you want to debug: This starts the gdb debugger in interactive mode and displays the default prompt, (gdb) . To quit the debugging session and return to the shell prompt, run the following command at any time: Note that you can execute any command using the scl utility, causing it to be run with the Red Hat Developer Toolset binaries used in preference to the Red Hat Enterprise Linux system equivalent. This allows you to run a shell session with Red Hat Developer Toolset gdb as default: Note To verify the version of gdb you are using at any point: Red Hat Developer Toolset's gdb executable path will begin with /opt . Alternatively, you can use the following command to confirm that the version number matches that for Red Hat Developer Toolset gdb : Example 8.3. Running the gdb Utility on the fibonacci Binary File This example assumes that you have successfully compiled the fibonacci binary file as shown in Example 8.1, "Compiling a C Program With Debugging Information" . Start debugging fibonacci with gdb : 8.4. Listing Source Code To view the source code of the program you are debugging: Before you start the execution of the program you are debugging, gdb displays the first ten lines of the source code, and any subsequent use of this command lists another ten lines.
Once you start the execution, gdb displays the lines that are surrounding the line on which the execution stops, typically when you set a breakpoint. You can also display the code that is surrounding a particular line: Similarly, to display the code that is surrounding the beginning of a particular function: Note that you can change the number of lines the list command displays: Example 8.4. Listing the Source Code of the fibonacci Binary File The fibonacci.c file listed in Example 8.1, "Compiling a C Program With Debugging Information" has exactly 17 lines. Assuming that you have compiled it with debugging information and you want the gdb utility to be capable of listing the entire source code, you can run the following command to change the number of listed lines to 20: You can now display the entire source code of the file you are debugging by running the list command with no additional arguments: 8.5. Setting Breakpoints Setting a New Breakpoint To set a new breakpoint at a certain line: You can also set a breakpoint on a certain function: Example 8.5. Setting a New Breakpoint This example assumes that you have compiled the fibonacci.c file listed in Example 8.1, "Compiling a C Program With Debugging Information" with debugging information. Set a new breakpoint at line 10: Listing Breakpoints To display a list of currently set breakpoints: Example 8.6. Listing Breakpoints This example assumes that you have followed the instructions in Example 8.5, "Setting a New Breakpoint" . Display the list of currently set breakpoints: Deleting Existing Breakpoints To delete a breakpoint that is set at a certain line: Similarly, to delete a breakpoint that is set on a certain function: Example 8.7. Deleting an Existing Breakpoint This example assumes that you have compiled the fibonacci.c file listed in Example 8.1, "Compiling a C Program With Debugging Information" with debugging information. Set a new breakpoint at line 7: Remove this breakpoint: 8.6. Starting Execution To start an execution of the program you are debugging: If the program accepts any command line arguments, you can provide them as arguments to the run command: The execution stops when the first breakpoint (if any) is reached, when an error occurs, or when the program terminates. Example 8.8. Executing the fibonacci Binary File This example assumes that you have followed the instructions in Example 8.5, "Setting a New Breakpoint" . Execute the fibonacci binary file: 8.7. Displaying Current Values The gdb utility allows you to display the value of almost anything that is relevant to the program, from a variable of any complexity to a valid expression or even a library function. However, the most common task is to display the value of a variable. To display the current value of a certain variable: Example 8.9. Displaying the Current Values of Variables This example assumes that you have followed the instructions in Example 8.8, "Executing the fibonacci Binary File" and the execution of the fibonacci binary stopped after reaching the breakpoint at line 10. Display the current values of variables a and b : 8.8. Continuing Execution To resume the execution of the program you are debugging after it reached a breakpoint: The execution stops again when another breakpoint is reached. To skip a certain number of breakpoints (typically when you are debugging a loop): The gdb utility also allows you to stop the execution after executing a single line of code: Finally, you can execute a certain number of lines: Example 8.10. 
Continuing the Execution of the fibonacci Binary File This example assumes that you have followed the instructions in Example 8.8, "Executing the fibonacci Binary File" , and the execution of the fibonacci binary stopped after reaching the breakpoint at line 10. Resume the execution: The execution stops the next time the breakpoint is reached. Execute the next three lines of code: This allows you to verify the current value of the sum variable before it is assigned to b : 8.9. Additional Resources For more information about the GNU Debugger and all its features, see the resources listed below. Installed Documentation Installing the devtoolset-12-gdb-doc package provides the following documentation in HTML and PDF formats in the /opt/rh/devtoolset-12/root/usr/share/doc/devtoolset-12-gdb-doc-11.2 directory: The Debugging with GDB book, which is a copy of the upstream material with the same name. The version of this document exactly corresponds to the version of GDB available in Red Hat Developer Toolset. The GDB's Obsolete Annotations document, which lists the obsolete GDB level 2 annotations. Online Documentation Red Hat Enterprise Linux 7 Developer Guide - The Developer Guide for Red Hat Enterprise Linux 7 provides more information on the GNU Debugger and debugging. GDB Documentation - The upstream GDB documentation includes the GDB User Manual and other reference material. See Also Chapter 1, Red Hat Developer Toolset - An overview of Red Hat Developer Toolset and more information on how to install it on your system. Chapter 2, GNU Compiler Collection (GCC) - Further information on how to compile programs written in C, C++, and Fortran. Chapter 9, strace - Instructions on using the strace utility to monitor system calls that a program uses and signals it receives. Chapter 11, memstomp - Instructions on using the memstomp utility to identify calls to library functions with overlapping memory regions that are not allowed by various standards.
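To tie the preceding examples together, the following hedged sketch repeats the breakpoint, run, print, and step workflow non-interactively by passing commands to gdb in batch mode. It assumes the fibonacci.c source from Example 8.1 is in the current directory; -batch and -ex are standard gdb options, although the values printed will differ from the interactive sessions shown above because the program stops at the first breakpoint hit.

    # Compile with debugging information, as in Example 8.1
    scl enable devtoolset-12 'gcc -g -o fibonacci fibonacci.c'

    # Set a breakpoint, run, inspect variables, and step -- all in one batch invocation
    scl enable devtoolset-12 "gdb -batch \
        -ex 'break fibonacci.c:10' \
        -ex 'run' \
        -ex 'print a' \
        -ex 'print b' \
        -ex 'step 3' \
        -ex 'print sum' \
        ./fibonacci"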
[ "scl enable devtoolset-12 'gcc -g -o output_file input_file ...'", "scl enable devtoolset-12 'g++ -g -o output_file input_file ...'", "#include <stdio.h> #include <limits.h> int main (int argc, char *argv[]) { unsigned long int a = 0; unsigned long int b = 1; unsigned long int sum; while (b < LONG_MAX) { printf(\"%ld \", b); sum = a + b; a = b; b = sum; } return 0; }", "scl enable devtoolset-12 'gcc -g -o fibonacci fibonacci.c'", "debuginfo-install package_name", "debuginfo-install glibc Loaded plugins: product-id, refresh-packagekit, subscription-manager --> Running transaction check ---> Package glibc-debuginfo.x86_64 0:2.17-105.el7 will be installed", "scl enable devtoolset-12 'gdb file_name '", "(gdb) quit", "scl enable devtoolset-12 'bash'", "which gdb", "gdb -v", "scl enable devtoolset-12 'gdb fibonacci' GNU gdb (GDB) Red Hat Enterprise Linux 8.2-2.el7 Copyright (C) 2017 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type \"show copying\" and \"show warranty\" for details. This GDB was configured as \"x86_64-redhat-linux-gnu\". Type \"show configuration\" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type \"help\". Type \"apropos word\" to search for commands related to \"word\" Reading symbols from fibonacci...done. (gdb)", "(gdb) list", "(gdb) list file_name : line_number", "(gdb) list file_name : function_name", "(gdb) set listsize number", "(gdb) set listsize 20", "(gdb) list 1 #include <stdio.h> 2 #include <limits.h> 3 4 int main (int argc, char *argv[]) { 5 unsigned long int a = 0; 6 unsigned long int b = 1; 7 unsigned long int sum; 8 9 while (b < LONG_MAX) { 10 printf(\"%ld \", b); 11 sum = a + b; 12 a = b; 13 b = sum; 14 } 15 16 return 0; 17 }", "(gdb) break file_name : line_number", "(gdb) break file_name : function_name", "(gdb) break 10 Breakpoint 1 at 0x4004e5: file fibonacci.c, line 10.", "(gdb) info breakpoints", "(gdb) info breakpoints Num Type Disp Enb Address What 1 breakpoint keep y 0x00000000004004e5 in main at fibonacci.c:10", "(gdb) clear line_number", "(gdb) clear function_name", "(gdb) break 7 Breakpoint 2 at 0x4004e3: file fibonacci.c, line 7.", "(gdb) clear 7 Deleted breakpoint 2", "(gdb) run", "(gdb) run argument ...", "(gdb) run Starting program: /home/john/fibonacci Breakpoint 1, main (argc=1, argv=0x7fffffffe4d8) at fibonacci.c:10 10 printf(\"%ld \", b);", "(gdb) print variable_name", "(gdb) print a USD1 = 0 (gdb) print b USD2 = 1", "(gdb) continue", "(gdb) continue number", "(gdb) step", "(gdb) step number", "(gdb) continue Continuing. Breakpoint 1, main (argc=1, argv=0x7fffffffe4d8) at fibonacci.c:10 10 printf(\"%ld \", b);", "(gdb) step 3 13 b = sum;", "(gdb) print sum USD3 = 2" ]
https://docs.redhat.com/en/documentation/red_hat_developer_toolset/12/html/user_guide/chap-gdb
3.3 Release Notes
3.3 Release Notes Red Hat Software Collections 3.3 Release Notes for Red Hat Software Collections 3.3 Lenka Spackova Red Hat Customer Content Services Jaromir Hradilek Red Hat Customer Content Services Eliska Slobodova Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/3.3_release_notes/index
Chapter 17. Managing tape devices
Chapter 17. Managing tape devices A tape device is a magnetic tape where data is stored and accessed sequentially. Data is written to this tape device with the help of a tape drive. There is no need to create a file system in order to store data on a tape device. Tape drives can be connected to a host computer with various interfaces, such as SCSI, FC, USB, and SATA. 17.1. Types of tape devices The following is a list of the different types of tape devices: /dev/st0 is a rewinding tape device. /dev/nst0 is a non-rewinding tape device. Use non-rewinding devices for daily backups. There are several advantages to using tape devices. They are cost efficient and stable. Tape devices are also resilient against data corruption and are suitable for data retention. 17.2. Installing tape drive management tool Install the mt-st package for tape drive operations. Use the mt utility to control magnetic tape drive operations, and the st utility for the SCSI tape driver. Procedure Install the mt-st package: Additional resources mt(1) and st(4) man pages on your system 17.3. Tape commands The following are the common mt commands: Table 17.1. mt commands Command Description mt -f /dev/ st0 status Displays the status of the tape device. mt -f /dev/ st0 erase Erases the entire tape. mt -f /dev/ nst0 rewind Rewinds the tape device. mt -f /dev/ nst0 fsf n Switches the tape head to the forward record. Here, n is an optional file count. If a file count is specified, the tape head skips n records. mt -f /dev/ nst0 bsfm n Switches the tape head to the previous record. mt -f /dev/ nst0 eod Switches the tape head to the end of the data. 17.4. Writing to rewinding tape devices A rewinding tape device rewinds the tape after every operation. To back up data, you can use the tar command. By default, the block size on tape devices is 10KB ( bs=10k ). You can set the TAPE environment variable using the export TAPE= /dev/st0 attribute. Alternatively, use the -f device option to specify the tape device file. This option is useful when you use more than one tape device. Prerequisites You have installed the mt-st package. For more information, see Installing tape drive management tool . Load the tape drive: Procedure Check the tape head: Here: the current file number is -1. the block number defines the tape head. By default, it is set to -1. the block size 0 indicates that the tape device does not have a fixed block size. the Soft error count indicates the number of encountered errors after executing the mt status command. the General status bits explain the status of the tape device. DR_OPEN indicates that the door is open and the tape device is empty. IM_REP_EN is the immediate report mode. If the tape device is not empty, overwrite it: This command overwrites the data on a tape device with the content of /source/directory . Back up the /source/directory to the tape device: View the status of the tape device: Verification View the list of all files on the tape device: Additional resources mt(1) , st(4) , and tar(1) man pages on your system Tape drive media detected as write protected (Red Hat Knowledgebase) How to check if tape drives are detected in the system (Red Hat Knowledgebase) 17.5. Writing to non-rewinding tape devices A non-rewinding tape device leaves the tape in its current position after completing the execution of a command. For example, after a backup, you could append more data to a non-rewinding tape device. You can also use it to avoid any unexpected rewinds. Prerequisites You have installed the mt-st package.
For more information, see Installing tape drive management tool . Load the tape drive: Procedure Check the tape head of the non-rewinding tape device /dev/nst0 : Specify the pointer at the head or at the end of the tape: Append the data on the tape device: Back up the /source/directory / to the tape device: View the status of the tape device: Verification View the list of all files on the tape device: Additional resources mt(1) , st(4) , and tar(1) man pages on your system Tape drive media detected as write protected (Red Hat Knowledgebase) How to check if tape drives are detected in the system (Red Hat Knowledgebase) 17.6. Switching tape head in tape devices You can switch the tape head in the tape device by using the eod option. Prerequisites You have installed the mt-st package. For more information, see Installing tape drive management tool . Data is written to the tape device. For more information, see Writing to rewinding tape devices or Writing to non-rewinding tape devices . Procedure To view the current position of the tape pointer: To switch the tape head while appending data to the tape device: To go to the previous record: To go to the forward record: Additional resources mt(1) man page on your system 17.7. Restoring data from tape devices You can restore data from a tape device by using the tar command. Prerequisites You have installed the mt-st package. For more information, see Installing tape drive management tool . Data is written to the tape device. For more information, see Writing to rewinding tape devices or Writing to non-rewinding tape devices . Procedure For rewinding tape devices /dev/st0 : Restore the /source/directory / : For non-rewinding tape devices /dev/nst0 : Rewind the tape device: Restore the /source/directory / : Additional resources mt(1) and tar(1) man pages on your system 17.8. Erasing data from tape devices You can erase data from a tape device by using the erase option. Prerequisites You have installed the mt-st package. For more information, see Installing tape drive management tool . Data is written to the tape device. For more information, see Writing to rewinding tape devices or Writing to non-rewinding tape devices . Procedure Erase data from the tape device: Unload the tape device: Additional resources mt(1) man page on your system
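Putting the preceding sections together, the following hedged sketch writes two archives back to back on a non-rewinding device, positions the tape head at the second archive, lists its contents, and unloads the tape. The device name and directories are examples; adjust them to your environment.

    # Write two archives back to back on the non-rewinding device
    mt -f /dev/nst0 rewind
    tar -czf /dev/nst0 /source/directory
    tar -czf /dev/nst0 /another/directory

    # Rewind, skip the first archive, and list the contents of the second one
    mt -f /dev/nst0 rewind
    mt -f /dev/nst0 fsf 1
    tar -tzf /dev/nst0

    # Rewind and unload the tape when finished
    mt -f /dev/nst0 rewind
    mt -f /dev/nst0 offline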
[ "dnf install mt-st", "mt -f /dev/st0 load", "mt -f /dev/st0 status SCSI 2 tape drive: File number=-1, block number=-1, partition=0. Tape block size 0 bytes. Density code 0x0 (default). Soft error count since last status=0 General status bits on (50000): DR_OPEN IM_REP_EN", "tar -czf /dev/st0 _/source/directory", "tar -czf /dev/st0 _/source/directory tar: Removing leading `/' from member names /source/directory /source/directory /man_db.conf /source/directory /DIR_COLORS /source/directory /rsyslog.conf [...]", "mt -f /dev/st0 status", "tar -tzf /dev/st0 /source/directory / /source/directory /man_db.conf /source/directory /DIR_COLORS /source/directory /rsyslog.conf [...]", "mt -f /dev/nst0 load", "mt -f /dev/nst0 status", "mt -f /dev/nst0 rewind", "mt -f /dev/nst0 eod tar -czf /dev/nst0 /source/directory /", "tar -czf /dev/nst0 /source/directory / tar: Removing leading `/' from member names /source/directory / /source/directory /man_db.conf /source/directory /DIR_COLORS /source/directory /rsyslog.conf [...]", "mt -f /dev/nst0 status", "tar -tzf /dev/nst0 /source/directory / /source/directory /man_db.conf /source/directory /DIR_COLORS /source/directory /rsyslog.conf [...]", "mt -f /dev/nst0 tell", "mt -f /dev/nst0 eod", "mt -f /dev/nst0 bsfm 1", "mt -f /dev/nst0 fsf 1", "tar -xzf /dev/st0 /source/directory /", "mt -f /dev/nst0 rewind", "tar -xzf /dev/nst0 /source/directory /", "mt -f /dev/st0 erase", "mt -f /dev/st0 offline" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_storage_devices/managing-tape-devices_managing-storage-devices
Chapter 3. Troubleshooting issues related to SELinux
Chapter 3. Troubleshooting issues related to SELinux For diagnosing issues related to SELinux, you can check the /var/log/audit/audit.log file, as follows: To query Audit logs, use the ausearch tool. SELinux decisions, such as allowing or disallowing access, are cached in the Access Vector Cache (AVC). Therefore, you should use the AVC and USER_AVC values for the message type parameter, for example: If there are no matches, check if the Audit daemon is running. If it is not running, then perform the following steps: Restart the Audit daemon. Re-run the denied scenario. Check the Audit log again. For more information about solving SELinux related issues, see Troubleshooting problems related to SELinux .
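The following hedged sketch shows one way to run these steps from a root shell. It assumes systemctl is-active is used to check the Audit daemon state, and that auditd is restarted with the service command, which is the conventional way to restart it.

    # Check whether the Audit daemon is running
    systemctl is-active auditd

    # If it is not running, restart it
    service auditd restart

    # Re-run the denied scenario, then query recent SELinux denials again,
    # interpreting numeric fields for readability
    ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts recent -i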
[ "ausearch -m AVC,USER_AVC,SELINUX_ERR,USER_SELINUX_ERR -ts boot" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/using_selinux_for_sap_hana/con_troubleshooting_using-selinux
probe::vm.kfree
probe::vm.kfree Name probe::vm.kfree - Fires when kfree is requested. Synopsis Values ptr Pointer to the allocated kernel memory that was returned by kmalloc. caller_function Name of the caller function. call_site Address of the function calling this kernel memory function. name Name of the probe point
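As a hedged illustration (not part of the tapset reference itself), the following one-liner prints the probe's values each time kfree is called. It assumes the systemtap package and matching kernel debuginfo are installed, and the output format is only an example.

    stap -e 'probe vm.kfree {
        printf("%s: %s freed ptr=%p (call site %p)\n",
               name, caller_function, ptr, call_site)
    }'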
[ "vm.kfree" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-kfree
Chapter 5. Installing a cluster on Alibaba Cloud with network customizations
Chapter 5. Installing a cluster on Alibaba Cloud with network customizations In OpenShift Container Platform 4.14, you can install a cluster on Alibaba Cloud with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You registered your domain . If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud Resource Access Management (RAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain Resource Access Management (RAM) credentials . 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. 
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. 
Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 5.5. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 5.5.1. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. 
Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for Alibaba Cloud 5.5.2. Generating the required installation manifests You must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. Procedure Generate the manifests by running the following command from the directory that contains the installation program: USD openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the directory in which the installation program creates files. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. 5.5.3. Sample customized install-config.yaml file for Alibaba Cloud You can customize the installation configuration file ( install-config.yaml ) to specify more details about your cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{"auths": {"cloud.openshift.com": {"auth": ... }' 8 sshKey: | ssh-rsa AAAA... 9 1 Required. The installation program prompts you for a cluster name. 2 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 3 Optional. Specify parameters for machine pools that do not define their own platform configuration. 4 Required. The installation program prompts you for the region to deploy the cluster to. 5 Optional. Specify an existing resource group where the cluster should be installed. 8 Required. The installation program prompts you for the pull secret. 9 Optional. The installation program prompts you for the SSH key value that you use to access the machines in your cluster. 6 7 Optional. These are example vswitchID values. 5.5.4. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . 
to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 5.6.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 5.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. 
For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 5.2. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 5.3. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . 
Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 5.4. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 5.5. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 5.6. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 5.7. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 5.8. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 5.9. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 5.10. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. 
The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 5.11. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 5.7. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 5.8. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. 
See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. 5.9. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . 
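A minimal sketch of how you can monitor an installation from a second terminal, or re-attach after the original session is lost, based on the wait-for form of the installer that appears in the command listing at the end of this chapter; the --dir flag and log level follow the same conventions as the create cluster command:
./openshift-install wait-for install-complete --dir <installation_directory> --log-level debug
The command blocks until the installer reports that the cluster is ready and then prints the access information described in the Verification section that follows.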
Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.10. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . 
To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 5.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.12. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. 5.13. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager .
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.14. Next steps Validate an installation . Customize your cluster . If necessary, you can opt out of remote health reporting .
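As a starting point for the validation step referenced above, a minimal post-install health check can be run with the kubeconfig exported in section 5.11. Note that oc get clusteroperators and oc get nodes are standard oc queries rather than commands taken from this chapter; all cluster Operators should report Available and all nodes should report Ready before you continue:
export KUBECONFIG=<installation_directory>/auth/kubeconfig
oc get clusteroperators
oc get nodes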
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: v1 baseDomain: alicloud-dev.devcluster.openshift.com credentialsMode: Manual compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: {} replicas: 3 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: {} replicas: 3 metadata: creationTimestamp: null name: test-cluster 1 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 2 serviceNetwork: - 172.30.0.0/16 platform: alibabacloud: defaultMachinePlatform: 3 instanceType: ecs.g6.xlarge systemDiskCategory: cloud_efficiency systemDiskSize: 200 region: ap-southeast-1 4 resourceGroupID: rg-acfnw6j3hyai 5 vpcID: vpc-0xifdjerdibmaqvtjob2b 6 vswitchIDs: 7 - vsw-0xi8ycgwc8wv5rhviwdq5 - vsw-0xiy6v3z2tedv009b4pz2 publish: External pullSecret: '{\"auths\": {\"cloud.openshift.com\": {\"auth\": ... }' 8 sshKey: | ssh-rsa AAAA... 9", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install 
complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/installing_on_alibaba/installing-alibaba-network-customizations
Part II. Technology Previews
Part II. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 6.10. Technology Preview features are currently not supported under Red Hat Enterprise Linux subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. Customers may find these features useful in a non-production environment. Customers are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues. During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release. For information on Red Hat scope of support for Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.10_technical_notes/part-red_hat_enterprise_linux-6.10_technical_notes-technology_previews
Chapter 2. Installing Image Builder
Chapter 2. Installing Image Builder Before using Image Builder, you must install Image Builder in a virtual machine. 2.1. Installing Image Builder in a virtual machine To install Image Builder on a dedicated virtual machine, follow these steps: Prerequisites Connect to the virtual machine. The virtual machine for Image Builder must be installed, subscribed, and running. Procedure 1. Install the Image Builder and other necessary packages on the virtual machine: lorax-composer composer-cli cockpit-composer bash-completion 2. Enable Image Builder to start after each reboot: The lorax-composer and cockpit services start automatically on first access. 3. Configure the system firewall to allow access to the web console: 4. Load the shell configuration script so that the auto-complete feature for the composer-cli tool starts working immediately without reboot:
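The commands for steps 1 through 4 are collected in the listing at the end of this chapter. Consolidated into a single root shell session, they are:
# Step 1: install Image Builder and the supporting packages
yum install lorax-composer composer-cli cockpit-composer bash-completion
# Step 2: enable the Image Builder and web console sockets so they start after each reboot
systemctl enable lorax-composer.socket
systemctl enable cockpit.socket
# Step 3: open the firewall for the web console, both immediately and permanently
firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanent
# Step 4: load bash completion for composer-cli into the current shell
source /etc/bash_completion.d/composer-cli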
[ "yum install lorax-composer composer-cli cockpit-composer bash-completion", "systemctl enable lorax-composer.socket", "systemctl enable cockpit.socket", "firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanent", "source /etc/bash_completion.d/composer-cli" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/chap-documentation-image_builder-test_chapter_2
Chapter 4. Extending Red Hat Software Collections
Chapter 4. Extending Red Hat Software Collections This chapter describes extending some of the Software Collections that are part of the Red Hat Software Collections offering. 4.1. Providing an scldevel Subpackage The purpose of an scldevel subpackage is to make the process of creating dependent Software Collections easier by providing a number of generic macro files. Packagers then use these macro files when they are extending existing Software Collections. scldevel is provided as a subpackage of your Software Collection's metapackage. 4.1.1. Creating an scldevel Subpackage The following section describes creating an scldevel subpackage for two examples of Ruby Software Collections, ruby193 and ruby200. Procedure 4.1. Providing your own scldevel subpackage In your Software Collection's metapackage, add the scldevel subpackage by defining its name, summary, and description: %package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection. You are advised to use the virtual Provides: scldevel(%{scl_name_base}) during the build of packages of dependent Software Collections. This will ensure availability of a version of the %{scl_name_base} Software Collection and its macros, as specified in the following step. In the %install section of your Software Collection's metapackage, create the macros.%{scl_name_base}-scldevel file that is part of the scldevel subpackage and contains: cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF Note that between all Software Collections that share the same %{scl_name_base} name, the provided macros.%{scl_name_base}-scldevel files must conflict. This is to disallow installing multiple versions of the %{scl_name_base} Software Collections. For example, the ruby193-scldevel subpackage cannot be installed when there is the ruby200-scldevel subpackage installed. 4.1.2. Using an scldevel Subpackage in a Dependent Software Collection To use your scldevel subpackage in a Software Collection that depends on the ruby200 Software Collection, update the metapackage of the dependent Software Collection as described below. Procedure 4.2. Using your own scldevel subpackage in a dependent Software Collection Consider adding the following at the beginning of the metapackage's spec file: %{!?scl_ruby:%global scl_ruby ruby200} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-} These two lines are optional. They are only meant as a visual hint that the dependent Software Collection has been designed to depend on the ruby200 Software Collection. If there is no other scldevel subpackage available in the build root, then the ruby200-scldevel subpackage is used as a build requirement. You can substitute these lines with the following line: %{?scl_prefix_ruby} Add the following build requirement to the metapackage: BuildRequires: %{scl_prefix_ruby}scldevel By specifying this build requirement, you ensure that the scldevel subpackage is in the build root and that the default values are not in use. Omitting this package could result in broken requires at the subsequent packages' build time. Ensure that the %package runtime part of the metapackage's spec file includes the following lines: %package runtime Summary: Package that handles %scl Software Collection. 
Requires: scl-utils Requires: %{scl_prefix_ruby}runtime Consider including the following lines in the %package build part of the metapackage's spec file: %package build Summary: Package shipping basic build configuration Requires: %{scl_prefix_ruby}scldevel Specifying Requires: %{scl_prefix_ruby}scldevel ensures that macros are available in all packages of the Software Collection. Note that adding this Requires only makes sense in specific use cases, such as where packages in a dependent Software Collection use macros provided by the scldevel subpackage.
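To make the relationship concrete, the following fragment sketches how a package inside a dependent Software Collection might consume these macros. The package name example and its Summary are hypothetical; the conditional defaults and the scldevel and runtime dependencies repeat the pieces defined earlier in this section:
%{!?scl_ruby:%global scl_ruby ruby200}
%{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-}

Name: %{?scl_prefix}example
Summary: Example package built in a Software Collection that depends on ruby200
# Pull the scldevel macro file into the build root
BuildRequires: %{scl_prefix_ruby}scldevel
# Run against the runtime of the ruby200 Software Collection
Requires: %{scl_prefix_ruby}runtime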
[ "%package scldevel Summary: Package shipping development files for %scl Provides: scldevel(%{scl_name_base}) %description scldevel Package shipping development files, especially useful for development of packages depending on %scl Software Collection.", "cat >> %{buildroot}%{_root_sysconfdir}/rpm/macros.%{scl_name_base}-scldevel << EOF %%scl_%{scl_name_base} %{scl} %%scl_prefix_%{scl_name_base} %{scl_prefix} EOF", "%{!?scl_ruby:%global scl_ruby ruby200} %{!?scl_prefix_ruby:%global scl_prefix_ruby %{scl_ruby}-}", "%{?scl_prefix_ruby}", "BuildRequires: %{scl_prefix_ruby}scldevel", "%package runtime Summary: Package that handles %scl Software Collection. Requires: scl-utils Requires: %{scl_prefix_ruby}runtime", "%package build Summary: Package shipping basic build configuration Requires: %{scl_prefix_ruby}scldevel" ]
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/chap-Extending_Red_Hat_Software_Collections
Chapter 11. Managing sudo access
Chapter 11. Managing sudo access System administrators can grant sudo access to allow non-root users to execute administrative commands that are normally reserved for the root user. 11.1. User authorizations in sudoers The /etc/sudoers file and, by default, drop-in files in the /etc/sudoers.d/ directory specify which users can use the sudo command to execute commands as other users. The rules can apply to individual users and user groups. You can also define rules for groups of hosts, commands, and even users more easily by using aliases. When a user enters a command with sudo for which the user does not have authorization, the system records a message that contains the string <username> : user NOT in sudoers to the journal log. The default /etc/sudoers file provides information and examples of authorizations. You can activate a specific example rule by uncommenting the corresponding line. The section with user authorizations is marked with the following introduction: You can create new sudoers authorizations and modify existing authorizations by using the following format: Where: <username> is the user that enters the command, for example, user1 . If the value starts with % , it defines a group, for example, %group1 . <hostname.example.com> is the name of the host on which the rule applies. The section ( <run_as_user> : <run_as_group> ) defines the user or group as which the command is executed. If you omit this section, <username> can execute the command as root. <path/to/command> is the complete absolute path to the command. You can also limit the user to only performing a command with specific options and arguments by adding those options after the command path. If you do not specify any options, the user can use the command with all options. You can apply the rule to all users, hosts, or commands by replacing any of these variables with ALL . Warning Using ALL in some or all segments of a rule can cause serious security risks. You can negate the arguments by using the ! operator. For example, !root specifies all users except root. Note that allowing specific users, groups, and commands is more secure than disallowing specific users, groups, and commands. This is because allow rules also block new unauthorized users or groups. Warning Avoid using negative rules for commands because users can overcome such rules by renaming commands with the alias command. The system reads the /etc/sudoers file from beginning to end. Therefore, if the file contains multiple entries for a user, the entries are applied in order. In case of conflicting values, the system uses the last match, even if it is not the most specific match. To preserve the rules during system updates and for easier fixing of errors, enter new rules by creating new files in the /etc/sudoers.d/ directory instead of entering rules directly into the /etc/sudoers file. The system reads the files in the /etc/sudoers.d directory when it reaches the following line in the /etc/sudoers file: Note that the number sign ( # ) at the beginning of this line is part of the syntax and does not mean the line is a comment. The names of files in that directory must not contain a period and must not end with a tilde ( ~ ). Additional resources sudoers(5) man page 11.2. Adding a sudo rule to allow members of a group to execute commands as root System administrators can allow non-root users to execute administrative commands by granting them sudo access.
The sudo command provides users with administrative access without using the password of the root user. When users need to perform an administrative command, they can precede that command with sudo . If the user has authorization for the command, the command is executed as if they were root. Be aware of the following limitations: Only users listed in the sudoers configuration file can use the sudo command. The command is executed in the shell of the user, not in the root shell. However, there are some exceptions such as when full sudo privileges are granted to any user. In such cases, users can switch to and run the commands in root shell. For example: sudo -i sudo su - Prerequisites You have root access to the system. Procedure As root, open the /etc/sudoers file. The /etc/sudoers file defines the policies applied by the sudo command. In the /etc/sudoers file, find the lines that grant sudo access to users in the administrative wheel group. Make sure the line that starts with %wheel is not commented out with the number sign ( # ). Save any changes, and exit the editor. Add users you want to grant sudo access to into the administrative wheel group. Replace <username> with the name of the user. Verification Log in as a member of the wheel group and run: Additional resources sudo(8) , sudoers(5) and visudo(8) man pages 11.3. Enabling unprivileged users to run certain commands As an administrator, you can allow unprivileged users to enter certain commands on specific workstations by configuring a policy in the /etc/sudoers.d/ directory. This is more secure than granting full sudo access to a user or giving someone the root password for the following reasons: More granular control over privileged actions. You can allow a user to perform certain actions on specific hosts instead of giving them full administrative access. Better logging. When a user performs an action through sudo , the action is logged with their user name and not just root. Transparent control. You can set email notifications for every time the user attempts to use sudo privileges. Prerequisites You have root access to the system. Procedure Create a new file in the /etc/sudoers.d directory: The file opens automatically in an editor. Add the following line to the /etc/sudoers.d/ <filename> file: Replace <username> with the name of the user. Replace <hostname.example.com> with the URL of the host. Replace ( <run_as_user> : <run_as_group> ) with the user or group as to which the command can be executed. If you omit this section, <username> can execute the command as root. Replace <path/to/command> with the complete absolute path to the command. You can also limit the user to only performing a command with specific options and arguments by adding those options after the command path. If you do not specify any options, the user can use the command with all options. To allow two and more commands on the same host on one line, you can list them separated by a comma followed by a space. For example, to allow user1 to execute the dnf and reboot commands on host1.example.com , enter: Optional: To receive email notifications every time a user attempts to use sudo privileges, add the following lines to the file: Save the changes, and exit the editor. Verification To verify if a user can run a command with sudo privileges, switch the account: As the user, enter the command with the sudo command: Enter the user's sudo password. If the privileges are configured correctly, sudo executes the command as the configured user. 
For example, with the dnf command, the system shows the following output: If the system returns the following error message, the user is not allowed to run commands with sudo. If the system returns the following error message, the configuration was not completed correctly. If the system returns the following error message, the command is not correctly defined in the rule for the user. Additional resources visudo(8) and sudoers(5) man pages
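Putting the pieces of this procedure together, a drop-in file created with visudo -f /etc/sudoers.d/user1 might contain the following. The rule repeats the dnf and reboot example given above, the Defaults lines repeat the optional mail notification settings, and <email_address> is a placeholder for the address that should receive the notifications:
# Allow user1 to run dnf and reboot as root on host1.example.com
user1 host1.example.com = /bin/dnf, /sbin/reboot
# Optional: send a notification for every attempt to use sudo
Defaults mail_always
Defaults mailto="<email_address>"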
[ "## Next comes the main part: which users can run what software on ## which machines (the sudoers file can be shared between multiple ## systems).", "<username> <hostname.example.com> =( <run_as_user> : <run_as_group> ) <path/to/command>", "#includedir /etc/sudoers.d", "visudo", "## Allows people in group wheel to run all commands %wheel ALL=(ALL) ALL", "usermod --append -G wheel <username>", "sudo whoami root", "visudo -f /etc/sudoers.d/ <filename>", "<username> <hostname.example.com> = ( <run_as_user> : <run_as_group> ) <path/to/command>", "user1 host1.example.com = /bin/dnf, /sbin/reboot", "Defaults mail_always Defaults mailto=\" <[email protected]> \"", "su <username> -", "sudo whoami [sudo] password for <username> :", "usage: dnf [options] COMMAND", "<username> is not in the sudoers file. This incident will be reported.", "<username> is not allowed to run sudo on <host.example.com>.", "`Sorry, user _<username>_ is not allowed to execute '_<path/to/command>_' as root on _<host.example.com>_.`" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/managing-sudo-access_configuring-basic-system-settings
Chapter 2. CertificateSigningRequest [certificates.k8s.io/v1]
Chapter 2. CertificateSigningRequest [certificates.k8s.io/v1] Description CertificateSigningRequest objects provide a mechanism to obtain x509 certificates by submitting a certificate signing request, and having it asynchronously approved and issued. Kubelets use this API to obtain: 1. client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client-kubelet" signerName). 2. serving certificates for TLS endpoints kube-apiserver can connect to securely (with the "kubernetes.io/kubelet-serving" signerName). This API can be used to request client certificates to authenticate to kube-apiserver (with the "kubernetes.io/kube-apiserver-client" signerName), or to obtain certificates from custom non-Kubernetes signers. Type object Required spec 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object CertificateSigningRequestSpec contains the certificate request. status object CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. 2.1.1. .spec Description CertificateSigningRequestSpec contains the certificate request. Type object Required request signerName Property Type Description expirationSeconds integer expirationSeconds is the requested duration of validity of the issued certificate. The certificate signer may issue a certificate with a different validity duration so a client must check the delta between the notBefore and and notAfter fields in the issued certificate to determine the actual duration. The v1.22+ in-tree implementations of the well-known Kubernetes signers will honor this field as long as the requested duration is not greater than the maximum duration they will honor per the --cluster-signing-duration CLI flag to the Kubernetes controller manager. Certificate signers may not honor this field for various reasons: 1. Old signer that is unaware of the field (such as the in-tree implementations prior to v1.22) 2. Signer whose configured maximum is shorter than the requested duration 3. Signer whose configured minimum is longer than the requested duration The minimum valid value for expirationSeconds is 600, i.e. 10 minutes. extra object extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. extra{} array (string) groups array (string) groups contains group membership of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. request string request contains an x509 certificate signing request encoded in a "CERTIFICATE REQUEST" PEM block. When serialized as JSON or YAML, the data is additionally base64-encoded. signerName string signerName indicates the requested signer, and is a qualified name. 
List/watch requests for CertificateSigningRequests can filter on this field using a "spec.signerName=NAME" fieldSelector. Well-known Kubernetes signers are: 1. "kubernetes.io/kube-apiserver-client": issues client certificates that can be used to authenticate to kube-apiserver. Requests for this signer are never auto-approved by kube-controller-manager, can be issued by the "csrsigning" controller in kube-controller-manager. 2. "kubernetes.io/kube-apiserver-client-kubelet": issues client certificates that kubelets use to authenticate to kube-apiserver. Requests for this signer can be auto-approved by the "csrapproving" controller in kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. 3. "kubernetes.io/kubelet-serving" issues serving certificates that kubelets use to serve TLS endpoints, which kube-apiserver can connect to securely. Requests for this signer are never auto-approved by kube-controller-manager, and can be issued by the "csrsigning" controller in kube-controller-manager. More details are available at https://k8s.io/docs/reference/access-authn-authz/certificate-signing-requests/#kubernetes-signers Custom signerNames can also be specified. The signer defines: 1. Trust distribution: how trust (CA bundles) are distributed. 2. Permitted subjects: and behavior when a disallowed subject is requested. 3. Required, permitted, or forbidden x509 extensions in the request (including whether subjectAltNames are allowed, which types, restrictions on allowed values) and behavior when a disallowed extension is requested. 4. Required, permitted, or forbidden key usages / extended key usages. 5. Expiration/certificate lifetime: whether it is fixed by the signer, configurable by the admin. 6. Whether or not requests for CA certificates are allowed. uid string uid contains the uid of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. usages array (string) usages specifies a set of key usages requested in the issued certificate. Requests for TLS client certificates typically request: "digital signature", "key encipherment", "client auth". Requests for TLS serving certificates typically request: "key encipherment", "digital signature", "server auth". Valid values are: "signing", "digital signature", "content commitment", "key encipherment", "key agreement", "data encipherment", "cert sign", "crl sign", "encipher only", "decipher only", "any", "server auth", "client auth", "code signing", "email protection", "s/mime", "ipsec end system", "ipsec tunnel", "ipsec user", "timestamping", "ocsp signing", "microsoft sgc", "netscape sgc" username string username contains the name of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. 2.1.2. .spec.extra Description extra contains extra attributes of the user that created the CertificateSigningRequest. Populated by the API server on creation and immutable. Type object 2.1.3. .status Description CertificateSigningRequestStatus contains conditions used to indicate approved/denied/failed status of the request, and the issued certificate. Type object Property Type Description certificate string certificate is populated with an issued certificate by the signer after an Approved condition is present. This field is set via the /status subresource. Once populated, this field is immutable. If the certificate signing request is denied, a condition of type "Denied" is added and this field remains empty. 
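The spec fields described above can be combined into a request manifest. The following sketch is illustrative rather than taken from this reference: the metadata name is arbitrary, <base64_encoded_csr> stands for a PEM "CERTIFICATE REQUEST" block that has been base64-encoded as described for the request field, and the signer and usages follow the client-certificate conventions listed above:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-user-csr
spec:
  request: <base64_encoded_csr>
  signerName: kubernetes.io/kube-apiserver-client
  expirationSeconds: 86400
  usages:
  - digital signature
  - key encipherment
  - client auth
Such a manifest can be created with oc create -f <file> and then approved through the /approval subresource listed in the API endpoints section later in this chapter.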
If the signer cannot issue the certificate, a condition of type "Failed" is added and this field remains empty. Validation requirements: 1. certificate must contain one or more PEM blocks. 2. All PEM blocks must have the "CERTIFICATE" label, contain no headers, and the encoded data must be a BER-encoded ASN.1 Certificate structure as described in section 4 of RFC5280. 3. Non-PEM content may appear before or after the "CERTIFICATE" PEM blocks and is unvalidated, to allow for explanatory text as described in section 5.2 of RFC7468. If more than one PEM block is present, and the definition of the requested spec.signerName does not indicate otherwise, the first block is the issued certificate, and subsequent blocks should be treated as intermediate certificates and presented in TLS handshakes. The certificate is encoded in PEM format. When serialized as JSON or YAML, the data is additionally base64-encoded, so it consists of: base64( -----BEGIN CERTIFICATE----- ... -----END CERTIFICATE----- ) conditions array conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". conditions[] object CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object 2.1.4. .status.conditions Description conditions applied to the request. Known conditions are "Approved", "Denied", and "Failed". Type array 2.1.5. .status.conditions[] Description CertificateSigningRequestCondition describes a condition of a CertificateSigningRequest object Type object Required type status Property Type Description lastTransitionTime Time lastTransitionTime is the time the condition last transitioned from one status to another. If unset, when a new condition type is added or an existing condition's status is changed, the server defaults this to the current time. lastUpdateTime Time lastUpdateTime is the time of the last update to this condition message string message contains a human readable message with details about the request state reason string reason indicates a brief reason for the request state status string status of the condition, one of True, False, Unknown. Approved, Denied, and Failed conditions may not be "False" or "Unknown". type string type of the condition. Known conditions are "Approved", "Denied", and "Failed". An "Approved" condition is added via the /approval subresource, indicating the request was approved and should be issued by the signer. A "Denied" condition is added via the /approval subresource, indicating the request was denied and should not be issued by the signer. A "Failed" condition is added via the /status subresource, indicating the signer failed to issue the certificate. Approved and Denied conditions are mutually exclusive. Approved, Denied, and Failed conditions cannot be removed once added. Only one condition of a given type is allowed. 2.2. API endpoints The following API endpoints are available: /apis/certificates.k8s.io/v1/certificatesigningrequests DELETE : delete collection of CertificateSigningRequest GET : list or watch objects of kind CertificateSigningRequest POST : create a CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests GET : watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/certificates.k8s.io/v1/certificatesigningrequests/{name} DELETE : delete a CertificateSigningRequest GET : read the specified CertificateSigningRequest PATCH : partially update the specified CertificateSigningRequest PUT : replace the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} GET : watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status GET : read status of the specified CertificateSigningRequest PATCH : partially update status of the specified CertificateSigningRequest PUT : replace status of the specified CertificateSigningRequest /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval GET : read approval of the specified CertificateSigningRequest PATCH : partially update approval of the specified CertificateSigningRequest PUT : replace approval of the specified CertificateSigningRequest 2.2.1. /apis/certificates.k8s.io/v1/certificatesigningrequests Table 2.1. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of CertificateSigningRequest Table 2.2. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. 
If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 2.3. Body parameters Parameter Type Description body DeleteOptions schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CertificateSigningRequest Table 2.5. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. 
watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 2.6. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequestList schema 401 - Unauthorized Empty HTTP method POST Description create a CertificateSigningRequest Table 2.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.8. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.9. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 202 - Accepted CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.2. /apis/certificates.k8s.io/v1/watch/certificatesigningrequests Table 2.10. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. 
If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead. Table 2.11. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.3. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name} Table 2.12. 
Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 2.13. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a CertificateSigningRequest Table 2.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 2.15. Body parameters Parameter Type Description body DeleteOptions schema Table 2.16. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CertificateSigningRequest Table 2.17. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CertificateSigningRequest Table 2.18. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.19. Body parameters Parameter Type Description body Patch schema Table 2.20. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CertificateSigningRequest Table 2.21. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.22. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.23. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.4. /apis/certificates.k8s.io/v1/watch/certificatesigningrequests/{name} Table 2.24. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 2.25. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind CertificateSigningRequest. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 2.26. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 2.2.5. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/status Table 2.27. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 2.28. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified CertificateSigningRequest Table 2.29. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CertificateSigningRequest Table 2.30. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.31. 
Body parameters Parameter Type Description body Patch schema Table 2.32. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CertificateSigningRequest Table 2.33. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.34. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.35. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty 2.2.6. /apis/certificates.k8s.io/v1/certificatesigningrequests/{name}/approval Table 2.36. Global path parameters Parameter Type Description name string name of the CertificateSigningRequest Table 2.37. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read approval of the specified CertificateSigningRequest Table 2.38. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PATCH Description partially update approval of the specified CertificateSigningRequest Table 2.39. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 2.40. Body parameters Parameter Type Description body Patch schema Table 2.41. HTTP responses HTTP code Reponse body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty HTTP method PUT Description replace approval of the specified CertificateSigningRequest Table 2.42. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 2.43. Body parameters Parameter Type Description body CertificateSigningRequest schema Table 2.44. HTTP responses HTTP code Response body 200 - OK CertificateSigningRequest schema 201 - Created CertificateSigningRequest schema 401 - Unauthorized Empty
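To put the create and approval endpoints above into context, the following is a minimal, hypothetical CertificateSigningRequest manifest. The name, the chosen signer, and the placeholder request value are illustrative assumptions, not values taken from this reference.

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr                                # placeholder name
spec:
  signerName: kubernetes.io/kube-apiserver-client  # one of the built-in signers; use the signer appropriate for your case
  usages:
    - client auth
  request: <base64-encoded PKCS#10 CSR>            # placeholder, replace with your encoded CSR

Such an object could be created with a POST to /apis/certificates.k8s.io/v1/certificatesigningrequests (for example, oc create -f csr.yaml), inspected through the status subresource described in section 2.2.5, and approved or denied through the approval subresource described in section 2.2.6.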
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/security_apis/certificatesigningrequest-certificates-k8s-io-v1
Chapter 167. JacksonXML DataFormat
Chapter 167. JacksonXML DataFormat Available as of Camel version 2.16 Jackson XML is a Data Format which uses the Jackson library with the XMLMapper extension to unmarshal an XML payload into Java objects or to marshal Java objects into an XML payload. INFO:If you are familiar with Jackson, this XML data format behaves in the same way as its JSON counterpart, and thus can be used with classes annotated for JSON serialization/deserialization. This extension also mimics JAXB's "Code first" approach . This data format relies on Woodstox (especially for features like pretty printing), a fast and efficient XML processor. from("activemq:My.Queue"). unmarshal().jacksonxml(). to("mqseries:Another.Queue"); 167.1. JacksonXML Options The JacksonXML dataformat supports 15 options, which are listed below. Name Default Java Type Description xmlMapper String Lookup and use the existing XmlMapper with the given id. prettyPrint false Boolean To enable pretty printing output nicely formatted. Is by default false. unmarshalTypeName String Class name of the java type to use when unarmshalling jsonView Class When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations include String If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL allowJmsType false Boolean Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. collectionTypeName String Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. useList false Boolean To unarmshal to a List of Map or a List of Pojo. enableJaxbAnnotationModule false Boolean Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. moduleClassNames String To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. moduleRefs String To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. enableFeatures String Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma disableFeatures String Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma allowUnmarshallType false Boolean If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. 
For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 167.2. Spring Boot Auto-Configuration The component supports 16 options, which are listed below. Name Description Default Type camel.dataformat.jacksonxml.allow-jms-type Used for JMS users to allow the JMSType header from the JMS spec to specify a FQN classname to use to unmarshal to. false Boolean camel.dataformat.jacksonxml.allow-unmarshall-type If enabled then Jackson is allowed to attempt to use the CamelJacksonUnmarshalType header during the unmarshalling. This should only be enabled when desired to be used. false Boolean camel.dataformat.jacksonxml.collection-type-name Refers to a custom collection type to lookup in the registry to use. This option should rarely be used, but allows to use different collection types than java.util.Collection based as default. String camel.dataformat.jacksonxml.content-type-header Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. false Boolean camel.dataformat.jacksonxml.disable-features Set of features to disable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.jacksonxml.enable-features Set of features to enable on the Jackson com.fasterxml.jackson.databind.ObjectMapper. The features should be a name that matches a enum from com.fasterxml.jackson.databind.SerializationFeature, com.fasterxml.jackson.databind.DeserializationFeature, or com.fasterxml.jackson.databind.MapperFeature Multiple features can be separated by comma String camel.dataformat.jacksonxml.enable-jaxb-annotation-module Whether to enable the JAXB annotations module when using jackson. When enabled then JAXB annotations can be used by Jackson. false Boolean camel.dataformat.jacksonxml.enabled Enable jacksonxml dataformat true Boolean camel.dataformat.jacksonxml.include If you want to marshal a pojo to JSON, and the pojo has some fields with null values. And you want to skip these null values, you can set this option to NON_NULL String camel.dataformat.jacksonxml.json-view When marshalling a POJO to JSON you might want to exclude certain fields from the JSON output. With Jackson you can use JSON views to accomplish this. This option is to refer to the class which has JsonView annotations Class camel.dataformat.jacksonxml.module-class-names To use custom Jackson modules com.fasterxml.jackson.databind.Module specified as a String with FQN class names. Multiple classes can be separated by comma. String camel.dataformat.jacksonxml.module-refs To use custom Jackson modules referred from the Camel registry. Multiple modules can be separated by comma. String camel.dataformat.jacksonxml.pretty-print To enable pretty printing output nicely formatted. Is by default false. false Boolean camel.dataformat.jacksonxml.unmarshal-type-name Class name of the java type to use when unarmshalling String camel.dataformat.jacksonxml.use-list To unarmshal to a List of Map or a List of Pojo. 
false Boolean camel.dataformat.jacksonxml.xml-mapper Lookup and use the existing XmlMapper with the given id. String 167.2.1. Using Jackson XML in Spring DSL When using Data Format in Spring DSL you need to declare the data formats first. This is done in the DataFormats XML tag. <dataFormats> <!-- here we define a Xml data format with the id jack and that it should use the TestPojo as the class type when doing unmarshal. The unmarshalTypeName is optional, if not provided Camel will use a Map as the type --> <jacksonxml id="jack" unmarshalTypeName="org.apache.camel.component.jacksonxml.TestPojo"/> </dataFormats> And then you can refer to this id in the route: <route> <from uri="direct:back"/> <unmarshal ref="jack"/> <to uri="mock:reverse"/> </route> 167.3. Excluding POJO fields from marshalling When marshalling a POJO to XML you might want to exclude certain fields from the XML output. With Jackson you can use JSON views to accomplish this. First create one or more marker classes. Use the marker classes with the @JsonView annotation to include/exclude certain fields. The annotation also works on getters. Finally use the Camel JacksonXMLDataFormat to marshal the above POJO to XML. Note that the weight field is missing in the resulting XML: <pojo age="30" weight="70"/> 167.4. Include/Exclude fields using the jsonView attribute with JacksonXML DataFormat As an example of using this attribute you can instead of: JacksonXMLDataFormat ageViewFormat = new JacksonXMLDataFormat(TestPojoView.class, Views.Age.class); from("direct:inPojoAgeView"). marshal(ageViewFormat); Directly specify your JSON view inside the Java DSL as: from("direct:inPojoAgeView"). marshal().jacksonxml(TestPojoView.class, Views.Age.class); And the same in XML DSL: <from uri="direct:inPojoAgeView"/> <marshal> <jacksonxml unmarshalTypeName="org.apache.camel.component.jacksonxml.TestPojoView" jsonView="org.apache.camel.component.jacksonxml.Views$Age"/> </marshal> 167.5. Setting serialization include option If you want to marshal a pojo to XML, and the pojo has some fields with null values that you want to skip, you can either set an annotation on the pojo: @JsonInclude(Include.NON_NULL) public class MyPojo { ... } But this requires you to include that annotation in your pojo source code. You can also configure the Camel JacksonXMLDataFormat to set the include option, as shown below: JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.setInclude("NON_NULL"); Or from XML DSL you configure this as <dataFormats> <jacksonxml id="jacksonxml" include="NON_NULL"/> </dataFormats> 167.6. Unmarshalling from XML to POJO with dynamic class name If you use Jackson to unmarshal XML to a POJO, you can specify a header in the message that indicates which class name to unmarshal to. The header key is CamelJacksonUnmarshalType ; if that header is present in the message, Jackson uses it as the FQN of the POJO class to unmarshal the XML payload to. For JMS users, the JMSType header from the JMS spec can indicate this as well. To enable support for JMSType you need to turn that on in the jackson data format as shown: JacksonDataFormat format = new JacksonDataFormat(); format.setAllowJmsType(true); Or from XML DSL you configure this as <dataFormats> <jacksonxml id="jacksonxml" allowJmsType="true"/> </dataFormats> 167.7. Unmarshalling from XML to List<Map> or List<pojo> If you are using Jackson to unmarshal XML to a list of map/pojo, you can now specify this by setting useList="true" or use the org.apache.camel.component.jacksonxml.ListJacksonXMLDataFormat .
For example with Java you can do as shown below: JacksonXMLDataFormat format = new ListJacksonXMLDataFormat(); // or JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.useList(); // and you can specify the pojo class type also format.setUnmarshalType(MyPojo.class); And if you use XML DSL then you configure to use list using useList attribute as shown below: <dataFormats> <jacksonxml id="jack" useList="true"/> </dataFormats> And you can specify the pojo type also <dataFormats> <jacksonxml id="jack" useList="true" unmarshalTypeName="com.foo.MyPojo"/> </dataFormats> 167.8. Using custom Jackson modules You can use custom Jackson modules by specifying their class names using the moduleClassNames option as shown below. <dataFormats> <jacksonxml id="jack" useList="true" unmarshalTypeName="com.foo.MyPojo" moduleClassNames="com.foo.MyModule,com.foo.MyOtherModule"/> </dataFormats> When using moduleClassNames , the custom Jackson modules are not configured, but created using the default constructor and used as-is. If a custom module needs any custom configuration, then an instance of the module can be created and configured, and then use moduleRefs to refer to the module as shown below: <bean id="myJacksonModule" class="com.foo.MyModule"> ... // configure the module as you want </bean> <dataFormats> <jacksonxml id="jacksonxml" useList="true" unmarshalTypeName="com.foo.MyPojo" moduleRefs="myJacksonModule"/> </dataFormats> Multiple modules can be specified separated by comma, such as moduleRefs="myJacksonModule,myOtherModule" 167.9. Enabling or disabling features using Jackson Jackson has a number of features you can enable or disable, which its ObjectMapper uses. For example to disable failing on unknown properties when unmarshalling, you can configure this using the disableFeatures option: <dataFormats> <jacksonxml id="jacksonxml" unmarshalTypeName="com.foo.MyPojo" disableFeatures="FAIL_ON_UNKNOWN_PROPERTIES"/> </dataFormats> You can disable multiple features by separating the values with a comma. The values for the features must be names of enums from the following Jackson enum classes: com.fasterxml.jackson.databind.SerializationFeature com.fasterxml.jackson.databind.DeserializationFeature com.fasterxml.jackson.databind.MapperFeature To enable a feature use the enableFeatures option instead. From Java code you can use the type safe methods from the camel-jackson module: JacksonDataFormat df = new JacksonDataFormat(MyPojo.class); df.disableFeature(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); df.disableFeature(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES); 167.10. Converting Maps to POJO using Jackson Jackson ObjectMapper can be used to convert maps to POJO objects. The Jackson component comes with a data converter that can be used to convert a java.util.Map instance to non-String, non-primitive and non-Number objects. Map<String, Object> invoiceData = new HashMap<String, Object>(); invoiceData.put("netValue", 500); producerTemplate.sendBody("direct:mapToInvoice", invoiceData); ... // Later in the processor Invoice invoice = exchange.getIn().getBody(Invoice.class); If there is a single ObjectMapper instance available in the Camel registry, it will be used by the converter to perform the conversion. Otherwise the default mapper will be used. 167.11.
Formatted XML marshalling (pretty-printing) Using the prettyPrint option one can output a well formatted XML while marshalling: <dataFormats> <jacksonxml id="jack" prettyPrint="true"/> </dataFormats> And in Java DSL: from("direct:inPretty").marshal().jacksonxml(true); Please note that there are 5 different overloaded jacksonxml() DSL methods which support the prettyPrint option in combination with other settings for unmarshalType , jsonView etc. 167.12. Dependencies To use Jackson XML in your camel routes you need to add the dependency on camel-jacksonxml which implements this data format. If you use maven you could just add the following to your pom.xml, substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jacksonxml</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>
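Section 167.3 above refers to marker classes and a @JsonView-annotated POJO without showing them. The following is a minimal sketch of what such classes might look like; the class names (Views, Age, Weight, TestPojoView) and the field values are illustrative assumptions rather than code taken from this guide.

import com.fasterxml.jackson.annotation.JsonView;

// Hypothetical marker classes used only to group fields into views.
class Views {
    static class Age {}
    static class Weight {}
}

// Hypothetical POJO: each field is assigned to a view through @JsonView.
class TestPojoView {
    @JsonView(Views.Age.class)
    private int age = 30;

    @JsonView(Views.Weight.class)
    private int weight = 70;

    public int getAge() { return age; }
    public int getWeight() { return weight; }
}

Marshalling an instance with marshal().jacksonxml(TestPojoView.class, Views.Age.class), as in section 167.4, would then serialize the fields that belong to the Age view and leave out the fields assigned only to the Weight view.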
[ "from(\"activemq:My.Queue\"). unmarshal().jacksonxml(). to(\"mqseries:Another.Queue\");", "<dataFormats> <!-- here we define a Xml data format with the id jack and that it should use the TestPojo as the class type when doing unmarshal. The unmarshalTypeName is optional, if not provided Camel will use a Map as the type --> <jacksonxml id=\"jack\" unmarshalTypeName=\"org.apache.camel.component.jacksonxml.TestPojo\"/> </dataFormats>", "<route> <from uri=\"direct:back\"/> <unmarshal ref=\"jack\"/> <to uri=\"mock:reverse\"/> </route>", "<pojo age=\"30\" weight=\"70\"/>", "JacksonXMLDataFormat ageViewFormat = new JacksonXMLDataFormat(TestPojoView.class, Views.Age.class); from(\"direct:inPojoAgeView\"). marshal(ageViewFormat);", "from(\"direct:inPojoAgeView\"). marshal().jacksonxml(TestPojoView.class, Views.Age.class);", "<from uri=\"direct:inPojoAgeView\"/> <marshal> <jacksonxml unmarshalTypeName=\"org.apache.camel.component.jacksonxml.TestPojoView\" jsonView=\"org.apache.camel.component.jacksonxml.ViewsUSDAge\"/> </marshal>", "@JsonInclude(Include.NON_NULL) public class MyPojo { }", "JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.setInclude(\"NON_NULL\");", "<dataFormats> <jacksonxml id=\"jacksonxml\" include=\"NON_NULL\"/> </dataFormats>", "For JMS end users there is the JMSType header from the JMS spec that indicates that also. To enable support for JMSType you would need to turn that on, on the jackson data format as shown:", "JacksonDataFormat format = new JacksonDataFormat(); format.setAllowJmsType(true);", "<dataFormats> <jacksonxml id=\"jacksonxml\" allowJmsType=\"true\"/> </dataFormats>", "JacksonXMLDataFormat format = new ListJacksonXMLDataFormat(); // or JacksonXMLDataFormat format = new JacksonXMLDataFormat(); format.useList(); // and you can specify the pojo class type also format.setUnmarshalType(MyPojo.class);", "<dataFormats> <jacksonxml id=\"jack\" useList=\"true\"/> </dataFormats>", "<dataFormats> <jacksonxml id=\"jack\" useList=\"true\" unmarshalTypeName=\"com.foo.MyPojo\"/> </dataFormats>", "<dataFormats> <jacksonxml id=\"jack\" useList=\"true\" unmarshalTypeName=\"com.foo.MyPojo\" moduleClassNames=\"com.foo.MyModule,com.foo.MyOtherModule\"/> </dataFormats>", "<bean id=\"myJacksonModule\" class=\"com.foo.MyModule\"> ... // configure the module as you want </bean> <dataFormats> <jacksonxml id=\"jacksonxml\" useList=\"true\" unmarshalTypeName=\"com.foo.MyPojo\" moduleRefs=\"myJacksonModule\"/> </dataFormats>", "<dataFormats> <jacksonxml id=\"jacksonxml\" unmarshalTypeName=\"com.foo.MyPojo\" disableFeatures=\"FAIL_ON_UNKNOWN_PROPERTIES\"/> </dataFormats>", "JacksonDataFormat df = new JacksonDataFormat(MyPojo.class); df.disableFeature(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES); df.disableFeature(DeserializationFeature.FAIL_ON_NULL_FOR_PRIMITIVES);", "Map<String, Object> invoiceData = new HashMap<String, Object>(); invoiceData.put(\"netValue\", 500); producerTemplate.sendBody(\"direct:mapToInvoice\", invoiceData); // Later in the processor Invoice invoice = exchange.getIn().getBody(Invoice.class);", "<dataFormats> <jacksonxml id=\"jack\" prettyPrint=\"true\"/> </dataFormats>", "from(\"direct:inPretty\").marshal().jacksonxml(true);", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jacksonxml</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jacksonxml-dataformat
Chapter 3. Policy controller advanced configuration
Chapter 3. Policy controller advanced configuration You can customize policy controller configurations on your managed clusters by using the ManagedClusterAddOn custom resources. The following ManagedClusterAddOns configure the policy framework, Kubernetes configuration policy controller, and the Certificate policy controller. Required access: Cluster administrator Configure the concurrency of the governance framework Configure the concurrency of the configuration policy controller Configure the rate of requests to the API server Configure debug log Governance metric Verify configuration changes 3.1. Configure the concurrency of the governance framework Configure the concurrency of the governance framework for each managed cluster. To change the default value of 2 , set the policy-evaluation-concurrency annotation with a nonzero integer within quotation marks. Then set the value on the ManagedClusterAddOn object name to governance-policy-framework in the managed cluster namespace of the hub cluster. See the following YAML example where the concurrency is set to 2 on the managed cluster named cluster1 : apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: policy-evaluation-concurrency: "2" spec: installNamespace: open-cluster-management-agent-addon To set the client-qps and client-burst annotations, update the ManagedClusterAddOn resource and define the parameters. See the following YAML example where the queries for each second is set to 30 and the burst is set to 45 on the managed cluster called cluster1 : apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: client-qps: "30" client-burst: "45" spec: installNamespace: open-cluster-management-agent-addon 3.2. Configure the concurrency of the configuration policy controller You can configure the concurrency of the configuration policy controller for each managed cluster to change how many configuration policies it can evaluate at the same time. To change the default value of 2 , set the policy-evaluation-concurrency annotation with a nonzero integer within quotation marks. Then set the value on the ManagedClusterAddOn object name to config-policy-controller in the managed cluster namespace of the hub cluster. Note: Increased concurrency values increase CPU and memory utilization on the config-policy-controller pod, Kubernetes API server, and OpenShift API server. See the following YAML example where the concurrency is set to 5 on the managed cluster named cluster1 : apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: policy-evaluation-concurrency: "5" spec: installNamespace: open-cluster-management-agent-addon 3.3. Configure the rate of requests to the API server Configure the rate of requests to the API server that the configuration policy controller makes on each managed cluster. An increased rate improves the responsiveness of the configuration policy controller, which also increases the CPU and memory utilization of the Kubernetes API server and OpenShift API server. By default, the rate of requests scales with the policy-evaluation-concurrency setting and is set to 30 queries for each second (QPS), with a 45 burst value, representing a higher number of requests over short periods of time. 
You can configure the rate and burst by setting the client-qps and client-burst annotations with nonzero integers within quotation marks. You can set the value on the ManagedClusterAddOn object name to config-policy-controller in the managed cluster namespace of the hub cluster. See the following YAML example where the queries for each second is set to 20 and the burst is set to 100 on the managed cluster called cluster1 : apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: client-qps: "20" client-burst: "100" spec: installNamespace: open-cluster-management-agent-addon 3.4. Configure debug log When you configure and collect debug logs for each policy controller, you can adjust the log level. Note: Reducing the volume of debug logs means there is less information displayed from the logs. You can reduce the debug logs emitted by the policy controllers so that only error messages are displayed in the logs. To reduce the debug logs, set the debug log value to -1 in the annotation. See what each value represents: -1 : error logs only 0 : informative logs 1 : debug logs 2 : verbose debugging logs To receive the second level of debugging information for the Kubernetes configuration controller, add the log-level annotation with the value of 2 to the ManagedClusterAddOn custom resource. By default, the log-level is set to 0 , which means you receive informative messages. View the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: log-level: "2" spec: installNamespace: open-cluster-management-agent-addon Additionally, for each spec.object-template[] in a ConfigurationPolicy resource, you can set the parameter recordDiff to Log . The difference between the objectDefinition and the object on the managed cluster is logged in the config-policy-controller pod on the managed cluster. View the following example of a ConfigurationPolicy resource with recordDiff: Log : apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: my-config-policy spec: object-templates: - complianceType: musthave recordDiff: Log objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: fieldToUpdate: "2" If the ConfigMap resource on the cluster lists fieldToUpdate: "1" , then the diff appears in the config-policy-controller pod logs with the following information: Important: Avoid logging the difference for a secure object. The difference is logged in plain text. 3.5. Governance metric The policy framework exposes metrics that show policy distribution and status. Use the policy_governance_info metric on the hub cluster to view trends and analyze any policy failures. See the following topics for an overview of metrics: 3.5.1. Metric: policy_governance_info The OpenShift Container Platform monitoring component collects the policy_governance_info metric. If you enable observability, the component collects some aggregate data. Note: If you enable observability, enter a query for the metric from the Grafana Explore page. When you create a policy, you are creating a root policy. The framework watches for root policies, Placement resources, and PlacementBinding resources for information about where to create propagated policies and to distribute the policy to managed clusters.
For both root and propagated policies, a metric of 0 is recorded if the policy is compliant, 1 if it is non-compliant, and -1 if it is in an unknown or pending state. The policy_governance_info metric uses the following labels: type : The label values are root or propagated . policy : The name of the associated root policy. policy_namespace : The namespace on the hub cluster where the root policy is defined. cluster_namespace : The namespace for the cluster where the policy is distributed. These labels and values enable queries that can show many things happening in the cluster that might otherwise be difficult to track. Note: If you do not need the metrics, and you have any concerns about performance or security, you can disable the metric collection. Set the DISABLE_REPORT_METRICS environment variable to true in the propagator deployment. You can also add the policy_governance_info metric to the observability allowlist as a custom metric. See Adding custom metrics for more details. 3.5.2. Metric: config_policies_evaluation_duration_seconds The config_policies_evaluation_duration_seconds histogram tracks the number of seconds it takes to process all configuration policies that are ready to be evaluated on the cluster. Use the following metrics to query the histogram: config_policies_evaluation_duration_seconds_bucket : The buckets are cumulative and represent seconds with the following possible entries: 1, 3, 9, 10.5, 15, 30, 60, 90, 120, 180, 300, 450, 600, and greater. config_policies_evaluation_duration_seconds_count : The count of all events. config_policies_evaluation_duration_seconds_sum : The sum of all values. Use the config_policies_evaluation_duration_seconds metric to determine if the ConfigurationPolicy evaluationInterval setting needs to be changed for resource-intensive policies that do not need frequent evaluation. You can also increase the concurrency at the cost of higher resource utilization on the Kubernetes API server. See the Configure the concurrency of the configuration policy controller section for more details. To receive information about the time used to evaluate configuration policies, perform a Prometheus query that resembles the following expression: rate(config_policies_evaluation_duration_seconds_sum[10m]) / rate(config_policies_evaluation_duration_seconds_count[10m]) The config-policy-controller pod running on managed clusters in the open-cluster-management-agent-addon namespace calculates the metric. The config-policy-controller does not send the metric to observability by default. 3.6. Verify configuration changes When you apply the new configuration with the controller, the ManifestApplied parameter is updated in the ManagedClusterAddOn . The condition timestamp helps you verify that the configuration was applied correctly. For example, this command can verify when the cert-policy-controller on the local-cluster was updated: You might receive the following output: 3.7. Additional resources See Kubernetes configuration policy controller Return to the Governance topic for more topics. Return to the beginning of this topic, Policy controller advanced configuration .
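As a starting point for working with the policy_governance_info metric described in section 3.5.1, the following Prometheus queries are one possible way to surface noncompliance. They assume only the labels and values documented above and can be adjusted to your environment.

# Hypothetical query: number of noncompliant propagated policies per managed cluster namespace
count by (cluster_namespace) (policy_governance_info{type="propagated"} == 1)

# Hypothetical query: noncompliant root policies grouped by policy name and namespace
count by (policy, policy_namespace) (policy_governance_info{type="root"} == 1)

Because a value of 1 marks a noncompliant policy, filtering on == 1 before aggregating counts only the noncompliant series.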
[ "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: policy-evaluation-concurrency: \"2\" spec: installNamespace: open-cluster-management-agent-addon", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: governance-policy-framework namespace: cluster1 annotations: client-qps: \"30\" client-burst: \"45\" spec: installNamespace: open-cluster-management-agent-addon", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: policy-evaluation-concurrency: \"5\" spec: installNamespace: open-cluster-management-agent-addon", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: client-qps: \"20\" client-burst: \"100\" spec: installNamespace: open-cluster-management-agent-addon", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: config-policy-controller namespace: cluster1 annotations: log-level: \"2\" spec: installNamespace: open-cluster-management-agent-addon", "apiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: my-config-policy spec: object-templates: - complianceType: musthave recordDiff: Log objectDefinition: apiVersion: v1 kind: ConfigMap metadata: name: my-configmap data: fieldToUpdate: \"2\"", "Logging the diff: --- default/my-configmap : existing +++ default/my-configmap : updated @@ -2,3 +2,3 @@ data: - fieldToUpdate: \"1\" + fieldToUpdate: \"2\" kind: ConfigMap", "get -n local-cluster managedclusteraddon cert-policy-controller | grep -B4 'type: ManifestApplied'", "- lastTransitionTime: \"2023-01-26T15:42:22Z\" message: manifests of addon are applied successfully reason: AddonManifestApplied status: \"True\" type: ManifestApplied" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/governance/policy-controller-advanced-config
4.6. Testing the New Replica
4.6. Testing the New Replica To check if replication works as expected after creating a replica: Create a user on one of the servers: Make sure the user is visible on the other server:
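The two checks rely on the ipa user-add and ipa user-show commands shown below. As an optional, hypothetical follow-up once replication is confirmed, the test account can be removed again from either server, and the deletion should likewise replicate:

[admin@server2 ~]$ ipa user-del test_user

This assumes the same test_user account created in the first step; removing it keeps the test from leaving stray entries behind.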
[ "[admin@server1 ~]USD ipa user-add test_user --first= Test --last= User", "[admin@server2 ~]USD ipa user-show test_user" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/replica-verify
Chapter 17. EphemeralStorage schema reference
Chapter 17. EphemeralStorage schema reference Used in: JbodStorage , KafkaClusterSpec , KafkaNodePoolSpec , ZookeeperClusterSpec The type property is a discriminator that distinguishes use of the EphemeralStorage type from PersistentClaimStorage . It must have the value ephemeral for the type EphemeralStorage . Property Description id Storage identification number. It is mandatory only for storage volumes defined in a storage of type 'jbod'. integer sizeLimit When type=ephemeral, defines the total amount of local storage required for this EmptyDir volume (for example 1Gi). string type Must be ephemeral . string
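For orientation, the following fragment sketches how these properties might appear inside a Kafka custom resource. It shows only the storage-related parts (other required fields of the Kafka resource are omitted), and the sizeLimit value is an illustrative assumption.

# Fragment of a Kafka custom resource; only the storage sections are shown
spec:
  kafka:
    storage:
      type: ephemeral
      sizeLimit: 2Gi     # optional cap on the backing emptyDir volume
  zookeeper:
    storage:
      type: ephemeral

Because ephemeral storage is backed by an emptyDir volume, its data does not survive pod restarts; inside a jbod storage array, each ephemeral volume must additionally set the id property.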
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-ephemeralstorage-reference
Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes
Deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS with hosted control planes Red Hat OpenShift Data Foundation 4.18 Instructions for deploying OpenShift Data Foundation using Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) Red Hat Storage Documentation Team Abstract Read this document for instructions about how to install Red Hat OpenShift Data Foundation using Red Hat OpenShift Container Platform on ROSA with hosted control planes (HCP). Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Service on AWS with hosted control planes. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process in Deploying using dynamic storage devices . Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Create a KMIP client if one does not exist. From the user interface, select KMIP -> Client Profile -> Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP -> Registration Token -> New Registration Token . Copy the token for the step. 
To register the client, navigate to KMIP -> Registered Clients -> Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings -> Interfaces -> Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the certificate authority (CA) to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys -> Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . Important In order perform stop and start node operations, or to create or add a machine pool, it is necessary to apply proper labeling. For example: Replace <cluster-name> with the cluster name and <machinepool-name> with the machine pool name. Chapter 2. Deploy OpenShift Data Foundation using dynamic storage devices You can deploy OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provided by Amazon Web Services (AWS) EBS (type, gp2-csi or gp3-csi ) that provides you with the option to create internal cluster resources. This results in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Although, it is possible to deploy only the Multicloud Object Gateway (MCG) component with OpenShift Data Foundation, this deployment method is not supported on ROSA. Note Only internal OpenShift Data Foundation clusters are supported on ROSA. See Planning your deployment for more information about deployment requirements. Also, ensure that you have addressed the requirements in Preparing to deploy OpenShift Data Foundation chapter before proceeding with the below steps for deploying using dynamic storage devices: Install the Red Hat OpenShift Data Foundation Operator . Create the OpenShift Data Foundation Cluster . 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub for ROSA with hosted control planes (HCP). Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. 
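The worker node prerequisite and the machine pool labeling requirement from the Important note above can also be checked and applied from the command line. The following is a minimal sketch only; the cluster name my-rosa-cluster and the machine pool name workers are placeholder values, not values taken from this guide:
# List the machine pools and confirm that at least three worker nodes are available
rosa list machinepools --cluster my-rosa-cluster
oc get nodes -l node-role.kubernetes.io/worker
# Apply the openshift-storage label to the machine pool, as described in the Important note above
rosa edit machinepool --cluster my-rosa-cluster --labels cluster.ocs.openshift.io/openshift-storage="" workers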
Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the storage namespace: Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators -> OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Fill in role ARN . For instruction to create a Amazon resource name (ARN), see Creating an AWS role using a script . Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Select a Namespace . Note openshift-storage Namespace is not recommended for ROSA deployments. Use a user defined namespace for this deployment. Avoid using "redhat" or "openshift" prefixes in namespaces. Important This guide uses <storage_namespace> as an example namespace. Replace <storage_namespace> with your defined namespace in later steps. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Manual updates strategy is recommended for ROSA with hosted control planes. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Enabling cluster-wide encryption with KMS using the Token authentication method You can enable the key value backend path and policy in the vault for token authentication. Prerequisites Administrator access to the vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Carefully, select a unique path name as the backend path that follows the naming convention since you cannot change it later. Procedure Enable the Key/Value (KV) backend path in the vault. For vault KV secret engine API, version 1: For vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Create a token that matches the above policy: 2.3. Enabling cluster-wide encryption with KMS using the Kubernetes authentication method You can enable the Kubernetes authentication method for cluster-wide encryption using the Key Management System (KMS). Prerequisites Administrator access to Vault. A valid Red Hat OpenShift Data Foundation Advanced subscription. 
For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . The OpenShift Data Foundation operator must be installed from the Operator Hub. Select a unique path name as the backend path that follows the naming convention carefully. You cannot change this path name later. Procedure Create a service account: where, <serviceaccount_name> specifies the name of the service account. For example: Create clusterrolebindings and clusterroles : For example: Create a secret for the serviceaccount token and CA certificate. where, <serviceaccount_name> is the service account created in the earlier step, and <storage_namespace> is the namespace where ODF operator and StorageSystem were created. Get the token and the CA certificate from the secret. Retrieve the OCP cluster endpoint. Fetch the service account issuer: Use the information collected in the step to setup the Kubernetes authentication method in Vault: Important To configure the Kubernetes authentication method in Vault when the issuer is empty: Enable the Key/Value (KV) backend path in Vault. For Vault KV secret engine API, version 1: For Vault KV secret engine API, version 2: Create a policy to restrict the users to perform a write or delete operation on the secret: Generate the roles: The role odf-rook-ceph-op is later used while you configure the KMS connection details during the creation of the storage system. 2.4. Creating OpenShift Data Foundation cluster Create an OpenShift Data Foundation cluster after you install the OpenShift Data Foundation operator. Procedure In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click on the OpenShift Data Foundation operator, and then click Create StorageSystem . In the Backing storage page, select the following: Select Full Deployment for the Deployment type option. Select the Use an existing StorageClass option. Optional: Select Use external PostgreSQL checkbox to use an external PostgreSQL [Technology preview] . This provides high availability solution for Multicloud Object Gateway where the PostgreSQL pod is a single point of failure. Provide the following connection details: Username Password Server name and Port Database name Select Enable TLS/SSL checkbox to enable encryption for the Postgres server. Click . In the Capacity and nodes page, provide the necessary information: Select a value for Requested Capacity from the dropdown list. It is set to 2 TiB by default. Note Once you select the initial storage capacity, cluster expansion is performed only using the selected usable capacity (three times of raw storage). In the Select Nodes section, select at least three available nodes. In the Configure performance section, select one of the following performance profiles: Lean Use this in a resource constrained environment with minimum resources that are lower than the recommended. This profile minimizes resource consumption by allocating fewer CPUs and less memory. Balanced (default) Use this when recommended resources are available. This profile provides a balance between resource consumption and performance for diverse workloads. Performance Use this in an environment with sufficient resources to get the best performance. This profile is tailored for high performance by allocating ample memory and CPUs to ensure optimal execution of demanding workloads. 
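Because the choice of performance profile depends on what is actually free on the selected nodes, it can help to review current node usage and allocatable capacity before selecting a profile. A minimal sketch, assuming the cluster metrics API is available for oc adm top:
# Show current CPU and memory consumption per node
oc adm top nodes
# Show allocatable CPU and memory for the worker nodes
oc get nodes -l node-role.kubernetes.io/worker -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory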
Note You have the option to configure the performance profile even after the deployment using the Configure performance option from the options menu of the StorageSystems tab. Important Before selecting a resource profile, make sure to check the current availability of resources within the cluster. Opting for a higher resource profile in a cluster with insufficient resources might lead to installation failures. For more information about resource requirements, see Resource requirement for performance profiles . Optional: Select the Taint nodes checkbox to dedicate the selected nodes for OpenShift Data Foundation. Click . Optional: In the Security and network page, configure the following based on your requirements: To enable encryption, select Enable data encryption for block and file storage . Select either one or both the encryption levels: Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, select one of the following providers and provide the necessary details: Vault Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save . Note In case you need to enable key rotation for Vault KMS, run the following command in the OpenShift web console after the storage cluster is created: Thales CipherTrust Manager (using KMIP) Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . To enable in-transit encryption, select In-transit encryption . Select a Network . Click . In the Review and create page, review the configuration details. To modify any configuration settings, click Back . Click Create StorageSystem . 
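In addition to the web console verification described below, the progress of the storage cluster creation can be followed from the command line. A minimal sketch, assuming the <storage_namespace> namespace used during the operator installation:
# Watch the StorageCluster resource; its status should eventually report Ready
oc get storagecluster -n <storage_namespace> -w
# Confirm that the operator and storage cluster pods reach Running or Completed state
oc get pods -n <storage_namespace>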
Note When your deployment has five or more nodes, racks, or rooms, and when there are five or more number of failure domains present in the deployment, you can configure Ceph monitor counts based on the number of racks or zones. An alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate the option to increase the number of Ceph monitor counts. You can use the Configure option in the alert to configure the Ceph monitor counts. For more information, see Resolving low Ceph monitor count alert . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators -> OpenShift Data Foundation -> Storage System -> ocs-storagecluster-storagesystem -> Resources . Verify that Status of StorageCluster is Ready and has a green tick mark to it. Additional resources To enable Overprovision Control alerts, refer to Alerts in Monitoring guide. 2.5. Verifying OpenShift Data Foundation deployment To verify that OpenShift Data Foundation is deployed correctly: Verify the state of the pods . Verify that the OpenShift Data Foundation cluster is healthy . Verify that the OpenShift Data Foundation specific storage classes exist . 2.5.1. Verifying the state of the pods Procedure Click Workloads -> Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. For more information on the expected number of pods for each component and how it varies depending on the number of nodes, see the following table: Set filter for Running and Completed pods to verify that the following pods are in Running and Completed state: 2.5.2. Verifying the OpenShift Data Foundation cluster is healthy Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. For more information on the health of the OpenShift Data Foundation cluster using the Block and File dashboard, see Monitoring OpenShift Data Foundation . 2.5.3. Verifying that the specific storage classes exist Procedure Click Storage -> Storage Classes from the left pane of the OpenShift Web Console. Verify that the following storage classes are created with the OpenShift Data Foundation cluster creation: Chapter 3. View OpenShift Data Foundation Topology The topology shows the mapped visualization of the OpenShift Data Foundation storage cluster at various abstraction levels and also lets you to interact with these layers. The view also shows how the various elements compose the Storage cluster altogether. Procedure On the OpenShift Web Console, navigate to Storage -> Data Foundation -> Topology . The view shows the storage cluster and the zones inside it. You can see the nodes depicted by circular entities within the zones, which are indicated by dotted lines. The label of each item or resource contains basic information such as status and health or indication for alerts. Choose a node to view node details on the right-hand panel. You can also access resources or deployments within a node by clicking on the search/preview decorator icon. To view deployment details Click the preview decorator on a node. 
A modal window appears above the node that displays all of the deployments associated with that node along with their statuses. Click the Back to main view button in the modal's upper left corner to close it and return to the main view. Select a specific deployment to see more information about it. All relevant data is shown in the side panel. Click the Resources tab to view the pods information. This tab provides a deeper understanding of the problems and offers granularity that aids in better troubleshooting. Click the pod links to view the pod information page on OpenShift Container Platform. The link opens in a new window. Chapter 4. Uninstalling OpenShift Data Foundation 4.1. Uninstalling OpenShift Data Foundation in Internal mode To uninstall OpenShift Data Foundation in Internal mode, refer to the knowledgebase article on Uninstalling OpenShift Data Foundation .
[ "rosa edit machinepool --cluster <cluster-name> --labels cluster.ocs.openshift.io/openshift-storage=\"\" <machinepool-name>", "oc annotate namespace storage-namespace openshift.io/node-selector=", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault token create -policy=odf -format json", "oc -n <storage-namespace> create serviceaccount <serviceaccount_name>", "oc -n <storage-namespace> create serviceaccount odf-vault-auth", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:_<serviceaccount_name>_", "oc -n <storage-namespace> create clusterrolebinding vault-tokenreview-binding --clusterrole=system:auth-delegator --serviceaccount=openshift-storage:odf-vault-auth", "cat <<EOF | oc create -f - apiVersion: v1 kind: Secret metadata: name: odf-vault-auth-token namespace: <storage-namespace> annotations: kubernetes.io/service-account.name: <serviceaccount_name> type: kubernetes.io/service-account-token data: {} EOF", "SA_JWT_TOKEN=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['token']}\" | base64 --decode; echo) SA_CA_CRT=USD(oc -n <storage_namespace> get secret odf-vault-auth-token -o jsonpath=\"{.data['ca\\.crt']}\" | base64 --decode; echo)", "OCP_HOST=USD(oc config view --minify --flatten -o jsonpath=\"{.clusters[0].cluster.server}\")", "oc proxy & proxy_pid=USD! issuer=\"USD( curl --silent http://127.0.0.1:8001/.well-known/openid-configuration | jq -r .issuer)\" kill USDproxy_pid", "vault auth enable kubernetes", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\" issuer=\"USDissuer\"", "vault write auth/kubernetes/config token_reviewer_jwt=\"USDSA_JWT_TOKEN\" kubernetes_host=\"USDOCP_HOST\" kubernetes_ca_cert=\"USDSA_CA_CRT\"", "vault secrets enable -path=odf kv", "vault secrets enable -path=odf kv-v2", "echo ' path \"odf/*\" { capabilities = [\"create\", \"read\", \"update\", \"delete\", \"list\"] } path \"sys/mounts\" { capabilities = [\"read\"] }'| vault policy write odf -", "vault write auth/kubernetes/role/odf-rook-ceph-op bound_service_account_names=rook-ceph-system,rook-ceph-osd,noobaa bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h", "vault write auth/kubernetes/role/odf-rook-ceph-osd bound_service_account_names=rook-ceph-osd bound_service_account_namespaces=<storage_namespace> policies=odf ttl=1440h", "patch storagecluster ocs-storagecluster -n openshift-storage --type=json -p '[{\"op\": \"add\", \"path\":\"/spec/encryption/keyRotation/enable\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/index
Chapter 1. Preparing to install on Alibaba Cloud
Chapter 1. Preparing to install on Alibaba Cloud Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Requirements for installing OpenShift Container Platform on Alibaba Cloud Before installing OpenShift Container Platform on Alibaba Cloud, you must configure and register your domain, create a Resource Access Management (RAM) user for the installation, and review the supported Alibaba Cloud data center regions and zones for the installation. 1.3. Registering and Configuring Alibaba Cloud Domain To install OpenShift Container Platform, the Alibaba Cloud account you use must have a dedicated public hosted zone in your account. This zone must be authoritative for the domain. This service provides cluster DNS resolution and name lookup for external connections to the cluster. Procedure Identify your domain, or subdomain, and registrar. You can transfer an existing domain and registrar or obtain a new one through Alibaba Cloud or another source. Note If you purchase a new domain through Alibaba Cloud, it takes time for the relevant DNS changes to propagate. For more information about purchasing domains through Alibaba Cloud, see Alibaba Cloud domains . If you are using an existing domain and registrar, migrate its DNS to Alibaba Cloud. See Domain name transfer in the Alibaba Cloud documentation. Configure DNS for your domain. This includes: Registering a generic domain name . Completing real-name verification for your domain name . Applying for an Internet Content Provider (ICP) filing . Enabling domain name resolution . Use an appropriate root domain, such as openshiftcorp.com , or subdomain, such as clusters.openshiftcorp.com . If you are using a subdomain, follow the procedures of your company to add its delegation records to the parent domain. 1.4. Supported Alibaba regions You can deploy an OpenShift Container Platform cluster to the regions listed in the Alibaba Regions and zones documentation . 1.5. steps Create the required Alibaba Cloud resources .
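Once the delegation records for the subdomain have been added, name resolution can be checked before continuing. A minimal sketch using the example subdomain from this section:
# Confirm that the subdomain delegation resolves to the expected name servers
dig ns clusters.openshiftcorp.com +short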
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_alibaba/preparing-to-install-on-alibaba
Chapter 20. Managing Kerberos Flags and Principal Aliases
Chapter 20. Managing Kerberos Flags and Principal Aliases 20.1. Kerberos Flags for Services and Hosts You can use various Kerberos flags to define certain specific aspects of the Kerberos ticket behavior. You can add these flags to service and host Kerberos principals. Principals in Identity Management (IdM) accept the following Kerberos flags: OK_AS_DELEGATE Use this flag to specify Kerberos tickets trusted for delegation. Active directory (AD) clients check the OK_AS_DELEGATE flag on the Kerberos ticket to determine whether the user credentials can be forwarded or delegated to the specific server. AD forwards the ticket-granting ticket (TGT) only to services or hosts with OK_AS_DELEGATE set. With this flag, system security services daemon (SSSD) can add the AD user TGT to the default Kerberos credentials cache on the IdM client machine. REQUIRES_PRE_AUTH Use this flag to specify that only pre-authenticated tickets are allowed to authenticate to the principal. With the REQUIRES_PRE_AUTH flag set, the key distribution center (KDC) requires additional authentication: the KDC issues the TGT for a principal with REQUIRES_PRE_AUTH only if the TGT has been pre-authenticated. You can clear REQUIRES_PRE_AUTH to disable pre-authentication for selected services or hosts, which lowers the load on the KDC but also slightly increases the possibility of a brute-force attack on a long-term key to succeed. OK_TO_AUTH_AS_DELEGATE Use the OK_TO_AUTH_AS_DELEGATE flag to specify that the service is allowed to obtain a kerberos ticket on behalf of the user. Note, that while this is enough to perform protocol transition, in order to obtain other tickets on behalf of the user, the service needs the OK_AS_DELEGATE flag and a corresponding policy decision allowed on the key distribution center side. 20.1.1. Setting Kerberos Flags from the Web UI To add OK_AS_DELEGATE , REQUIRES_PRE_AUTH , or OK_TO_AUTH_AS_DELEGATE to a principal: Select the Services subtab, accessible through the Identity main tab. Figure 20.1. List of Services Click on the service to which you want to add the flags. Check the option that you want to set. For example, to set the REQUIRES_PRE_AUTH flag, check the Requires pre-authentication option: Figure 20.2. Adding the REQUIRES_PRE_AUTH flag The following table lists the names of the Kerberos flags and the corresponding name in the Web UI: Table 20.1. Kerberos flags' mapping in WebUI Kerberos flag name Web UI option OK_AS_DELEGATE Trusted for delegation REQUIRES_PRE_AUTH Requires pre-authentication OK_TO_AUTH_AS_DELEGATE Trusted to authenticate as user 20.1.2. Setting and Removing Kerberos Flags from the Command Line To add a flag to a principal from the command line or to remove a flag, add one of the following options to the ipa service-mod command: --ok-as-delegate for OK_AS_DELEGATE --requires-pre-auth for REQUIRES_PRE_AUTH --ok-to-auth-as-delegate for OK_TO_AUTH_AS_DELEGATE To add a flag, set the corresponding option to 1 . For example, to add the OK_AS_DELEGATE flag to the service/[email protected] principal: To remove a flag or to disable it, set the corresponding option to 0 . For example, to disable the REQUIRES_PRE_AUTH flag for the test/[email protected] principal: 20.1.3. Displaying Kerberos Flags from the Command Line To find out if OK_AS_DELEGATE is currently set for a principal: Run the kvno utility. Run the klist -f command. OK_AS_DELEGATE is represented by the O character in the klist -f output: Table 20.2. 
Abbreviations for Kerberos flags Kerberos flag name Abbreviation OK_AS_DELEGATE O REQUIRES_PRE_AUTH A OK_TO_AUTH_AS_DELEGATE F To find out what flags are currently set for a principal, use the kadmin.local utility. The current flags are displayed on the Attributes line of kadmin.local output, for example:
[ "ipa service-mod service/[email protected] --ok-as-delegate= 1", "ipa service-mod test/[email protected] --requires-pre-auth= 0", "kvno test/[email protected] klist -f Ticket cache: KEYRING:persistent:0:0 Default principal: [email protected] Valid starting Expires Service principal 02/19/2014 09:59:02 02/20/2014 08:21:33 test/ipa/[email protected] Flags: FAT O", "kadmin.local kadmin.local: getprinc test/ipa.example.com Principal: test/[email protected] Expiration date: [never] Attributes: REQUIRES_PRE_AUTH OK_AS_DELEGATE OK_TO_AUTH_AS_DELEGATE Policy: [none]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/kerberos-for-entries
Chapter 6. ControllerRevision [apps/v1]
Chapter 6. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object Required revision 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data RawExtension Data is the serialized representation of the state. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision integer Revision indicates the revision of the state represented by Data. 6.2. API endpoints The following API endpoints are available: /apis/apps/v1/controllerrevisions GET : list or watch objects of kind ControllerRevision /apis/apps/v1/watch/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions DELETE : delete collection of ControllerRevision GET : list or watch objects of kind ControllerRevision POST : create a ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} DELETE : delete a ControllerRevision GET : read the specified ControllerRevision PATCH : partially update the specified ControllerRevision PUT : replace the specified ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} GET : watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 6.2.1. /apis/apps/v1/controllerrevisions Table 6.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. 
continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.2. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty 6.2.2. /apis/apps/v1/watch/controllerrevisions Table 6.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/apps/v1/namespaces/{namespace}/controllerrevisions Table 6.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ControllerRevision Table 6.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 6.8. Body parameters Parameter Type Description body DeleteOptions schema Table 6.9. 
HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ControllerRevision Table 6.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. 
See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 6.11. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerRevision Table 6.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.13. Body parameters Parameter Type Description body ControllerRevision schema Table 6.14. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 202 - Accepted ControllerRevision schema 401 - Unauthorized Empty 6.2.4. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions Table 6.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 6.16. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 6.17. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} Table 6.18. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 6.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ControllerRevision Table 6.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be a non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. Zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 6.21. Body parameters Parameter Type Description body DeleteOptions schema Table 6.22. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerRevision Table 6.23. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerRevision Table 6.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint.
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 6.25. Body parameters Parameter Type Description body Patch schema Table 6.26. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerRevision Table 6.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint. fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present.
The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.28. Body parameters Parameter Type Description body ControllerRevision schema Table 6.29. HTTP responses HTTP code Response body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty 6.2.6. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} Table 6.30. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 6.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent from the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the next set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests.
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.32. HTTP responses HTTP code Response body 200 - OK WatchEvent schema 401 - Unauthorized Empty
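The operations described above are plain HTTPS calls against the paths shown in the section headings. The following Java sketch is an illustration only, not part of the generated reference: it lists ControllerRevisions in a namespace using the standard java.net.http client and the limit query parameter from the tables above. The API_SERVER, NAMESPACE, and TOKEN environment variables are assumed placeholders, and a cluster with a self-signed certificate authority would additionally need a custom SSLContext.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ControllerRevisionListExample {
    public static void main(String[] args) throws Exception {
        // Assumed placeholders: API server URL, namespace, and a bearer token with list permissions.
        String apiServer = System.getenv("API_SERVER");   // for example https://api.cluster.example.com:6443
        String namespace = System.getenv("NAMESPACE");
        String token = System.getenv("TOKEN");

        // GET /apis/apps/v1/namespaces/{namespace}/controllerrevisions with the limit parameter.
        URI uri = URI.create(apiServer + "/apis/apps/v1/namespaces/" + namespace + "/controllerrevisions?limit=50");

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + token)
                .header("Accept", "application/json")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // 200 returns a ControllerRevisionList; 401 indicates a missing or invalid token.
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}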
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/metadata_apis/controllerrevision-apps-v1
Configuring firewalls and packet filters
Configuring firewalls and packet filters Red Hat Enterprise Linux 9 Managing the firewalld service, the nftables framework, and XDP packet filtering features Red Hat Customer Content Services
[ "<?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>My Zone</short> <description>Here you can describe the characteristic features of the zone.</description> <service name=\"ssh\"/> <port protocol=\"udp\" port=\"1025-65535\"/> <port protocol=\"tcp\" port=\"1025-65535\"/> </zone>", "firewall-cmd --get-zones", "firewall-cmd --add-service=ssh --zone= <your_chosen_zone> firewall-cmd --remove-service=ftp --zone= <same_chosen_zone>", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <your_chosen_zone> --change-interface=< interface_name > --permanent", "firewall-cmd --zone= <your_chosen_zone> --list-all", "firewall-cmd --get-default-zone", "firewall-cmd --set-default-zone <zone_name >", "firewall-cmd --get-active-zones", "firewall-cmd --zone= zone_name --change-interface= interface_name --permanent", "nmcli connection modify profile connection.zone zone_name", "nmcli connection up profile", "nmcli -f NAME,FILENAME connection NAME FILENAME enp1s0 /etc/NetworkManager/system-connections/enp1s0.nmconnection enp7s0 /etc/sysconfig/network-scripts/ifcfg-enp7s0", "[connection] zone=internal", "ZONE=internal", "nmcli connection reload", "nmcli connection up <profile_name>", "firewall-cmd --get-zone-of-interface enp1s0 internal", "firewall-cmd --permanent --new-zone= zone-name", "firewall-cmd --reload", "firewall-cmd --get-zones --permanent", "firewall-cmd --zone= zone-name --list-all", "firewall-cmd --permanent --zone=zone-name --set-target=<default|ACCEPT|REJECT|DROP>", "firewall-cmd --list-services ssh dhcpv6-client", "firewall-cmd --get-services RH-Satellite-6 amanda-client amanda-k5-client bacula bacula-client bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine condor-collector ctdb dhcp dhcpv6 dhcpv6-client dns docker-registry", "firewall-cmd --add-service= <service_name>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --list-all --permanent public target: default icmp-block-inversion: no interfaces: sources: services: cockpit dhcpv6-client ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks: rich rules:", "firewall-cmd --check-config success", "firewall-cmd --check-config Error: INVALID_PROTOCOL: 'public.xml': 'tcpx' not from {'tcp'|'udp'|'sctp'|'dccp'}", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_name> --add-service=https --permanent", "firewall-cmd --reload", "firewall-cmd --zone= <zone_name> --list-all", "firewall-cmd --zone= <zone_name> --list-services", "firewall-cmd --list-ports", "firewall-cmd --remove-port=port-number/port-type", "firewall-cmd --runtime-to-permanent", "firewall-cmd --get-active-zones", "firewall-cmd --zone= <zone_to_inspect> --list-ports", "firewall-cmd --panic-on", "firewall-cmd --panic-off", "firewall-cmd --query-panic", "firewall-cmd --add-source=<source>", "firewall-cmd --zone=zone-name --add-source=<source>", "firewall-cmd --get-zones", "firewall-cmd --zone=trusted --add-source=192.168.2.15", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --list-sources", "firewall-cmd --zone=zone-name --remove-source=<source>", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=zone-name --remove-source-port=<port-name>/<tcp|udp|sctp|dccp>", "firewall-cmd --get-zones block dmz drop external home internal public trusted work", "firewall-cmd --zone=internal --add-source=192.0.2.0/24", "firewall-cmd --zone=internal --add-service=http", "firewall-cmd --runtime-to-permanent", "firewall-cmd --zone=internal --list-all internal (active) target: default 
icmp-block-inversion: no interfaces: sources: 192.0.2.0/24 services: cockpit dhcpv6-client mdns samba-client ssh http", "firewall-cmd --permanent --new-policy myOutputPolicy firewall-cmd --permanent --policy myOutputPolicy --add-ingress-zone HOST firewall-cmd --permanent --policy myOutputPolicy --add-egress-zone ANY", "firewall-cmd --permanent --policy mypolicy --set-priority -500", "firewall-cmd --permanent --new-policy podmanToAny", "firewall-cmd --permanent --policy podmanToAny --set-target REJECT firewall-cmd --permanent --policy podmanToAny --add-service dhcp firewall-cmd --permanent --policy podmanToAny --add-service dns firewall-cmd --permanent --policy podmanToAny --add-service https", "firewall-cmd --permanent --new-zone=podman", "firewall-cmd --permanent --policy podmanToHost --add-ingress-zone podman", "firewall-cmd --permanent --policy podmanToHost --add-egress-zone ANY", "systemctl restart firewalld", "firewall-cmd --info-policy podmanToAny podmanToAny (active) target: REJECT ingress-zones: podman egress-zones: ANY services: dhcp dns https", "firewall-cmd --permanent --policy mypolicy --set-target CONTINUE", "firewall-cmd --info-policy mypolicy", "firewall-cmd --permanent --new-policy <example_policy>", "firewall-cmd --permanent --policy= <example_policy> --add-ingress-zone=HOST firewall-cmd --permanent --policy= <example_policy> --add-egress-zone=ANY", "firewall-cmd --permanent --policy= <example_policy> --add-rich-rule='rule family=\"ipv4\" destination address=\" 192.0.2.1 \" forward-port port=\" 443 \" protocol=\"tcp\" to-port=\" 443 \" to-addr=\" 192.51.100.20 \"'", "firewall-cmd --reload success", "echo \"net.ipv4.conf.all.route_localnet=1\" > /etc/sysctl.d/90-enable-route-localnet.conf", "sysctl -p /etc/sysctl.d/90-enable-route-localnet.conf", "curl https://192.0.2.1:443", "sysctl net.ipv4.conf.all.route_localnet net.ipv4.conf.all.route_localnet = 1", "firewall-cmd --info-policy= <example_policy> example_policy (active) priority: -1 target: CONTINUE ingress-zones: HOST egress-zones: ANY services: ports: protocols: masquerade: no forward-ports: source-ports: icmp-blocks: rich rules: rule family=\"ipv4\" destination address=\"192.0.2.1\" forward-port port=\"443\" protocol=\"tcp\" to-port=\"443\" to-addr=\"192.51.100.20\"", "firewall-cmd --zone= external --query-masquerade", "firewall-cmd --zone= external --add-masquerade", "firewall-cmd --zone= external --remove-masquerade", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port=80:proto=tcp:toaddr=198.51.100.10:toport=8080 --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports --zone=public port=80:proto=tcp:toport=8080:toaddr=198.51.100.10", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. 
Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"80\" protocol=\"tcp\" to-port=\"8080\" to-addr=\"198.51.100.10\"/> <forward/> </zone>", "net.ipv4.ip_forward=1", "sysctl -p /etc/sysctl.d/90-enable-IP-forwarding.conf", "firewall-cmd --zone=public --add-forward-port=port= <standard_port> :proto=tcp:toport= <non_standard_port> --permanent", "firewall-cmd --reload", "firewall-cmd --list-forward-ports port=8080:proto=tcp:toport=80:toaddr=", "cat /etc/firewalld/zones/public.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <zone> <short>Public</short> <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description> <service name=\"ssh\"/> <service name=\"dhcpv6-client\"/> <service name=\"cockpit\"/> <forward-port port=\"8080\" protocol=\"tcp\" to-port=\"80\"/> <forward/> </zone>", "firewall-cmd --get-icmptypes address-unreachable bad-header beyond-scope communication-prohibited destination-unreachable echo-reply echo-request failed-policy fragmentation-needed host-precedence-violation host-prohibited host-redirect host-unknown host-unreachable", "firewall-cmd --zone= <target-zone> --remove-icmp-block= echo-request --permanent", "firewall-cmd --zone= <target-zone> --add-icmp-block= redirect --permanent", "firewall-cmd --reload", "firewall-cmd --list-icmp-blocks redirect", "firewall-cmd --permanent --new-ipset= allowlist --type=hash:ip", "firewall-cmd --permanent --ipset= allowlist --add-entry= 198.51.100.10", "firewall-cmd --permanent --zone=public --add-source=ipset: allowlist", "firewall-cmd --reload", "firewall-cmd --get-ipsets allowlist", "firewall-cmd --list-all public (active) target: default icmp-block-inversion: no interfaces: enp0s1 sources: ipset:allowlist services: cockpit dhcpv6-client ssh ports: protocols:", "cat /etc/firewalld/ipsets/allowlist.xml <?xml version=\"1.0\" encoding=\"utf-8\"?> <ipset type=\"hash:ip\"> <entry>198.51.100.10</entry> </ipset>", "firewall-cmd --add-rich-rule='rule priority=32767 log prefix=\"UNEXPECTED: \" limit value=\"5/m\"'", "nft list chain inet firewalld filter_IN_public_post table inet firewalld { chain filter_IN_public_post { log prefix \"UNEXPECTED: \" limit rate 5/minute } }", "firewall-cmd --query-lockdown", "firewall-cmd --lockdown-on", "firewall-cmd --lockdown-off", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/bin/python3 -s /usr/bin/firewall-config\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <selinux context=\"system_u:system_r:virtd_t:s0-s0:c0.c1023\"/> <user id=\"0\"/> </whitelist>", "<?xml version=\"1.0\" encoding=\"utf-8\"?> <whitelist> <command name=\"/usr/libexec/platform-python -s /bin/firewall-cmd*\"/> <selinux context=\"system_u:system_r:NetworkManager_t:s0\"/> <user id=\"815\"/> <user name=\"user\"/> </whitelist>", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "firewall-cmd --get-active-zones", "firewall-cmd --zone=internal --change-interface= interface_name --permanent", "firewall-cmd --zone=internal --add-interface=enp1s0 --add-interface=wlp0s20", "firewall-cmd --zone=internal --add-forward", "ncat -e /usr/bin/cat -l 12345", "ncat <other_host> 12345", "--- - name: Reset firewalld example hosts: managed-node-01.example.com tasks: - name: Reset firewalld ansible.builtin.include_role: 
name: rhel-system-roles.firewall vars: firewall: - previous: replaced", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-all-zones'", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Forward incoming traffic on port 8080 to 443 ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - forward_port: 8080/tcp;443; state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --list-forward-ports' managed-node-01.example.com | CHANGED | rc=0 >> port=8080:proto=tcp:toport=443:toaddr=", "--- - name: Configure firewalld hosts: managed-node-01.example.com tasks: - name: Creating a DMZ with access to HTTPS port and masquerading for hosts in DMZ ansible.builtin.include_role: name: rhel-system-roles.firewall vars: firewall: - zone: dmz interface: enp1s0 service: https state: enabled runtime: true permanent: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m ansible.builtin.command -a 'firewall-cmd --zone=dmz --list-all' managed-node-01.example.com | CHANGED | rc=0 >> dmz (active) target: default icmp-block-inversion: no interfaces: enp1s0 sources: services: https ssh ports: protocols: forward: no masquerade: no forward-ports: source-ports: icmp-blocks:", "table <table_address_family> <table_name> { }", "nft add table <table_address_family> <table_name>", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> policy <policy> ; } }", "nft add chain <table_address_family> <table_name> <chain_name> { type <type> hook <hook> priority <priority> \\; policy <policy> \\; }", "table <table_address_family> <table_name> { chain <chain_name> { type <type> hook <hook> priority <priority> ; policy <policy> ; <rule> } }", "nft add rule <table_address_family> <table_name> <chain_name> <rule>", "nft add table inet nftables_svc", "nft add chain inet nftables_svc INPUT { type filter hook input priority filter \\; policy accept \\; }", "nft add rule inet nftables_svc INPUT tcp dport 22 accept nft add rule inet nftables_svc INPUT tcp dport 443 accept nft add rule inet nftables_svc INPUT reject with icmpx type port-unreachable", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft insert rule inet nftables_svc INPUT position 3 tcp dport 636 accept", "nft add rule inet nftables_svc INPUT position 3 tcp dport 80 accept", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 tcp dport 80 accept # handle 6 reject # handle 4 } }", "nft delete rule inet nftables_svc INPUT handle 6", "nft -a list table inet nftables_svc table inet nftables_svc { # handle 13 chain INPUT { # handle 1 type filter hook input priority filter ; policy accept ; tcp dport 22 accept # handle 2 tcp dport 636 accept # handle 5 tcp dport 443 accept # handle 3 reject # handle 4 } }", "nft flush chain inet 
nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { chain INPUT { type filter hook input priority filter; policy accept } }", "nft delete chain inet nftables_svc INPUT", "nft list table inet nftables_svc table inet nftables_svc { }", "nft delete table inet nftables_svc", "iptables-save >/root/iptables.dump ip6tables-save >/root/ip6tables.dump", "iptables-restore-translate -f /root/iptables.dump > /etc/nftables/ruleset-migrated-from-iptables.nft ip6tables-restore-translate -f /root/ip6tables.dump > /etc/nftables/ruleset-migrated-from-ip6tables.nft", "include \"/etc/nftables/ruleset-migrated-from-iptables.nft\" include \"/etc/nftables/ruleset-migrated-from-ip6tables.nft\"", "systemctl disable --now iptables", "systemctl enable --now nftables", "nft list ruleset", "iptables-translate -A INPUT -s 192.0.2.0/24 -j ACCEPT nft add rule ip filter INPUT ip saddr 192.0.2.0/24 counter accept", "iptables-translate -A INPUT -j CHECKSUM --checksum-fill nft # -A INPUT -j CHECKSUM --checksum-fill", "nft list table inet firewalld nft list table ip firewalld nft list table ip6 firewalld", "#!/usr/sbin/nft -f Flush the rule set flush ruleset table inet example_table { chain example_chain { # Chain for incoming packets that drops all packets that # are not explicitly allowed by any rule in this chain type filter hook input priority 0; policy drop; # Accept connections to port 22 (ssh) tcp dport ssh accept } }", "#!/usr/sbin/nft -f Flush the rule set flush ruleset Create a table add table inet example_table Create a chain for incoming packets that drops all packets that are not explicitly allowed by any rule in this chain add chain inet example_table example_chain { type filter hook input priority 0 ; policy drop ; } Add a rule that accepts connections to port 22 (ssh) add rule inet example_table example_chain tcp dport ssh accept", "nft -f /etc/nftables/<example_firewall_script>.nft", "#!/usr/sbin/nft -f", "chown root /etc/nftables/<example_firewall_script>.nft", "chmod u+x /etc/nftables/<example_firewall_script>.nft", "/etc/nftables/<example_firewall_script>.nft", "Flush the rule set flush ruleset add table inet example_table # Create a table", "define INET_DEV = enp1s0", "add rule inet example_table example_chain iifname USDINET_DEV tcp dport ssh accept", "define DNS_SERVERS = { 192.0.2.1 , 192.0.2.2 }", "add rule inet example_table example_chain ip daddr USDDNS_SERVERS accept", "include \"example.nft\"", "include \"/etc/nftables/rulesets/*.nft\"", "include \"/etc/nftables/_example_.nft\"", "systemctl start nftables", "systemctl enable nftables", "nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" masquerade", "nft add table nat", "nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat postrouting oifname \" ens3 \" snat to 192.0.2.1", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule nat prerouting iifname ens3 tcp dport { 80, 443 } dnat to 192.0.2.1", "nft add rule nat postrouting oifname \"ens3\" masquerade", "nft add rule nat postrouting oifname \"ens3\" snat to 198.51.100.1", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table nat", "nft -- add chain nat prerouting { type nat hook prerouting priority -100 \\; }", 
"nft add rule nat prerouting tcp dport 22 redirect to 2222", "nft add table inet <example-table>", "nft add flowtable inet <example-table> <example-flowtable> { hook ingress priority filter \\; devices = { enp1s0, enp7s0 } \\; }", "nft add chain inet <example-table> <example-forwardchain> { type filter hook forward priority filter \\; }", "nft add rule inet <example-table> <example-forwardchain> ct state established flow add @ <example-flowtable>", "nft list table inet <example-table> table inet example-table { flowtable example-flowtable { hook ingress priority filter devices = { enp1s0, enp7s0 } } chain example-forwardchain { type filter hook forward priority filter; policy accept; ct state established flow add @example-flowtable } }", "nft add rule inet example_table example_chain tcp dport { 22, 80, 443 } accept", "nft list table inet example_table table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport { ssh, http, https } accept } }", "nft add set inet example_table example_set { type ipv4_addr \\; }", "nft add set inet example_table example_set { type ipv4_addr \\; flags interval \\; }", "nft add rule inet example_table example_chain ip saddr @ example_set drop", "nft add element inet example_table example_set { 192.0.2.1, 192.0.2.2 }", "nft add element inet example_table example_set { 192.0.2.0-192.0.2.255 }", "nft add table inet example_table", "nft add chain inet example_table tcp_packets", "nft add rule inet example_table tcp_packets counter", "nft add chain inet example_table udp_packets", "nft add rule inet example_table udp_packets counter", "nft add chain inet example_table incoming_traffic { type filter hook input priority 0 \\; }", "nft add rule inet example_table incoming_traffic ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets }", "nft list table inet example_table table inet example_table { chain tcp_packets { counter packets 36379 bytes 2103816 } chain udp_packets { counter packets 10 bytes 1559 } chain incoming_traffic { type filter hook input priority filter; policy accept; ip protocol vmap { tcp : jump tcp_packets, udp : jump udp_packets } } }", "nft add table ip example_table", "nft add chain ip example_table example_chain { type filter hook input priority 0 \\; }", "nft add map ip example_table example_map { type ipv4_addr : verdict \\; }", "nft add rule example_table example_chain ip saddr vmap @ example_map", "nft add element ip example_table example_map { 192.0.2.1 : accept, 192.0.2.2 : drop }", "nft add element ip example_table example_map { 192.0.2.3 : accept }", "nft delete element ip example_table example_map { 192.0.2.1 }", "nft list ruleset table ip example_table { map example_map { type ipv4_addr : verdict elements = { 192.0.2.2 : drop, 192.0.2.3 : accept } } chain example_chain { type filter hook input priority filter; policy accept; ip saddr vmap @example_map } }", ":msg, startswith, \"nft drop\" -/var/log/nftables.log & stop", "systemctl restart rsyslog", "/var/log/nftables.log { size +10M maxage 30 sharedscripts postrotate /usr/bin/systemctl kill -s HUP rsyslog.service >/dev/null 2>&1 || true endscript }", "Remove all rules flush ruleset Table for both IPv4 and IPv6 rules table inet nftables_svc { # Define variables for the interface name define INET_DEV = enp1s0 define LAN_DEV = enp7s0 define DMZ_DEV = enp8s0 # Set with the IPv4 addresses of admin PCs set admin_pc_ipv4 { type ipv4_addr elements = { 10.0.0.100, 10.0.0.200 } } # Chain for incoming trafic. 
Default policy: drop chain INPUT { type filter hook input priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept incoming traffic on loopback interface iifname lo accept # Allow request from LAN and DMZ to local DNS server iifname { USDLAN_DEV, USDDMZ_DEV } meta l4proto { tcp, udp } th dport 53 accept # Allow admins PCs to access the router using SSH iifname USDLAN_DEV ip saddr @admin_pc_ipv4 tcp dport 22 accept # Last action: Log blocked packets # (packets that were not accepted in previous rules in this chain) log prefix \"nft drop IN : \" } # Chain for outgoing traffic. Default policy: drop chain OUTPUT { type filter hook output priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # Accept outgoing traffic on loopback interface oifname lo accept # Allow local DNS server to recursively resolve queries oifname USDINET_DEV meta l4proto { tcp, udp } th dport 53 accept # Last action: Log blocked packets log prefix \"nft drop OUT: \" } # Chain for forwarding traffic. Default policy: drop chain FORWARD { type filter hook forward priority filter policy drop # Accept packets in established and related state, drop invalid packets ct state vmap { established:accept, related:accept, invalid:drop } # IPv4 access from LAN and internet to the HTTPS server in the DMZ iifname { USDLAN_DEV, USDINET_DEV } oifname USDDMZ_DEV ip daddr 198.51.100.5 tcp dport 443 accept # IPv6 access from internet to the HTTPS server in the DMZ iifname USDINET_DEV oifname USDDMZ_DEV ip6 daddr 2001:db8:b::5 tcp dport 443 accept # Access from LAN and DMZ to HTTPS servers on the internet iifname { USDLAN_DEV, USDDMZ_DEV } oifname USDINET_DEV tcp dport 443 accept # Last action: Log blocked packets log prefix \"nft drop FWD: \" } # Postrouting chain to handle SNAT chain postrouting { type nat hook postrouting priority srcnat; policy accept; # SNAT for IPv4 traffic from LAN to internet iifname USDLAN_DEV oifname USDINET_DEV snat ip to 203.0.113.1 } }", "include \"/etc/nftables/firewall.nft\"", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "systemctl enable --now nftables", "nft list ruleset", "ssh router.example.com ssh: connect to host router.example.com port 22 : Network is unreachable", "journalctl -k -g \"nft drop\" Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... SYN", "Oct 14 17:27:18 router kernel: nft drop IN : IN=enp8s0 OUT= MAC=... SRC=198.51.100.5 DST=198.51.100.1 ... PROTO=TCP SPT=40464 DPT=22 ... 
SYN", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; }", "nft add rule ip nat prerouting tcp dport 8022 redirect to :22", "nft add table ip nat", "nft -- add chain ip nat prerouting { type nat hook prerouting priority -100 \\; } nft add chain ip nat postrouting { type nat hook postrouting priority 100 \\; }", "nft add rule ip nat prerouting tcp dport 443 dnat to 192.0.2.1", "nft add rule ip nat postrouting daddr 192.0.2.1 masquerade", "echo \"net.ipv4.ip_forward=1\" > /etc/sysctl.d/95-IPv4-forwarding.conf sysctl -p /etc/sysctl.d/95-IPv4-forwarding.conf", "nft add table inet filter", "nft add chain inet filter input { type filter hook input priority 0 \\; }", "nft add set inet filter limit-ssh { type ipv4_addr\\; flags dynamic \\;}", "nft add rule inet filter input tcp dport ssh ct state new add @limit-ssh { ip saddr ct count over 2 } counter reject", "nft list set inet filter limit-ssh table inet filter { set limit-ssh { type ipv4_addr size 65535 flags dynamic elements = { 192.0.2.1 ct count over 2 , 192.0.2.2 ct count over 2 } } }", "nft add table ip filter", "nft add chain ip filter input { type filter hook input priority 0 \\; }", "nft add set ip filter denylist { type ipv4_addr \\; flags dynamic, timeout \\; timeout 5m \\; }", "nft add rule ip filter input ip protocol tcp ct state new, untracked add @denylist { ip saddr limit rate over 10/minute } drop", "nft add rule inet example_table example_chain tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 counter accept", "nft list ruleset table inet example_table { chain example_chain { type filter hook input priority filter; policy accept; tcp dport ssh counter packets 6872 bytes 105448565 accept } }", "nft --handle list chain inet example_table example_chain table inet example_table { chain example_chain { # handle 1 type filter hook input priority filter; policy accept; tcp dport ssh accept # handle 4 } }", "nft replace rule inet example_table example_chain handle 4 tcp dport 22 meta nftrace set 1 accept", "nft monitor | grep \"inet example_table example_chain\" trace id 3c5eb15e inet example_table example_chain packet: iif \"enp1s0\" ether saddr 52:54:00:17:ff:e4 ether daddr 52:54:00:72:2f:6e ip saddr 192.0.2.1 ip daddr 192.0.2.2 ip dscp cs0 ip ecn not-ect ip ttl 64 ip id 49710 ip protocol tcp ip length 60 tcp sport 56728 tcp dport ssh tcp flags == syn tcp window 64240 trace id 3c5eb15e inet example_table example_chain rule tcp dport ssh nftrace set 1 accept (verdict accept)", "nft list ruleset > file .nft", "nft -j list ruleset > file .json", "nft -f file .nft", "nft -j -f file .json", "xdp-filter load enp1s0", "xdp-filter port 22", "xdp-filter ip 192.0.2.1 -m src", "xdp-filter ether 00:53:00:AA:07:BE -m src", "xdp-filter status", "xdp-filter load enp1s0 -p deny", "xdp-filter port 22", "xdp-filter ip 192.0.2.1", "xdp-filter ether 00:53:00:AA:07:BE", "xdp-filter status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_firewalls_and_packet_filters/index
Chapter 101. FHIR Component
Chapter 101. FHIR Component Available as of Camel version 2.23 The FHIR component integrates with the HAPI-FHIR library, which is an open-source implementation of the FHIR (Fast Healthcare Interoperability Resources) specification in Java. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-fhir</artifactId> <version>USD{camel-version}</version> </dependency> 101.1. URI Format The FHIR Component uses the following URI format: fhir://endpoint-prefix/endpoint?[options] Endpoint prefix can be one of: capabilities create delete history load-page meta operation patch read search transaction update validate The FHIR component supports 2 options, which are listed below. Name Description Default Type configuration (common) To use the shared configuration FhirConfiguration resolvePropertyPlaceholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The FHIR endpoint is configured using URI syntax: with the following path and query parameters: 101.1.1. Path Parameters (2 parameters): Name Description Default Type apiName Required What kind of operation to perform FhirApiName methodName Required What sub operation to use for the selected operation String 101.1.2. Query Parameters (26 parameters): Name Description Default Type encoding (common) Encoding to use for all requests String fhirVersion (common) The FHIR Version to use DSTU3 String inBody (common) Sets the name of a parameter to be passed in the exchange In Body String log (common) Will log every request and response false boolean prettyPrint (common) Pretty print all requests false boolean serverUrl (common) The FHIR server base URL String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern compress (advanced) Compresses outgoing (POST/PUT) contents to the GZIP format false boolean connectionTimeout (advanced) How long to try and establish the initial TCP connection (in ms) 10000 Integer deferModelScanning (advanced) When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false boolean fhirContext (advanced) FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly.
FhirContext forceConformanceCheck (advanced) Force conformance check false boolean sessionCookie (advanced) HTTP session cookie to add to every request String socketTimeout (advanced) How long to block for individual read/write operations (in ms) 10000 Integer summary (advanced) Request that the server modify the response using the _summary param String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean validationMode (advanced) When should Camel validate the FHIR Server's conformance statement ONCE String proxyHost (proxy) The proxy host String proxyPassword (proxy) The proxy password String proxyPort (proxy) The proxy port Integer proxyUser (proxy) The proxy username String accessToken (security) OAuth access token String password (security) Password to use for basic authentication String username (security) Username to use for basic authentication String 101.2. Spring Boot Auto-Configuration The component supports 23 options, which are listed below. Name Description Default Type camel.component.fhir.configuration.access-token OAuth access token String camel.component.fhir.configuration.api-name What kind of operation to perform FhirApiName camel.component.fhir.configuration.client To use the custom client IGenericClient camel.component.fhir.configuration.client-factory To use the custom client factory IRestfulClientFactory camel.component.fhir.configuration.compress Compresses outgoing (POST/PUT) contents to the GZIP format false Boolean camel.component.fhir.configuration.connection-timeout How long to try and establish the initial TCP connection (in ms) 10000 Integer camel.component.fhir.configuration.defer-model-scanning When this option is set, model classes will not be scanned for children until the child list for the given type is actually accessed. false Boolean camel.component.fhir.configuration.fhir-context FhirContext is an expensive object to create. To avoid creating multiple instances, it can be set directly. FhirContext camel.component.fhir.configuration.force-conformance-check Force conformance check false Boolean camel.component.fhir.configuration.log Will log every request and response false Boolean camel.component.fhir.configuration.method-name What sub operation to use for the selected operation String camel.component.fhir.configuration.password Password to use for basic authentication String camel.component.fhir.configuration.pretty-print Pretty print all requests false Boolean camel.component.fhir.configuration.proxy-host The proxy host String camel.component.fhir.configuration.proxy-password The proxy password String camel.component.fhir.configuration.proxy-port The proxy port Integer camel.component.fhir.configuration.proxy-user The proxy username String camel.component.fhir.configuration.server-url The FHIR server base URL String camel.component.fhir.configuration.session-cookie HTTP session cookie to add to every request String camel.component.fhir.configuration.socket-timeout How long to block for individual read/write operations (in ms) 10000 Integer camel.component.fhir.configuration.username Username to use for basic authentication String camel.component.fhir.enabled Whether to enable auto configuration of the fhir component. This is enabled by default. Boolean camel.component.fhir.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting.
Only properties which are of String type can use property placeholders. true Boolean
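The path and query parameters above combine into a single endpoint URI of the form shown in the URI Format section. The following Java DSL route is a minimal, hypothetical sketch of a producer that creates a resource on a FHIR server; the resource sub-operation name, the resourceAsString value for inBody, and the serverUrl used here are assumptions to verify against the generated endpoint documentation for your Camel version.

import org.apache.camel.builder.RouteBuilder;

public class FhirCreateRoute extends RouteBuilder {
    @Override
    public void configure() {
        // The message body is assumed to be a FHIR resource serialized as a String.
        // "create" is the endpoint prefix (apiName) and "resource" the assumed sub-operation (methodName);
        // serverUrl, fhirVersion, inBody, and log are options from the tables above.
        from("direct:createPatient")
            .to("fhir://create/resource"
                + "?inBody=resourceAsString"
                + "&serverUrl=http://localhost:8080/fhir"   // assumed test server URL
                + "&fhirVersion=DSTU3"
                + "&log=true");
    }
}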
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-fhir</artifactId> <version>USD{camel-version}</version> </dependency>", "fhir://endpoint-prefix/endpoint?[options]", "fhir:apiName/methodName" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/fhir-component
Chapter 3. Getting support
Chapter 3. Getting support Windows Container Support for Red Hat OpenShift is provided and available as an optional, installable component. Windows Container Support for Red Hat OpenShift is not part of the OpenShift Container Platform subscription. It requires an additional Red Hat subscription and is supported according to the Scope of coverage and Service level agreements. You must have this separate subscription to receive support for Windows Container Support for Red Hat OpenShift. Without this additional Red Hat subscription, deploying Windows container workloads in production clusters is not supported. You can request support through the Red Hat Customer Portal. For more information, see the Red Hat OpenShift Container Platform Life Cycle Policy document for Red Hat OpenShift support for Windows Containers. If you do not have this additional Red Hat subscription, you can use the Community Windows Machine Config Operator, a distribution that lacks official support.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/windows_container_support_for_openshift/windows-containers-support
Chapter 63. Apache CXF Provided Interceptors
Chapter 63. Apache CXF Provided Interceptors 63.1. Core Apache CXF Interceptors Inbound Table 63.1, "Core inbound interceptors" lists the core inbound interceptors that are added to all Apache CXF endpoints. Table 63.1. Core inbound interceptors Class Phase Description ServiceInvokerInterceptor INVOKE Invokes the proper method on the service. Outbound Apache CXF does not add any core interceptors to the outbound interceptor chain by default. The contents of an endpoint's outbound interceptor chain depend on the features in use. 63.2. Front-Ends JAX-WS Table 63.2, "Inbound JAX-WS interceptors" lists the interceptors added to a JAX-WS endpoint's inbound message chain. Table 63.2. Inbound JAX-WS interceptors Class Phase Description HolderInInterceptor PRE_INVOKE Creates holder objects for any out or in/out parameters in the message. WrapperClassInInterceptor POST_LOGICAL Unwraps the parts of a wrapped doc/literal message into the appropriate array of objects. LogicalHandlerInInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS logical handlers used by the endpoint. When the JAX-WS handlers complete, the message is passed along to the next interceptor on the inbound chain. SOAPHandlerInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS SOAP handlers used by the endpoint. When the SOAP handlers finish with the message, the message is passed along to the next interceptor in the chain. Table 63.3, "Outbound JAX-WS interceptors" lists the interceptors added to a JAX-WS endpoint's outbound message chain. Table 63.3. Outbound JAX-WS interceptors Class Phase Description HolderOutInterceptor PRE_LOGICAL Removes the values of any out and in/out parameters from their holder objects and adds the values to the message's parameter list. WebFaultOutInterceptor PRE_PROTOCOL Processes outbound fault messages. WrapperClassOutInterceptor PRE_LOGICAL Makes sure that wrapped doc/literal messages and rpc/literal messages are properly wrapped before being added to the message. LogicalHandlerOutInterceptor PRE_MARSHAL Passes message processing to the JAX-WS logical handlers used by the endpoint. When the JAX-WS handlers complete, the message is passed along to the next interceptor on the outbound chain. SOAPHandlerInterceptor PRE_PROTOCOL Passes message processing to the JAX-WS SOAP handlers used by the endpoint. When the SOAP handlers finish processing the message, it is passed along to the next interceptor in the chain. MessageSenderInterceptor PREPARE_SEND Calls back to the Destination object to have it set up the output streams, headers, etc. to prepare the outgoing transport. JAX-RS Table 63.4, "Inbound JAX-RS interceptors" lists the interceptors added to a JAX-RS endpoint's inbound message chain. Table 63.4. Inbound JAX-RS interceptors Class Phase Description JAXRSInInterceptor PRE_STREAM Selects the root resource class, invokes any configured JAX-RS request filters, and determines the method to invoke on the root resource. Important The inbound chain for a JAX-RS endpoint skips straight to the ServiceInvokerInterceptor interceptor. No other interceptors will be invoked after the JAXRSInInterceptor. Table 63.5, "Outbound JAX-RS interceptors" lists the interceptors added to a JAX-RS endpoint's outbound message chain. Table 63.5. Outbound JAX-RS interceptors Class Phase Description JAXRSOutInterceptor MARSHAL Marshals the response into the proper format for transmission. 63.3.
Message bindings SOAP Table 63.6, "Inbound SOAP interceptors" lists the interceptors added to an endpoint's inbound message chain when using the SOAP Binding. Table 63.6. Inbound SOAP interceptors Class Phase Description CheckFaultInterceptor POST_PROTOCOL Checks if the message is a fault message. If the message is a fault message, normal processing is aborted and fault processing is started. MustUnderstandInterceptor PRE_PROTOCOL Processes the mustUnderstand headers. RPCInInterceptor UNMARSHAL Unmarshals rpc/literal messages. If the message is bare, the message is passed to a BareInInterceptor object to deserialize the message parts. ReadsHeadersInterceptor READ Parses the SOAP headers and stores them in the message object. SoapActionInInterceptor READ Parses the SOAP action header and attempts to find a unique operation for the action. SoapHeaderInterceptor UNMARSHAL Binds the SOAP headers that map to operation parameters to the appropriate objects. AttachmentInInterceptor RECEIVE Parses the MIME headers for MIME boundaries, finds the root part and resets the input stream to it, and stores the other parts in a collection of Attachment objects. DocLiteralInInterceptor UNMARSHAL Examines the first element in the SOAP body to determine the appropriate operation and calls the data binding to read in the data. StaxInInterceptor POST_STREAM Creates an XMLStreamReader object from the message. URIMappingInterceptor UNMARSHAL Handles the processing of HTTP GET methods. SwAInInterceptor PRE_INVOKE Creates the required MIME handlers for binary SOAP attachments and adds the data to the parameter list. Table 63.7, "Outbound SOAP interceptors" lists the interceptors added to an endpoint's outbound message chain when using the SOAP Binding. Table 63.7. Outbound SOAP interceptors Class Phase Description RPCOutInterceptor MARSHAL Marshals rpc style messages for transmission. SoapHeaderOutFilterInterceptor PRE_LOGICAL Removes all SOAP headers that are marked as inbound only. SoapPreProtocolOutInterceptor POST_LOGICAL Sets up the SOAP version and the SOAP action header. AttachmentOutInterceptor PRE_STREAM Sets up the attachment marshalers and the MIME infrastructure required to process any attachments that might be in the message. BareOutInterceptor MARSHAL Writes the message parts. StaxOutInterceptor PRE_STREAM Creates an XMLStreamWriter object from the message. WrappedOutInterceptor MARSHAL Wraps the outbound message parameters. SoapOutInterceptor WRITE Writes the soap:envelope element and the elements for the header blocks in the message. Also writes an empty soap:body element for the remaining interceptors to populate. SwAOutInterceptor PRE_LOGICAL Removes any binary data that will be packaged as a SOAP attachment and stores it for later processing. XML Table 63.8, "Inbound XML interceptors" lists the interceptors added to an endpoint's inbound message chain when using the XML Binding. Table 63.8. Inbound XML interceptors Class Phase Description AttachmentInInterceptor RECEIVE Parses the MIME headers for MIME boundaries, finds the root part and resets the input stream to it, and then stores the other parts in a collection of Attachment objects. DocLiteralInInterceptor UNMARSHAL Examines the first element in the message body to determine the appropriate operation and then calls the data binding to read in the data. StaxInInterceptor POST_STREAM Creates an XMLStreamReader object from the message. URIMappingInterceptor UNMARSHAL Handles the processing of HTTP GET methods.
XMLMessageInInterceptor UNMARSHAL Unmarshals the XML message. Table 63.9, "Outbound XML interceptors" lists the interceptors added to an endpoint's outbound message chain when using the XML Binding. Table 63.9. Outbound XML interceptors Class Phase Description StaxOutInterceptor PRE_STREAM Creates an XMLStreamWriter object from the message. WrappedOutInterceptor MARSHAL Wraps the outbound message parameters. XMLMessageOutInterceptor MARSHAL Marshals the message for transmission. CORBA Table 63.10, "Inbound CORBA interceptors" lists the interceptors added to an endpoint's inbound message chain when using the CORBA Binding. Table 63.10. Inbound CORBA interceptors Class Phase Description CorbaStreamInInterceptor PRE_STREAM Deserializes the CORBA message. BareInInterceptor UNMARSHAL Deserializes the message parts. Table 63.11, "Outbound CORBA interceptors" lists the interceptors added to an endpoint's outbound message chain when using the CORBA Binding. Table 63.11. Outbound CORBA interceptors Class Phase Description CorbaStreamOutInterceptor PRE_STREAM Serializes the message. BareOutInterceptor MARSHAL Writes the message parts. CorbaStreamOutEndingInterceptor USER_STREAM Creates a streamable object for the message and stores it in the message context. 63.4. Other features Logging Table 63.12, "Inbound logging interceptors" lists the interceptors added to an endpoint's inbound message chain to support logging. Table 63.12. Inbound logging interceptors Class Phase Description LoggingInInterceptor RECEIVE Writes the raw message data to the logging system. Table 63.13, "Outbound logging interceptors" lists the interceptors added to an endpoint's outbound message chain to support logging. Table 63.13. Outbound logging interceptors Class Phase Description LoggingOutInterceptor PRE_STREAM Writes the outbound message to the logging system. For more information about logging, see Chapter 19, Apache CXF Logging. WS-Addressing Table 63.14, "Inbound WS-Addressing interceptors" lists the interceptors added to an endpoint's inbound message chain when using WS-Addressing. Table 63.14. Inbound WS-Addressing interceptors Class Phase Description MAPCodec PRE_PROTOCOL Decodes the message addressing properties. Table 63.15, "Outbound WS-Addressing interceptors" lists the interceptors added to an endpoint's outbound message chain when using WS-Addressing. Table 63.15. Outbound WS-Addressing interceptors Class Phase Description MAPAggregator PRE_LOGICAL Aggregates the message addressing properties for a message. MAPCodec PRE_PROTOCOL Encodes the message addressing properties. For more information about WS-Addressing, see Chapter 20, Deploying WS-Addressing. WS-RM Important WS-RM relies on WS-Addressing, so all of the WS-Addressing interceptors will also be added to the interceptor chains. Table 63.16, "Inbound WS-RM interceptors" lists the interceptors added to an endpoint's inbound message chain when using WS-RM. Table 63.16. Inbound WS-RM interceptors Class Phase Description RMInInterceptor PRE_LOGICAL Handles the aggregation of message parts and acknowledgement messages. RMSoapInterceptor PRE_PROTOCOL Encodes and decodes the WS-RM properties from messages. Table 63.17, "Outbound WS-RM interceptors" lists the interceptors added to an endpoint's outbound message chain when using WS-RM. Table 63.17. Outbound WS-RM interceptors Class Phase Description RMOutInterceptor PRE_LOGICAL Handles the chunking of messages and the transmission of the chunks. Also handles the processing of acknowledgements and resend requests.
RMSoapInterceptor PRE_PROTOCOL Encodes and decodes the WS-RM properties from messages. For more information about WS-RM see Chapter 21, Enabling Reliable Messaging .
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/CXFInterceptFeaturesAppx
Chapter 1. Workloads APIs
Chapter 1. Workloads APIs 1.1. BuildConfig [build.openshift.io/v1] Description Build configurations define a build process for new container images. There are three types of builds possible - a container image build using a Dockerfile, a Source-to-Image build that uses a specially prepared base image that accepts source code that it can make runnable, and a custom build that can run arbitrary container images as a base and accept the build parameters. Builds run on the cluster and on completion are pushed to the container image registry specified in the "output" section. A build can be triggered via a webhook, when the base image changes, or when a user manually requests a new build be created. Each build created by a build configuration is numbered and refers back to its parent configuration. Multiple builds can be triggered at once. Builds that do not have "output" set can be used to test code or run a verification build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. Build [build.openshift.io/v1] Description Build encapsulates the inputs needed to produce a new deployable image, as well as the status of the execution and a reference to the Pod which executed the build. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. BuildLog [build.openshift.io/v1] Description BuildLog is the (unused) resource associated with the build log redirector Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. BuildRequest [build.openshift.io/v1] Description BuildRequest is the resource used to pass parameters to build generator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. CronJob [batch/v1] Description CronJob represents the configuration of a single cron job. Type object 1.6. DaemonSet [apps/v1] Description DaemonSet represents the configuration of a daemon set. Type object 1.7. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 1.8. DeploymentConfig [apps.openshift.io/v1] Description Deployment Configs define the template for a pod and manage deploying new images or configuration changes. A single deployment configuration is usually analogous to a single micro-service. Can support many different deployment patterns, including full restart, customizable rolling updates, and fully custom behaviors, as well as pre- and post- deployment hooks. Each individual deployment is represented as a replication controller. A deployment is "triggered" when its configuration is changed or a tag in an Image Stream is changed. Triggers can be disabled to allow manual control over a deployment. The "strategy" determines how the deployment is carried out and may be changed at any time. The latestVersion field is updated when a new deployment is triggered by any means. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Deprecated: Use deployments or other means for declarative updates for pods instead. Type object 1.9. DeploymentConfigRollback [apps.openshift.io/v1] Description DeploymentConfigRollback provides the input to rollback generation. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. DeploymentLog [apps.openshift.io/v1] Description DeploymentLog represents the logs for a deployment Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.11. DeploymentRequest [apps.openshift.io/v1] Description DeploymentRequest is a request to a deployment config for a new deployment. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.12. Job [batch/v1] Description Job represents the configuration of a single job. Type object 1.13. Pod [v1] Description Pod is a collection of containers that can run on a host. This resource is created by clients and scheduled onto hosts. Type object 1.14. ReplicationController [v1] Description ReplicationController represents the configuration of a replication controller. Type object 1.15. ReplicaSet [apps/v1] Description ReplicaSet ensures that a specified number of pod replicas are running at any given time. Type object 1.16. StatefulSet [apps/v1] Description StatefulSet represents a set of pods with consistent identities. Identities are defined as: - Network: A single stable DNS and hostname. - Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity. Type object
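As a hedged illustration, the workload resources listed above can be inspected with the oc client once you are logged in to a cluster; the myproject namespace below is an assumption used only for the example.

oc get buildconfigs,builds -n myproject
oc get deployments,replicasets,statefulsets,daemonsets -n myproject
oc get cronjobs,jobs,pods,replicationcontrollers -n myproject
oc explain deployment.spec.strategy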
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/workloads_apis/workloads-apis
32.2. Anaconda Rescue Mode
32.2. Anaconda Rescue Mode The Anaconda installation program's rescue mode is a minimal Linux environment that can be booted from the Red Hat Enterprise Linux 7 DVD or other boot media. It contains command-line utilities for repairing a wide variety of issues. This rescue mode can be accessed from the Troubleshooting submenu of the boot menu. In this mode, you can mount file systems as read-only, or choose not to mount them at all, blacklist or add a driver provided on a driver disc, install or upgrade system packages, or manage partitions. Note Anaconda rescue mode is different from rescue mode (an equivalent to single-user mode ) and emergency mode , which are provided as parts of the systemd system and service manager. For more information about these modes, see Red Hat Enterprise Linux 7 System Administrator's Guide . To boot into Anaconda rescue mode, you must be able to boot the system using one of the Red Hat Enterprise Linux boot media, such as a minimal boot disc or USB drive, or a full installation DVD. For detailed information about booting the system using media provided by Red Hat, see the appropriate chapters: Chapter 7, Booting the Installation on 64-bit AMD, Intel, and ARM systems for 64-bit AMD, Intel, and ARM systems Chapter 12, Booting the Installation on IBM Power Systems for IBM Power Systems servers Chapter 16, Booting the Installation on IBM Z for IBM Z Important Advanced storage, such as iSCSI or zFCP devices, must be configured either using dracut boot options (such as rd.zfcp= or root=iscsi: options ), or in the CMS configuration file on IBM Z. It is not possible to configure these storage devices interactively after booting into rescue mode. For information about dracut boot options, see the dracut.cmdline(7) man page. For information about the CMS configuration file, see Chapter 21, Parameter and Configuration Files on IBM Z . Procedure 32.1. Booting into Anaconda Rescue Mode Boot the system from either minimal boot media, or a full installation DVD or USB drive, and wait for the boot menu to appear. From the boot menu, either select the Rescue a Red Hat Enterprise Linux system option from the Troubleshooting submenu, or append the inst.rescue option to the boot command line. To enter the boot command line, press the Tab key on BIOS-based systems or the e key on UEFI-based systems. If your system requires a third-party driver provided on a driver disc to boot, append the inst.dd= driver_name option to the boot command line: For more information on using a driver disc at boot time, see Section 6.3.3, "Manual Driver Update" for AMD64 and Intel 64 systems or Section 11.2.3, "Manual Driver Update" for IBM Power Systems servers. If a driver that is part of the Red Hat Enterprise Linux 7 distribution prevents the system from booting, append the modprobe.blacklist= option to the boot command line: For more information about blacklisting drivers, see Section 6.3.4, "Blacklisting a Driver" . When ready, press Enter (BIOS-based systems) or Ctrl + X (UEFI-based systems) to boot the modified option. Then wait until the following message is displayed: If you select Continue , it attempts to mount your file system under the directory /mnt/sysimage/ . If it fails to mount a partition, you will be notified. If you select Read-Only , it attempts to mount your file system under the directory /mnt/sysimage/ , but in read-only mode. If you select Skip , your file system is not mounted. Choose Skip if you think your file system is corrupted. 
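For reference, the appended boot options described in the procedure above might look like the following lines; the /dev/sdb1 driver-disc location and the nouveau module name are illustrative assumptions rather than values required by the procedure.

inst.rescue inst.dd=/dev/sdb1
inst.rescue modprobe.blacklist=nouveau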
Once you have your system in rescue mode, a prompt appears on VC (virtual console) 1 and VC 2 (use the Ctrl + Alt + F1 key combination to access VC 1 and Ctrl + Alt + F2 to access VC 2): Even if your file system is mounted, the default root partition while in Anaconda rescue mode is a temporary root partition, not the root partition of the file system used during normal user mode ( multi-user.target or graphical.target ). If you selected to mount your file system and it mounted successfully, you can change the root partition of the Anaconda rescue mode environment to the root partition of your file system by executing the following command: This is useful if you need to run commands, such as rpm , that require your root partition to be mounted as / . To exit the chroot environment, type exit to return to the prompt. If you selected Skip , you can still try to mount a partition or LVM2 logical volume manually inside Anaconda rescue mode by creating a directory, such as /directory/ , and typing the following command: In the above command, /directory/ is a directory that you have created and /dev/mapper/VolGroup00-LogVol02 is the LVM2 logical volume you want to mount. If the partition is of a different type than XFS, replace the xfs string with the correct type (such as ext4 ). If you do not know the names of all physical partitions, use the following command to list them: If you do not know the names of all LVM2 physical volumes, volume groups, or logical volumes, use the pvdisplay , vgdisplay or lvdisplay commands, respectively. From the prompt, you can run many useful commands, such as: ssh , scp , and ping if the network is started For details, see the Red Hat Enterprise Linux 7 System Administrator's Guide . dump and restore for users with tape drives For details, see the RHEL Backup and Restore Assistant . parted and fdisk for managing partitions For details, see the Red Hat Enterprise Linux 7 Storage Administration Guide . yum for installing or upgrading software For details, see the Red Hat Enterprise Linux 7 System Administrator's Guide 32.2.1. Capturing an sosreport The sosreport command-line utility collects configuration and diagnostic information, such as the running kernel version, loaded modules, and system and service configuration files, from the system. The utility output is stored in a tar archive in the /var/tmp/ directory. The sosreport utility is useful for analyzing system errors and can make troubleshooting easier. The following procedure describes how to capture an sosreport output in Anaconda rescue mode: Procedure 32.2. Using sosreport in Anaconda Rescue Mode Follow the steps in Procedure 32.1, "Booting into Anaconda Rescue Mode" to boot into Anaconda rescue mode. Ensure that you mount the installed system / (root) partition in read-write mode. Change the root directory to the /mnt/sysimage/ directory: Execute sosreport to generate an archive with system configuration and diagnostic information: Important When running, sosreport prompts you to enter your name and the case number that you receive when you contact Red Hat Support and open a new support case. Use only letters and numbers because adding any of the following characters or spaces could render the report unusable: Optional . If you want to transfer the generated archive to a new location using the network, you must have a network interface configured. If you use dynamic IP addressing, no other steps are required. 
However, when using static addressing, enter the following command to assign an IP address (for example 10.13.153.64/23 ) to a network interface (for example dev eth0 ): See the Red Hat Enterprise Linux 7 Networking Guide for additional information about static addressing. Exit the chroot environment: Store the generated archive in a new location from where it can be easily accessed: For transferring the archive through the network, use the scp utility: See the references below for further information: For general information about sosreport , see What is a sosreport and how to create one in Red Hat Enterprise Linux 4.6 and later? . For information about using sosreport within Anaconda rescue mode, see How to generate sosreport from the rescue environment . For information about generating an sosreport to a different location than /tmp , see How do I make sosreport write to an alternative location? . For information about collecting an sosreport manually, see Sosreport fails. What data should I provide in its place? . 32.2.2. Reinstalling the Boot Loader In some cases, the GRUB2 boot loader can mistakenly be deleted, corrupted, or replaced by other operating systems. The following steps detail how GRUB2 is reinstalled on the master boot record: Procedure 32.3. Reinstalling the GRUB2 Boot Loader Follow the instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" to boot into Anaconda rescue mode. Ensure that you mount the installed system's / (root) partition in read-write mode. Change the root partition: Use the following command to reinstall the GRUB2 boot loader, where install_device is the boot device (typically, /dev/sda): Reboot the system. 32.2.3. Using RPM to Add, Remove, or Replace a Driver Missing or malfunctioning drivers can cause problems when booting the system. Anaconda rescue mode provides an environment in which you can add, remove, or replace a driver even when the system fails to boot. Wherever possible, we recommend that you use the RPM package manager to remove malfunctioning drivers or to add updated or missing drivers. Note When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. Procedure 32.4. Using RPM to Remove a Driver Boot the system into Anaconda rescue mode. Follow the instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" . Ensure that you mount the installed system in read-write mode. Change the root directory to /mnt/sysimage/ : Use the rpm -e command to remove the driver package. For example, to remove the xorg-x11-drv-wacom driver package, run: Exit the chroot environment: If you cannot remove a malfunctioning driver for some reason, you can instead blacklist the driver so that it does not load at boot time. See Section 6.3.4, "Blacklisting a Driver" and Chapter 23, Boot Options for more information about blacklisting drivers. Installing a driver is a similar process, but the RPM package must be available on the system: Procedure 32.5. Installing a Driver from an RPM package Boot the system into Anaconda rescue mode. Follow the instructions in Procedure 32.1, "Booting into Anaconda Rescue Mode" . Do not choose to mount the installed system as read only. Make the RPM package that contains the driver available. 
For example, mount a CD or USB flash drive and copy the RPM package to a location of your choice under /mnt/sysimage/ , for example: /mnt/sysimage/root/drivers/ Change the root directory to /mnt/sysimage/ : Use the rpm -ivh command to install the driver package. For example, to install the xorg-x11-drv-wacom driver package from /root/drivers/ , run: Note The /root/drivers/ directory in this chroot environment is the /mnt/sysimage/root/drivers/ directory in the original rescue environment. Exit the chroot environment: When you have finished removing and installing drivers, reboot the system.
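Putting several of the steps above together, a rescue-mode session might look like the following sketch; the logical volume name follows the VolGroup00-LogVol02 example in the text, while the IP address, destination host, archive name, and /dev/sda boot device are illustrative assumptions rather than values required by the procedure.

sh-4.2# fdisk -l
sh-4.2# lvdisplay
sh-4.2# mkdir /directory
sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory
sh-4.2# chroot /mnt/sysimage
sh-4.2# sosreport
sh-4.2# exit
sh-4.2# ip addr add 10.13.153.64/23 dev eth0
sh-4.2# scp /mnt/sysimage/var/tmp/sosreport-example.tar.xz user@host.example.com:/home/user/
sh-4.2# chroot /mnt/sysimage/
sh-4.2# /sbin/grub2-install /dev/sda
sh-4.2# rpm -e xorg-x11-drv-wacom
sh-4.2# exit
sh-4.2# reboot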
[ "inst.rescue inst.dd= driver_name", "inst.rescue modprobe.blacklist= driver_name", "The rescue environment will now attempt to find your Linux installation and mount it under the /mnt/sysimage/ directory. You can then make any changes required to your system. If you want to proceed with this step choose 'Continue'. You can also choose to mount your file systems read-only instead of read-write by choosing 'Read-only'. If for some reason this process fails you can choose 'Skip' and this step will be skipped and you will go directly to a command line.", "sh-4.2#", "sh-4.2# chroot /mnt/sysimage", "sh-4.2# mount -t xfs /dev/mapper/VolGroup00-LogVol02 /directory", "sh-4.2# fdisk -l", "sh-4.2# chroot /mnt/sysimage/", "sh-4.2# sosreport", "% & { } \\ < > > * ? / USD ~ ' \" : @ + ` | =", "bash-4.2# ip addr add 10.13.153.64/23 dev eth0", "sh-4.2# exit", "sh-4.2# cp /mnt/sysimage/var/tmp/ sosreport new_location", "sh-4.2# scp /mnt/sysimage/var/tmp/ sosreport username@hostname:sosreport", "sh-4.2# chroot /mnt/sysimage/", "sh-4.2# /sbin/grub2-install install_device", "sh-4.2# chroot /mnt/sysimage/", "sh-4.2# rpm -e xorg-x11-drv-wacom", "sh-4.2# exit", "sh-4.2# chroot /mnt/sysimage/", "sh-4.2# rpm -\\u00adivh /root/drivers/xorg-x11-drv-wacom-0.23.0-6.el7.x86_64.rpm", "sh-4.2# exit" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-rescue-mode
Appendix C. Understanding the archive_config_inventory.yml file
Appendix C. Understanding the archive_config_inventory.yml file The archive_config_inventory.yml file is an example Ansible inventory file that you can use to back up and restore the configuration of a Red Hat Hyperconverged Infrastructure for Virtualization cluster. You can find this file at /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/archive_config_inventory.yml on any hyperconverged host. There are two playbooks: archive_config.yml and backup.yml . The archive_config.yml playbook is a wrapper that in turn imports tasks/backup.yml . C.1. Configuration parameters for backup and restore in archive_config_inventory.yml hosts The backend FQDN of each host in the cluster that you want to back up. backup_dir The directory in which to store backup files. nbde_setup Upgrade does not support setting up NBDE; set to false. upgrade Set to true. For example: C.2. Creating the archive_config.yml playbook file Create the archive_config.yml playbook file only if it is not available at the location /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment Add the following content to the archive_config.yml file: C.3. Creating the tasks/backup.yml playbook file Create the tasks/backup.yml playbook file only if it is not available at the location /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment Add the following content to the backup.yml file:
[ "all: hosts: host1: host2: host3: vars: backup_dir: /archive nbde_setup: false upgrade: true", "--- - import_playbook: tasks/backup.yml tags: backupfiles", "--- - hosts: all tasks: - name: Check if backup dir is already available stat: path: \"{{ backup_dir }}\" register: result - fail: msg: Backup directory \"{{backup_dir}}\" exists, remove it and retry when: result.stat.isdir is defined - name: Create temporary backup directory file: path: \"{{ backup_dir }}\" state: directory - name: Get the hostname shell: uname -n register: hostname - name: Add hostname details to archive shell: echo {{ hostname.stdout }} > {{ backup_dir }}/hostname - name: Dump the IP configuration details shell: ip addr show > {{ backup_dir }}/ipconfig - name: Dump the IPv4 routing information shell: ip route > {{ backup_dir }}/ip4route - name: Dump the IPv6 routing information shell: ip -6 route > {{ backup_dir }}/ip6route - name: Get the disk layout information shell: lsblk > {{ backup_dir }}/lsblk - name: Get the mount information for reference shell: df -Th > {{ backup_dir }}/mount - name: Check for VDO configuration stat: path: /etc/vdoconf.yml register: vdoconfstat - name: Copy VDO configuration, if available shell: cp -a /etc/vdoconf.yml \"{{backup_dir}}\" when: vdoconfstat.stat.isreg is defined - name: Backup fstab shell: cp -a /etc/fstab \"{{backup_dir}}\" - name: Backup glusterd config directory shell: cp -a /var/lib/glusterd \"{{backup_dir}}\" - name: Backup /etc/crypttab, if NBDE is enabled shell: cp -a /etc/crypttab \"{{ backup_dir }}\" when: nbde_setup is defined and nbde_setup - name: Backup keyfiles used for LUKS decryption shell: cp -a /etc/sd*keyfile \"{{ backup_dir }}\" when: nbde_setup is defined and nbde_setup - name: Check for the inventory file generated from cockpit stat: path: /etc/ansible/hc_wizard_inventory.yml register: inventory - name: Copy the host inventory file generated from cockpit shell: cp /etc/ansible/hc_wizard_inventory.yml {{ backup_dir }} when: inventory.stat.isreg is defined - name: Create a tar.gz with all the contents archive: path: \"{{ backup_dir }}/*\" dest: /root/rhvh-node-{{ hostname.stdout }}-backup.tar.gz" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/upgrading_red_hat_hyperconverged_infrastructure_for_virtualization/understanding-archive-config-inventory-yml
Chapter 9. Authentication and Interoperability
Chapter 9. Authentication and Interoperability Identity Management Red Hat Enterprise Linux 6.2 includes identity management capabilities that allow for central management of user identities, policy-based access control and authentication services. This identity management service, previously referred to as IPA, is based on the open source FreeIPA project. These services have been present as a Technology Preview in earlier Red Hat Enterprise Linux 6 releases. With this release, identity management has been promoted to fully supported. Note The Identity Management Guide provides detailed information about the Identity Management solution, the technologies with which it works, and some of the terminology used to describe it. It also provides high-level design information for both the client and server components. PIV support for smart cards Support for smart cards with a PIV (Personal Identity Verification) interface has been added in Red Hat Enterprise Linux 6.2. It is now possible to use FIPS 201 compliant PIV cards that allow for secure use of data. PIV cards enable confidentiality of data by restricting access to the card holder. They also ensure data integrity by allowing only the card holder to make modifications. They guarantee the authenticity of the information and prevent non-repudiation of data. The use of PIV cards is mandated by the U.S. Homeland Security Presidential Directive 12 (HSPD-12) which requires the use of this type of technology to gain access to all government IT systems.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_release_notes/interoperability
Chapter 20. ipaddr6
Chapter 20. ipaddr6 The IPv6 address of the source server, if available. Can be an array. Data type ip
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/ipaddr6
Chapter 92. Openapi Java
Chapter 92. Openapi Java The Rest DSL can be integrated with the camel-openapi-java module which is used for exposing the REST services and their APIs using OpenApi . The camel-openapi-java module can be used from the REST components (without the need for servlet). 92.1. Dependencies When using openapi-java with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency> 92.2. Using OpenApi in rest-dsl You can enable the OpenApi api from the rest-dsl by configuring the apiContextPath dsl as shown below: public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component("netty-http").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty("prettyPrint", "true") // setup context path and port number that netty will use .contextPath("/").port(8080) // add OpenApi api-doc out of the box .apiContextPath("/api-doc") .apiProperty("api.title", "User API").apiProperty("api.version", "1.2.3") // and enable CORS .apiProperty("cors", "true"); // this user REST service is json only rest("/user").description("User rest service") .consumes("application/json").produces("application/json") .get("/{id}").description("Find user by id").outType(User.class) .param().name("id").type(path).description("The id of the user to get").dataType("int").endParam() .to("bean:userService?method=getUser(USD{header.id})") .put().description("Updates or create a user").type(User.class) .param().name("body").type(body).description("The user to update or create").endParam() .to("bean:userService?method=updateUser") .get("/findAll").description("Find all users").outType(User[].class) .to("bean:userService?method=listUsers"); } } 92.3. Options The OpenApi module can be configured using the following options. To configure using a servlet you use the init-param as shown above. When configuring directly in the rest-dsl, you use the appropriate DSL method, such as enableCORS , host , or contextPath . The options prefixed with api.xxx are configured using the apiProperty dsl. Option Type Description cors Boolean Whether to enable CORS. Notice this only enables CORS for the api browser, and not the actual access to the REST services. Is default false. openapi.version String OpenApi spec version. Is default 3.0. host String To setup the hostname. If not configured, camel-openapi-java will calculate the name based on localhost . schemes String The protocol schemes to use. Multiple values can be separated by comma such as "http,https". The default value is "http". base.path String Required : To setup the base path where the REST services are available. The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/base.path api.path String To setup the path where the API is available (eg /api-docs). The path is relative (eg do not start with http/https) and camel-openapi-java will calculate the absolute base path at runtime, which will be protocol://host:port/context-path/api.path So using relative paths is much easier. See above for an example. api.version String The version of the api. Is default 0.0.0. api.title String The title of the application. 
api.description String A short description of the application. api.termsOfService String A URL to the Terms of Service of the API. api.contact.name String Name of person or organization to contact api.contact.email String An email to be used for API-related correspondence. api.contact.url String A URL to a website for more contact information. api.license.name String The license name used for the API. api.license.url String A URL to the license used for the API. 92.4. Adding Security Definitions in API doc The Rest DSL now supports declaring OpenApi securityDefinitions in the generated API document. For example as shown below: rest("/user").tag("dude").description("User rest service") // setup security definitions .securityDefinitions() .oauth2("petstore_auth").authorizationUrl("http://petstore.swagger.io/oauth/dialog").end() .apiKey("api_key").withHeader("myHeader").end() .end() .consumes("application/json").produces("application/json") Here we have set up two security definitions OAuth2 - with implicit authorization with the provided url Api Key - using an api key that comes from an HTTP header named myHeader Then you need to specify which security to use on the rest operations by referring to their key (petstore_auth or api_key). .get("/{id}/{date}").description("Find user by id and date").outType(User.class) .security("api_key") ... .put().description("Updates or create a user").type(User.class) .security("petstore_auth", "write:pets,read:pets") Here the get operation is using the Api Key security and the put operation is using OAuth security with permitted scopes of read and write pets. 92.5. JSon or Yaml The camel-openapi-java module supports both JSon and Yaml out of the box. You can specify in the request url what you want returned by using /openapi.json or /openapi.yaml for either one. If none is specified then the HTTP Accept header is used to detect if json or yaml can be accepted. If both are accepted, or neither is set as accepted, then json is returned as the default format. 92.6. useXForwardHeaders and API URL resolution The OpenApi specification allows you to specify the host, port & path that is serving the API. In OpenApi V2 this is done via the host field and in OpenAPI V3 it is part of the servers field. By default, the value for these fields is determined by X-Forwarded headers, X-Forwarded-Host & X-Forwarded-Proto . This can be overridden by disabling the lookup of X-Forwarded headers and by specifying your own host, port & scheme on the REST configuration. restConfiguration().component("netty-http") .useXForwardHeaders(false) .apiProperty("schemes", "https") .host("localhost") .port(8080); 92.7. Examples In the Apache Camel distribution we ship the camel-example-openapi-cdi and camel-example-spring-boot-rest-openapi-simple which demonstrate using this OpenApi component. 92.8. Spring Boot Auto-Configuration The component supports 1 option, which is listed below. Name Description Default Type camel.openapi.enabled Enables Camel Rest DSL to automatically register its OpenAPI (eg swagger doc) in Spring Boot which allows tooling such as SpringDoc to integrate with Camel. true Boolean
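As a quick check of the generated document, you could request the spec produced by the configuration shown earlier in this chapter; the host, port, and /api-doc path below are assumptions taken from that example, and the exact URL depends on your context path settings.

curl http://localhost:8080/api-doc/openapi.json
curl http://localhost:8080/api-doc/openapi.yaml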
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-openapi-java-starter</artifactId> </dependency>", "public class UserRouteBuilder extends RouteBuilder { @Override public void configure() throws Exception { // configure we want to use servlet as the component for the rest DSL // and we enable json binding mode restConfiguration().component(\"netty-http\").bindingMode(RestBindingMode.json) // and output using pretty print .dataFormatProperty(\"prettyPrint\", \"true\") // setup context path and port number that netty will use .contextPath(\"/\").port(8080) // add OpenApi api-doc out of the box .apiContextPath(\"/api-doc\") .apiProperty(\"api.title\", \"User API\").apiProperty(\"api.version\", \"1.2.3\") // and enable CORS .apiProperty(\"cors\", \"true\"); // this user REST service is json only rest(\"/user\").description(\"User rest service\") .consumes(\"application/json\").produces(\"application/json\") .get(\"/{id}\").description(\"Find user by id\").outType(User.class) .param().name(\"id\").type(path).description(\"The id of the user to get\").dataType(\"int\").endParam() .to(\"bean:userService?method=getUser(USD{header.id})\") .put().description(\"Updates or create a user\").type(User.class) .param().name(\"body\").type(body).description(\"The user to update or create\").endParam() .to(\"bean:userService?method=updateUser\") .get(\"/findAll\").description(\"Find all users\").outType(User[].class) .to(\"bean:userService?method=listUsers\"); } }", "rest(\"/user\").tag(\"dude\").description(\"User rest service\") // setup security definitions .securityDefinitions() .oauth2(\"petstore_auth\").authorizationUrl(\"http://petstore.swagger.io/oauth/dialog\").end() .apiKey(\"api_key\").withHeader(\"myHeader\").end() .end() .consumes(\"application/json\").produces(\"application/json\")", ".get(\"/{id}/{date}\").description(\"Find user by id and date\").outType(User.class) .security(\"api_key\") .put().description(\"Updates or create a user\").type(User.class) .security(\"petstore_auth\", \"write:pets,read:pets\")", "restConfiguration().component(\"netty-http\") .useXForwardHeaders(false) .apiProperty(\"schemes\", \"https\"); .host(\"localhost\") .port(8080);" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-openapi-java-starter
Chapter 2. Managing your cluster resources
Chapter 2. Managing your cluster resources You can apply global configuration options in OpenShift Container Platform. Operators apply these configuration settings across the cluster. 2.1. Interacting with your cluster resources You can interact with cluster resources by using the OpenShift CLI ( oc ) tool in OpenShift Container Platform. The cluster resources that you see after running the oc api-resources command can be edited. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have access to the web console or you have installed the oc CLI tool. Procedure To see which configuration Operators have been applied, run the following command: USD oc api-resources -o name | grep config.openshift.io To see what cluster resources you can configure, run the following command: USD oc explain <resource_name>.config.openshift.io To see the configuration of custom resource definition (CRD) objects in the cluster, run the following command: USD oc get <resource_name>.config -o yaml To edit the cluster resource configuration, run the following command: USD oc edit <resource_name>.config -o yaml
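For a concrete illustration of the pattern above, you could use the cluster-wide image configuration; image.config.openshift.io is one of the resources returned by the first command, and cluster is its conventional singleton object name.

oc explain image.config.openshift.io
oc get image.config.openshift.io/cluster -o yaml
oc edit image.config.openshift.io/cluster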
[ "oc api-resources -o name | grep config.openshift.io", "oc explain <resource_name>.config.openshift.io", "oc get <resource_name>.config -o yaml", "oc edit <resource_name>.config -o yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/support/managing-cluster-resources
Appendix A. Initialization script for provisioning examples
Appendix A. Initialization script for provisioning examples If you have not followed the examples in Managing content , you can use the following initialization script to create an environment for provisioning examples. Procedure Create a script file ( content-init.sh ) and include the following: #!/bin/bash MANIFEST=USD1 # Import the content from Red Hat CDN hammer organization create \ --name "ACME" \ --label "ACME" \ --description "My example organization for managing content" hammer subscription upload \ --file ~/USDMANIFEST \ --organization "ACME" hammer repository-set enable \ --name "Red Hat Enterprise Linux 7 Server (RPMs)" \ --releasever "7Server" \ --basearch "x86_64" \ --product "Red Hat Enterprise Linux Server" \ --organization "ACME" hammer repository-set enable \ --name "Red Hat Enterprise Linux 7 Server (Kickstart)" \ --releasever "7Server" \ --basearch "x86_64" \ --product "Red Hat Enterprise Linux Server" \ --organization "ACME" hammer repository-set enable \ --name "Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)" \ --basearch "x86_64" \ --product "Red Hat Enterprise Linux Server" \ --organization "ACME" hammer product synchronize --name "Red Hat Enterprise Linux Server" \ --organization "ACME" # Create your application lifecycle hammer lifecycle-environment create \ --name "Development" \ --description "Environment for ACME's Development Team" \ --prior "Library" \ --organization "ACME" hammer lifecycle-environment create \ --name "Testing" \ --description "Environment for ACME's Quality Engineering Team" \ --prior "Development" \ --organization "ACME" hammer lifecycle-environment create \ --name "Production" \ --description "Environment for ACME's Product Releases" \ --prior "Testing" \ --organization "ACME" # Create and publish your content view hammer content-view create \ --name "Base" \ --description "Base operating system" \ --repositories "Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server,Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64" \ --organization "ACME" hammer content-view publish \ --name "Base" \ --description "My initial content view for my operating system" \ --organization "ACME" hammer content-view version promote \ --content-view "Base" \ --version 1 \ --to-lifecycle-environment "Development" \ --organization "ACME" hammer content-view version promote \ --content-view "Base" \ --version 1 \ --to-lifecycle-environment "Testing" \ --organization "ACME" hammer content-view version promote \ --content-view "Base" \ --version 1 \ --to-lifecycle-environment "Production" \ --organization "ACME" Set executable permissions on the script: Download a copy of your Red Hat Subscription Manifest from the Red Hat Customer Portal and run the script on the manifest: This imports the necessary Red Hat content for the provisioning examples in this guide.
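After the script completes, you can verify the created objects with hammer; the following checks are a minimal sketch and assume the ACME organization created by the script.

hammer lifecycle-environment list --organization "ACME"
hammer content-view list --organization "ACME"
hammer repository list --organization "ACME"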
[ "#!/bin/bash MANIFEST=USD1 Import the content from Red Hat CDN hammer organization create --name \"ACME\" --label \"ACME\" --description \"My example organization for managing content\" hammer subscription upload --file ~/USDMANIFEST --organization \"ACME\" hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (RPMs)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name \"Red Hat Enterprise Linux 7 Server (Kickstart)\" --releasever \"7Server\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer repository-set enable --name \"Red Hat Satellite Client 6 (for RHEL 7 Server) (RPMs)\" --basearch \"x86_64\" --product \"Red Hat Enterprise Linux Server\" --organization \"ACME\" hammer product synchronize --name \"Red Hat Enterprise Linux Server\" --organization \"ACME\" Create your application lifecycle hammer lifecycle-environment create --name \"Development\" --description \"Environment for ACME's Development Team\" --prior \"Library\" --organization \"ACME\" hammer lifecycle-environment create --name \"Testing\" --description \"Environment for ACME's Quality Engineering Team\" --prior \"Development\" --organization \"ACME\" hammer lifecycle-environment create --name \"Production\" --description \"Environment for ACME's Product Releases\" --prior \"Testing\" --organization \"ACME\" Create and publish your content view hammer content-view create --name \"Base\" --description \"Base operating system\" --repositories \"Red Hat Enterprise Linux 7 Server RPMs x86_64 7Server,Red Hat Satellite Client 6 for RHEL 7 Server RPMs x86_64\" --organization \"ACME\" hammer content-view publish --name \"Base\" --description \"My initial content view for my operating system\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Development\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Testing\" --organization \"ACME\" hammer content-view version promote --content-view \"Base\" --version 1 --to-lifecycle-environment \"Production\" --organization \"ACME\"", "chmod +x content-init.sh", "./content-init.sh manifest_98f4290e-6c0b-4f37-ba79-3a3ec6e405ba.zip" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/provisioning_hosts/initialization_script_for_provisioning_examples_provisioning
Chapter 9. Image tags overview
Chapter 9. Image tags overview An image tag refers to a label or identifier assigned to a specific version or variant of a container image. Container images are typically composed of multiple layers that represent different parts of the image. Image tags are used to differentiate between different versions of an image or to provide additional information about the image. Image tags have the following benefits: Versioning and Releases : Image tags allow you to denote different versions or releases of an application or software. For example, you might have an image tagged as v1.0 to represent the initial release and v1.1 for an updated version. This helps in maintaining a clear record of image versions. Rollbacks and Testing : If you encounter issues with a new image version, you can easily revert to a previous version by specifying its tag. This is helpful during debugging and testing phases. Development Environments : Image tags are beneficial when working with different environments. You might use a dev tag for a development version, qa for quality assurance testing, and prod for production, each with their respective features and configurations. Continuous Integration/Continuous Deployment (CI/CD) : CI/CD pipelines often utilize image tags to automate the deployment process. New code changes can trigger the creation of a new image with a specific tag, enabling seamless updates. Feature Branches : When multiple developers are working on different features or bug fixes, they can create distinct image tags for their changes. This helps in isolating and testing individual features. Customization : You can use image tags to customize images with different configurations, dependencies, or optimizations, while keeping track of each variant. Security and Patching : When security vulnerabilities are discovered, you can create patched versions of images with updated tags, ensuring that your systems are using the latest secure versions. Dockerfile Changes : If you modify the Dockerfile or build process, you can use image tags to differentiate between images built from the original and updated Dockerfiles. Overall, image tags provide a structured way to manage and organize container images, enabling efficient development, deployment, and maintenance workflows. 9.1. Viewing image tag information by using the UI Use the following procedure to view image tag information using the v2 UI. Prerequisites You have pushed an image tag to a repository. Procedure On the v2 UI, click Repositories . Click the name of a repository. Click the name of a tag. You are taken to the Details page of that tag. The page reveals the following information: Name Repository Digest Vulnerabilities Creation Modified Size Labels How to fetch the image tag Click Security Report to view the tag's vulnerabilities. You can expand an advisory column to open up CVE data. Click Packages to view the tag's packages. Click the name of the repository to return to the Tags page. 9.2. Adding a new image tag to an image by using the UI You can add a new tag to an image in Quay.io. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab, then click Add new tag . Enter a name for the tag, then click Create tag . The new tag is now listed on the Repository Tags page. 9.3. Adding and managing labels by using the UI Administrators can add and manage labels for tags by using the following procedure. 
Procedure On the v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Edit labels . In the Edit labels window, click Add new label . Enter a label for the image tag using the key=value format, for example, com.example.release-date=2023-11-14 . Note The following error is returned when failing to use the key=value format: Invalid label format, must be key value separated by = . Click the whitespace of the box to add the label. Optional. Add a second label. Click Save labels to save the label to the image tag. The following notification is returned: Created labels successfully . Optional. Click the same image tag's menu kebab, select Edit labels , and click the X on the label to remove it; alternatively, you can edit the text. Click Save labels . The label is now removed or edited. 9.4. Setting tag expirations Image tags can be set to expire from a Quay.io repository at a chosen date and time using the tag expiration feature. This feature includes the following characteristics: When an image tag expires, it is deleted from the repository. If it is the last tag for a specific image, the image is also set to be deleted. Expiration is set on a per-tag basis. It is not set for a repository as a whole. After a tag is expired or deleted, it is not immediately removed from the registry. This is contingent upon the allotted time designated in the time machine feature, which defines when the tag is permanently deleted, or garbage collected. By default, this value is set at 14 days , however the administrator can adjust this time to one of multiple options. Up until the point that garbage collection occurs, tag changes can be reverted. Tag expiration can be set up in one of two ways: By setting the quay.expires-after= label in the Dockerfile when the image is created. This sets a time to expire from when the image is built. By selecting an expiration date on the Quay.io UI. For example: Setting tag expirations can help automate the cleanup of older or unused tags, helping to reduce storage space. 9.4.1. Setting tag expiration from a repository Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click the menu kebab for an image and select Change expiration . Optional. Alternatively, you can bulk add expiration dates by clicking the box of multiple tags, and then select Actions Set expiration . In the Change Tags Expiration window, set an expiration date, specifying the day of the week, month, day of the month, and year. For example, Wednesday, November 15, 2023 . Alternatively, you can click the calendar button and manually select the date. Set the time, for example, 2:30 PM . Click Change Expiration to confirm the date and time. The following notification is returned: Successfully set expiration for tag test to Nov 15, 2023, 2:26 PM . On the Red Hat Quay v2 UI Tags page, you can see when the tag is set to expire. For example: 9.4.2. Setting tag expiration from a Dockerfile You can add a label, for example, quay.expires-after=20h to an image tag by using the docker label command to cause the tag to automatically expire after the time that is indicated. The following values for hours, days, or weeks are accepted: 1h 2d 3w Expiration begins from the time that the image is pushed to the registry. Procedure Enter the following docker label command to add a label to the desired image tag. 
The label should be in the format quay.expires-after=20h to indicate that the tag should expire after 20 hours. Replace 20h with the desired expiration time. For example: USD docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag> 9.5. Fetching an image by tag or digest Quay.io offers multiple ways of pulling images using Docker and Podman clients. Procedure Navigate to the Tags page of a repository. Under Manifest , click the Fetch Tag icon. When the popup box appears, users are presented with the following options: Podman Pull (by tag) Docker Pull (by tag) Podman Pull (by digest) Docker Pull (by digest) Selecting any one of the four options returns a command for the respective client that allows users to pull the image. Click Copy Command to copy the command, which can be used on the command-line interface (CLI). For example: USD podman pull quay.io/quayadmin/busybox:test2 9.6. Viewing Red Hat Quay tag history by using the UI Quay.io offers a comprehensive history of images and their respective image tags. Procedure On the Red Hat Quay v2 UI dashboard, click Repositories in the navigation pane. Click the name of a repository that has image tags. Click Tag History . On this page, you can perform the following actions: Search by tag name Select a date range View tag changes View tag modification dates and the time at which they were changed 9.7. Deleting an image tag Deleting an image tag removes that specific version of the image from the registry. To delete an image tag, use the following procedure. Procedure On the Repositories page of the v2 UI, click the name of the image you want to delete, for example, quay/admin/busybox . Click the More Actions drop-down menu. Click Delete . Note If desired, you could click Make Public or Make Private . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. Note Deleting an image tag can be reverted based on the amount of time allotted to the time machine feature. For more information, see "Reverting tag changes". 9.8. Reverting tag changes by using the UI Quay.io offers a comprehensive time machine feature that allows older image tags to remain in the repository for set periods of time so that users can revert changes made to tags. This feature allows users to revert tag changes, like tag deletions. Procedure On the Repositories page of the v2 UI, click the name of the image you want to revert. Click the Tag History tab. Find the point in the timeline at which image tags were changed or removed. Then, click the option under Revert to restore a tag to its image.
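For readers who prefer the command line, the following hedged sketches mirror some of the UI operations described in this chapter; the quayadmin/busybox repository comes from the examples above, while the v1.1 and shortlived tag names, the 2d expiration value, and the digest placeholder are assumptions used only for illustration.

podman tag quay.io/quayadmin/busybox:latest quay.io/quayadmin/busybox:v1.1
podman push quay.io/quayadmin/busybox:v1.1
podman build --label quay.expires-after=2d -t quay.io/quayadmin/busybox:shortlived .
podman push quay.io/quayadmin/busybox:shortlived
podman pull quay.io/quayadmin/busybox@sha256:<digest>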
[ "docker label quay.expires-after=20h quay-server.example.com/quayadmin/<image>:<tag>", "podman pull quay.io/quayadmin/busybox:test2" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/about_quay_io/image-tags-overview
Chapter 7. GFS2 file systems in a cluster
Chapter 7. GFS2 file systems in a cluster Use the following administrative procedures to configure GFS2 file systems in a Red Hat high availability cluster. 7.1. Configuring a GFS2 file system in a cluster You can set up a Pacemaker cluster that includes GFS2 file systems with the following procedure. In this example, you create three GFS2 file systems on three logical volumes in a two-node cluster. Prerequisites Install and start the cluster software on both cluster nodes and create a basic two-node cluster. Configure fencing for the cluster. For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker . Procedure On both nodes in the cluster, enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, you can enter the following subscription-manager command: Note that the Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository you do not also need to enable the High Availability repository. On both nodes of the cluster, install the lvm2-lockd , gfs2-utils , and dlm packages. To support these packages, you must be subscribed to the AppStream channel and the Resilient Storage channel. On both nodes of the cluster, set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1 . Set the global Pacemaker parameter no-quorum-policy to freeze . Note By default, the value of no-quorum-policy is set to stop , indicating that once quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained. Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking . Clone the locking resource group so that the resource group can be active on both nodes of the cluster. Set up an lvmlockd resource as part of the locking resource group. Check the status of the cluster to ensure that the locking resource group has started on both nodes of the cluster. On one node of the cluster, create two shared volume groups. One volume group will contain two GFS2 file systems, and the other volume group will contain one GFS2 file system. Note If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker . The following command creates the shared volume group shared_vg1 on /dev/vdb . The following command creates the shared volume group shared_vg2 on /dev/vdc . 
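As a hedged sketch, the commands described so far in this procedure might look like the following; the locking group name and the /dev/vdb and /dev/vdc devices follow the examples above, while option values such as the monitor interval are assumptions based on typical usage rather than required settings.

pcs property set no-quorum-policy=freeze
pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence
pcs resource clone locking interleave=true
pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence
pcs status --full
vgcreate --shared shared_vg1 /dev/vdb
vgcreate --shared shared_vg2 /dev/vdc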
On the second node in the cluster: (RHEL 8.5 and later) If you have enabled the use of a devices file by setting use_devicesfile = 1 in the lvm.conf file, add the shared devices to the devices file. By default, the use of a devices file is not enabled. Start the lock manager for each of the shared volume groups. On one node in the cluster, create the shared logical volumes and format the volumes with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique for all lock_dlm file systems over the cluster. Create an LVM-activate resource for each logical volume to automatically activate that logical volume on all nodes. Create an LVM-activate resource named sharedlv1 for the logical volume shared_lv1 in volume group shared_vg1 . This command also creates the resource group shared_vg1 that includes the resource. In this example, the resource group has the same name as the shared volume group that includes the logical volume. Create an LVM-activate resource named sharedlv2 for the logical volume shared_lv2 in volume group shared_vg1 . This resource will also be part of the resource group shared_vg1 . Create an LVM-activate resource named sharedlv3 for the logical volume shared_lv1 in volume group shared_vg2 . This command also creates the resource group shared_vg2 that includes the resource. Clone the two new resource groups. Configure ordering constraints to ensure that the locking resource group that includes the dlm and lvmlockd resources starts first. Configure colocation constraints to ensure that the vg1 and vg2 resource groups start on the same node as the locking resource group. On both nodes in the cluster, verify that the logical volumes are active. There may be a delay of a few seconds. Create a file system resource to automatically mount each GFS2 file system on all nodes. You should not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options= options . Run the pcs resource describe Filesystem command to display the full configuration options. The following commands create the file system resources. These commands add each resource to the resource group that includes the logical volume resource for that file system. Verification Verify that the GFS2 file systems are mounted on both nodes of the cluster. Check the status of the cluster. Additional resources Configuring GFS2 file systems Configuring a Red Hat High Availability cluster on Microsoft Azure Configuring a Red Hat High Availability cluster on AWS Configuring a Red Hat High Availability Cluster on Google Cloud Platform Configuring shared block storage for a Red Hat High Availability cluster on Alibaba Cloud 7.2. Configuring an encrypted GFS2 file system in a cluster (RHEL 8.4 and later) You can create a Pacemaker cluster that includes a LUKS encrypted GFS2 file system with the following procedure. In this example, you create one GFS2 file systems on a logical volume and encrypt the file system. Encrypted GFS2 file systems are supported using the crypt resource agent, which provides support for LUKS encryption. 
There are three parts to this procedure: Configuring a shared logical volume in a Pacemaker cluster Encrypting the logical volume and creating a crypt resource Formatting the encrypted logical volume with a GFS2 file system and creating a file system resource for the cluster 7.2.1. Configure a shared logical volume in a Pacemaker cluster Prerequisites Install and start the cluster software on two cluster nodes and create a basic two-node cluster. Configure fencing for the cluster. For information about creating a Pacemaker cluster and configuring fencing for the cluster, see Creating a Red Hat High-Availability cluster with Pacemaker . Procedure On both nodes in the cluster, enable the repository for Resilient Storage that corresponds to your system architecture. For example, to enable the Resilient Storage repository for an x86_64 system, you can enter the following subscription-manager command: Note that the Resilient Storage repository is a superset of the High Availability repository. If you enable the Resilient Storage repository you do not also need to enable the High Availability repository. On both nodes of the cluster, install the lvm2-lockd , gfs2-utils , and dlm packages. To support these packages, you must be subscribed to the AppStream channel and the Resilient Storage channel. On both nodes of the cluster, set the use_lvmlockd configuration option in the /etc/lvm/lvm.conf file to use_lvmlockd=1 . Set the global Pacemaker parameter no-quorum-policy to freeze . Note By default, the value of no-quorum-policy is set to stop , indicating that when quorum is lost, all the resources on the remaining partition will immediately be stopped. Typically this default is the safest and most optimal option, but unlike most resources, GFS2 requires quorum to function. When quorum is lost both the applications using the GFS2 mounts and the GFS2 mount itself cannot be correctly stopped. Any attempts to stop these resources without quorum will fail which will ultimately result in the entire cluster being fenced every time quorum is lost. To address this situation, set no-quorum-policy to freeze when GFS2 is in use. This means that when quorum is lost, the remaining partition will do nothing until quorum is regained. Set up a dlm resource. This is a required dependency for configuring a GFS2 file system in a cluster. This example creates the dlm resource as part of a resource group named locking . Clone the locking resource group so that the resource group can be active on both nodes of the cluster. Set up an lvmlockd resource as part of the group locking . Check the status of the cluster to ensure that the locking resource group has started on both nodes of the cluster. On one node of the cluster, create a shared volume group. Note If your LVM volume group contains one or more physical volumes that reside on remote block storage, such as an iSCSI target, Red Hat recommends that you ensure that the service starts before Pacemaker starts. For information about configuring startup order for a remote physical volume used by a Pacemaker cluster, see Configuring startup order for resource dependencies not managed by Pacemaker . The following command creates the shared volume group shared_vg1 on /dev/sda1 . On the second node in the cluster: (RHEL 8.5 and later) If you have enabled the use of a devices file by setting use_devicesfile = 1 in the lvm.conf file, add the shared device to the devices file on the second node in the cluster. By default, the use of a devices file is not enabled. 
Start the lock manager for the shared volume group. On one node in the cluster, create the shared logical volume. Create an LVM-activate resource for the logical volume to automatically activate the logical volume on all nodes. The following command creates an LVM-activate resource named sharedlv1 for the logical volume shared_lv1 in volume group shared_vg1 . This command also creates the resource group shared_vg1 that includes the resource. In this example, the resource group has the same name as the shared volume group that includes the logical volume. Clone the new resource group. Configure an ordering constraint to ensure that the locking resource group that includes the dlm and lvmlockd resources starts first. Configure a colocation constraint to ensure that the shared_vg1 resource group starts on the same node as the locking resource group. Verification On both nodes in the cluster, verify that the logical volume is active. There may be a delay of a few seconds. 7.2.2. Encrypt the logical volume and create a crypt resource Prerequisites You have configured a shared logical volume in a Pacemaker cluster. Procedure On one node in the cluster, create a new file that will contain the crypt key and set the permissions on the file so that it is readable only by root. Create the crypt key. Distribute the crypt keyfile to the other nodes in the cluster, using the -p parameter to preserve the permissions you set. Create the encrypted device on the LVM volume where you will configure the encrypted GFS2 file system. Create the crypt resource as part of the shared_vg1 resource group. Verification Ensure that the crypt resource has created the crypt device, which in this example is /dev/mapper/luks_lv1 . 7.2.3. Format the encrypted logical volume with a GFS2 file system and create a file system resource for the cluster Prerequisites You have encrypted the logical volume and created a crypt resource. Procedure On one node in the cluster, format the volume with a GFS2 file system. One journal is required for each node that mounts the file system. Ensure that you create enough journals for each of the nodes in your cluster. The format of the lock table name is ClusterName:FSName where ClusterName is the name of the cluster for which the GFS2 file system is being created and FSName is the file system name, which must be unique for all lock_dlm file systems over the cluster. Create a file system resource to automatically mount the GFS2 file system on all nodes. Do not add the file system to the /etc/fstab file because it will be managed as a Pacemaker cluster resource. Mount options can be specified as part of the resource configuration with options=options . Run the pcs resource describe Filesystem command for full configuration options. The following command creates the file system resource. This command adds the resource to the resource group that includes the logical volume resource for that file system. Verification Verify that the GFS2 file system is mounted on both nodes of the cluster. Check the status of the cluster. Additional resources Configuring GFS2 file systems 
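Pulling the encryption-specific steps together: the key file must exist with identical content and permissions on every node before the crypt resource can start, and within the shared_vg1 resource group the resources start in the order they were added (sharedlv1, then crypt, then sharedfs1), so the LUKS device is opened after the logical volume is activated and before the file system is mounted. The following is a condensed sketch based on the commands listed in this document; the node name in the scp command is a placeholder.

# Create the key file on one node, readable only by root, then copy it to the other node,
# preserving the permissions.
touch /etc/crypt_keyfile
chmod 600 /etc/crypt_keyfile
dd if=/dev/urandom bs=4K count=1 of=/etc/crypt_keyfile
scp -p /etc/crypt_keyfile root@<other_node>:/etc/
# Resource start order inside the cloned shared_vg1 group:
#   sharedlv1 (LVM-activate) -> crypt (opens /dev/mapper/luks_lv1) -> sharedfs1 (mounts /mnt/gfs1)
# Verify on both nodes once the resources are running:
ls -l /dev/mapper/luks_lv1
mount | grep gfs2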
7.3. Migrating a GFS2 file system from RHEL7 to RHEL8 You can use your existing Red Hat Enterprise Linux 7 logical volumes when configuring a RHEL 8 cluster that includes GFS2 file systems. In Red Hat Enterprise Linux 8, LVM uses the LVM lock daemon lvmlockd instead of clvmd for managing shared storage devices in an active/active cluster. This requires that you configure the logical volumes that your active/active cluster will require as shared logical volumes. Additionally, this requires that you use the LVM-activate resource to manage an LVM volume and that you use the lvmlockd resource agent to manage the lvmlockd daemon. See Configuring a GFS2 file system in a cluster for a full procedure for configuring a Pacemaker cluster that includes GFS2 file systems using shared logical volumes. To use your existing Red Hat Enterprise Linux 7 logical volumes when configuring a RHEL 8 cluster that includes GFS2 file systems, perform the following procedure from the RHEL 8 cluster. In this example, the clustered RHEL 7 logical volume is part of the volume group upgrade_gfs_vg . Note The RHEL 8 cluster must have the same name as the RHEL 7 cluster that includes the GFS2 file system in order for the existing file system to be valid. Procedure Ensure that the logical volumes containing the GFS2 file systems are currently inactive. This procedure is safe only if all nodes have stopped using the volume group. From one node in the cluster, forcibly change the volume group to be local. From one node in the cluster, change the local volume group to a shared volume group. On each node in the cluster, start locking for the volume group. After performing this procedure, you can create an LVM-activate resource for each logical volume.
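The following is a condensed sketch of the migration sequence, using the vgchange commands listed in this document for the example volume group upgrade_gfs_vg . The final pcs command follows the same LVM-activate pattern used earlier in this chapter; the resource, group, and logical volume names shown in it are illustrative placeholders, not values from this document.

# From one node: forcibly clear the existing lock type, then convert the volume group to dlm locking.
vgchange --lock-type none --lock-opt force upgrade_gfs_vg
vgchange --lock-type dlm upgrade_gfs_vg
# On each node: start the dlm lockspace for the volume group.
vgchange --lockstart upgrade_gfs_vg
# Then create an LVM-activate resource for each logical volume, for example
# (adjust the resource, group, and logical volume names to your layout):
pcs resource create upgradelv1 --group upgrade_gfs_vg ocf:heartbeat:LVM-activate lvname=<logical_volume_name> vgname=upgrade_gfs_vg activation_mode=shared vg_access_mode=lvmlockd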
[ "subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms", "yum install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/vdb Physical volume \"/dev/vdb\" successfully created. Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "vgcreate --shared shared_vg2 /dev/vdc Physical volume \"/dev/vdc\" successfully created. Volume group \"shared_vg2\" successfully created VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/vdb lvmdevices --adddev /dev/vdc", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready vgchange --lockstart shared_vg2 VG shared_vg2 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created. lvcreate --activate sy -L5G -n shared_lv2 shared_vg1 Logical volume \"shared_lv2\" created. lvcreate --activate sy -L5G -n shared_lv1 shared_vg2 Logical volume \"shared_lv1\" created. 
mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/shared_vg1/shared_lv1 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo2 /dev/shared_vg1/shared_lv2 mkfs.gfs2 -j2 -p lock_dlm -t my_cluster:gfs2-demo3 /dev/shared_vg2/shared_lv1", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv2 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv2 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource create sharedlv3 --group shared_vg2 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg2 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true pcs resource clone shared_vg2 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start) pcs constraint order start locking-clone then shared_vg2-clone Adding locking-clone shared_vg2-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone pcs constraint colocation add shared_vg2-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g shared_lv2 shared_vg1 -wi-a----- 5.00g shared_lv1 shared_vg2 -wi-a----- 5.00g", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs2 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/shared_vg1/shared_lv2\" directory=\"/mnt/gfs2\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence pcs resource create sharedfs3 --group shared_vg2 ocf:heartbeat:Filesystem device=\"/dev/shared_vg2/shared_lv1\" directory=\"/mnt/gfs3\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/shared_vg1-shared_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg1-shared_lv2 on /mnt/gfs2 type gfs2 (rw,noatime,seclabel) /dev/mapper/shared_vg2-shared_lv1 on /mnt/gfs3 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] 
Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedlv2 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com sharedfs2 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg2-clone [shared_vg2] Resource Group: shared_vg2:0 sharedlv3 (ocf::heartbeat:LVM-activate): Started z2.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg2:1 sharedlv3 (ocf::heartbeat:LVM-activate): Started z1.example.com sharedfs3 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [ z1.example.com z2.example.com ]", "subscription-manager repos --enable=rhel-8-for-x86_64-resilientstorage-rpms", "yum install lvm2-lockd gfs2-utils dlm", "use_lvmlockd = 1", "pcs property set no-quorum-policy=freeze", "pcs resource create dlm --group locking ocf:pacemaker:controld op monitor interval=30s on-fail=fence", "pcs resource clone locking interleave=true", "pcs resource create lvmlockd --group locking ocf:heartbeat:lvmlockd op monitor interval=30s on-fail=fence", "pcs status --full Cluster name: my_cluster [...] Online: [ z1.example.com (1) z2.example.com (2) ] Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Started: [ z1.example.com z2.example.com ]", "vgcreate --shared shared_vg1 /dev/sda1 Physical volume \"/dev/sda1\" successfully created. Volume group \"shared_vg1\" successfully created VG shared_vg1 starting dlm lockspace Starting locking. Waiting until locks are ready", "lvmdevices --adddev /dev/sda1", "vgchange --lockstart shared_vg1 VG shared_vg1 starting dlm lockspace Starting locking. 
Waiting until locks are ready", "lvcreate --activate sy -L5G -n shared_lv1 shared_vg1 Logical volume \"shared_lv1\" created.", "pcs resource create sharedlv1 --group shared_vg1 ocf:heartbeat:LVM-activate lvname=shared_lv1 vgname=shared_vg1 activation_mode=shared vg_access_mode=lvmlockd", "pcs resource clone shared_vg1 interleave=true", "pcs constraint order start locking-clone then shared_vg1-clone Adding locking-clone shared_vg1-clone (kind: Mandatory) (Options: first-action=start then-action=start)", "pcs constraint colocation add shared_vg1-clone with locking-clone", "lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g lvs LV VG Attr LSize shared_lv1 shared_vg1 -wi-a----- 5.00g", "touch /etc/crypt_keyfile chmod 600 /etc/crypt_keyfile", "dd if=/dev/urandom bs=4K count=1 of=/etc/crypt_keyfile 1+0 records in 1+0 records out 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306202 s, 13.4 MB/s scp /etc/crypt_keyfile [email protected]:/etc/", "scp -p /etc/crypt_keyfile [email protected]:/etc/", "cryptsetup luksFormat /dev/shared_vg1/shared_lv1 --type luks2 --key-file=/etc/crypt_keyfile WARNING! ======== This will overwrite data on /dev/shared_vg1/shared_lv1 irrevocably. Are you sure? (Type 'yes' in capital letters): YES", "pcs resource create crypt --group shared_vg1 ocf:heartbeat:crypt crypt_dev=\"luks_lv1\" crypt_type=luks2 key_file=/etc/crypt_keyfile encrypted_dev=\"/dev/shared_vg1/shared_lv1\"", "ls -l /dev/mapper/ lrwxrwxrwx 1 root root 7 Mar 4 09:52 luks_lv1 -> ../dm-3", "mkfs.gfs2 -j3 -p lock_dlm -t my_cluster:gfs2-demo1 /dev/mapper/luks_lv1 /dev/mapper/luks_lv1 is a symbolic link to /dev/dm-3 This will destroy any data on /dev/dm-3 Are you sure you want to proceed? [y/n] y Discarding device contents (may take a while on large devices): Done Adding journals: Done Building resource groups: Done Creating quota file: Done Writing superblock and syncing: Done Device: /dev/mapper/luks_lv1 Block size: 4096 Device size: 4.98 GB (1306624 blocks) Filesystem size: 4.98 GB (1306622 blocks) Journals: 3 Journal size: 16MB Resource groups: 23 Locking protocol: \"lock_dlm\" Lock table: \"my_cluster:gfs2-demo1\" UUID: de263f7b-0f12-4d02-bbb2-56642fade293", "pcs resource create sharedfs1 --group shared_vg1 ocf:heartbeat:Filesystem device=\"/dev/mapper/luks_lv1\" directory=\"/mnt/gfs1\" fstype=\"gfs2\" options=noatime op monitor interval=10s on-fail=fence", "mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel) mount | grep gfs2 /dev/mapper/luks_lv1 on /mnt/gfs1 type gfs2 (rw,noatime,seclabel)", "pcs status --full Cluster name: my_cluster [...] 
Full list of resources: smoke-apc (stonith:fence_apc): Started z1.example.com Clone Set: locking-clone [locking] Resource Group: locking:0 dlm (ocf::pacemaker:controld): Started z2.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z2.example.com Resource Group: locking:1 dlm (ocf::pacemaker:controld): Started z1.example.com lvmlockd (ocf::heartbeat:lvmlockd): Started z1.example.com Started: [ z1.example.com z2.example.com ] Clone Set: shared_vg1-clone [shared_vg1] Resource Group: shared_vg1:0 sharedlv1 (ocf::heartbeat:LVM-activate): Started z2.example.com crypt (ocf::heartbeat:crypt) Started z2.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z2.example.com Resource Group: shared_vg1:1 sharedlv1 (ocf::heartbeat:LVM-activate): Started z1.example.com crypt (ocf::heartbeat:crypt) Started z1.example.com sharedfs1 (ocf::heartbeat:Filesystem): Started z1.example.com Started: [z1.example.com z2.example.com ]", "vgchange --lock-type none --lock-opt force upgrade_gfs_vg Forcibly change VG lock type to none? [y/n]: y Volume group \"upgrade_gfs_vg\" successfully changed", "vgchange --lock-type dlm upgrade_gfs_vg Volume group \"upgrade_gfs_vg\" successfully changed", "vgchange --lockstart upgrade_gfs_vg VG upgrade_gfs_vg starting dlm lockspace Starting locking. Waiting until locks are ready vgchange --lockstart upgrade_gfs_vg VG upgrade_gfs_vg starting dlm lockspace Starting locking. Waiting until locks are ready" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_high_availability_clusters/assembly_configuring-gfs2-in-a-cluster-configuring-and-managing-high-availability-clusters
Networking
Networking OpenShift Container Platform 4.9 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
[ "ssh -i <ssh-key-path> core@<master-hostname>", "oc get -n openshift-network-operator deployment/network-operator", "NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m", "oc get clusteroperator/network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m", "oc describe network.config/cluster", "Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none>", "oc describe clusteroperators/network", "oc logs --namespace=openshift-network-operator deployment/network-operator", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s", "oc get -n openshift-dns-operator deployment/dns-operator", "NAME READY UP-TO-DATE AVAILABLE AGE dns-operator 1/1 1 1 23h", "oc get clusteroperator/dns", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE dns 4.1.0-0.11 True False False 92m", "patch dns.operator.openshift.io default --type merge --patch '{\"spec\":{\"managementState\":\"Unmanaged\"}}'", "oc edit dns.operator/default", "spec: nodePlacement: nodeSelector: node-role.kubernetes.io/worker: \"\"", "oc edit dns.operator/default", "spec: nodePlacement: tolerations: - effect: NoExecute key: \"dns-only\" operators: Equal value: abc tolerationSeconds: 3600 1", "oc describe dns.operator/default", "Name: default Namespace: Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: DNS Status: Cluster Domain: cluster.local 1 Cluster IP: 172.30.0.10 2", "oc get networks.config/cluster -o jsonpath='{USD.status.serviceNetwork}'", "[172.30.0.0/16]", "oc edit dns.operator/default", "apiVersion: operator.openshift.io/v1 kind: DNS metadata: name: default spec: servers: - name: foo-server 1 zones: 2 - example.com forwardPlugin: upstreams: 3 - 1.1.1.1 - 2.2.2.2:5353 - name: bar-server zones: - bar.com - example.com forwardPlugin: upstreams: - 3.3.3.3 - 4.4.4.4:5454", "oc get configmap/dns-default -n openshift-dns -o yaml", "apiVersion: v1 data: Corefile: | example.com:5353 { forward . 1.1.1.1 2.2.2.2:5353 } bar.com:5353 example.com:5353 { forward . 3.3.3.3 4.4.4.4:5454 1 } .:5353 { errors health kubernetes cluster.local in-addr.arpa ip6.arpa { pods insecure upstream fallthrough in-addr.arpa ip6.arpa } prometheus :9153 forward . 
/etc/resolv.conf { policy sequential } cache 30 reload } kind: ConfigMap metadata: labels: dns.operator.openshift.io/owning-dns: default name: dns-default namespace: openshift-dns", "oc describe clusteroperators/dns", "oc logs -n openshift-dns-operator deployment/dns-operator -c dns-operator", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.openshiftdemos.com", "nodePlacement: nodeSelector: matchLabels: kubernetes.io/os: linux tolerations: - effect: NoSchedule operator: Exists", "httpCaptureCookies: - matchType: Exact maxLength: 128 name: MYCOOKIE", "httpCaptureHeaders: request: - maxLength: 256 name: Connection - maxLength: 128 name: User-Agent response: - maxLength: 256 name: Content-Type - maxLength: 256 name: Content-Length", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: old: {} type: Old", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11", "oc describe IngressController default -n openshift-ingress-operator", "Name: default Namespace: openshift-ingress-operator Labels: <none> Annotations: <none> API Version: operator.openshift.io/v1 Kind: IngressController Spec: Tls Security Profile: Custom: Ciphers: ECDHE-ECDSA-CHACHA20-POLY1305 ECDHE-RSA-CHACHA20-POLY1305 ECDHE-RSA-AES128-GCM-SHA256 ECDHE-ECDSA-AES128-GCM-SHA256 Min TLS Version: VersionTLS11 Type: Custom", "oc create configmap router-ca-certs-default --from-file=ca-bundle.pem=client-ca.crt -n openshift-config", "oc edit IngressController default -n openshift-ingress-operator", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: clientTLS: clientCertificatePolicy: Required clientCA: name: router-ca-certs-default allowedSubjectPatterns: - \"^/CN=example.com/ST=NC/C=US/O=Security/OU=OpenShiftUSD\"", "oc describe --namespace=openshift-ingress-operator ingresscontroller/default", "oc describe clusteroperators/ingress", "oc logs --namespace=openshift-ingress-operator deployments/ingress-operator", "oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>", "oc --namespace openshift-ingress-operator get ingresscontrollers", "NAME AGE default 10m", "oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key", "oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default --patch '{\"spec\":{\"defaultCertificate\":{\"name\":\"custom-certs-default\"}}}'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | openssl x509 -noout -subject -issuer -enddate", "subject=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = *.apps.example.com issuer=C = US, ST = NC, L = Raleigh, O = RH, OU = OCP4, CN = example.com notAfter=May 10 08:32:45 2022 GM", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: defaultCertificate: name: custom-certs-default", "oc patch -n openshift-ingress-operator ingresscontrollers/default --type json -p USD'- op: remove\\n path: /spec/defaultCertificate'", "echo Q | openssl s_client -connect console-openshift-console.apps.<domain>:443 -showcerts 2>/dev/null | 
openssl x509 -noout -subject -issuer -enddate", "subject=CN = *.apps.<domain> issuer=CN = ingress-operator@1620633373 notAfter=May 10 10:44:36 2023 GMT", "oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "2", "oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"replicas\": 3}}' --type=merge", "ingresscontroller.operator.openshift.io/default patched", "oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{USD.status.availableReplicas}'", "3", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 3 1", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Container", "oc -n openshift-ingress logs deployment.apps/router-default -c logs", "2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 \"GET / HTTP/1.1\"", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: destination: type: Syslog syslog: address: 1.2.3.4 port: 10514 httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: replicas: 2 logging: access: null", "oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge -p '{\"spec\":{\"tuningOptions\": {\"threadCount\": 8}}}'", "cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "cat router-internal.yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal 3", "oc create -f <name>-ingress-controller.yaml 1", "oc --all-namespaces=true get ingresscontrollers", "oc -n openshift-ingress-operator edit ingresscontroller/default", "spec: endpointPublishingStrategy: loadBalancer: providerParameters: gcp: clientAccess: Global 1 type: GCP scope: Internal type: 
LoadBalancerService", "oc -n openshift-ingress edit svc/router-default -o yaml", "oc replace --force --wait --filename - <<EOF apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: default spec: endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: Internal EOF", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "oc edit IngressController", "spec: routeAdmission: wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed", "oc edit IngressController", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: forwardedHeaderPolicy: Append", "oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true", "oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster annotations: ingress.operator.openshift.io/default-enable-http2: \"true\"", "oc -n openshift-ingress-operator edit ingresscontroller/default", "spec: endpointPublishingStrategy: hostNetwork: protocol: PROXY type: HostNetwork", "spec: endpointPublishingStrategy: nodePort: protocol: PROXY type: NodePortService", "oc edit ingresses.config/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: apps.example.com 1 appsDomain: <test.example.com> 2", "oc expose service hello-openshift route.route.openshift.io/hello-openshift exposed", "oc get routes NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD hello-openshift hello_openshift-<my_project>.test.example.com hello-openshift 8080-tcp None", "oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{\"spec\":{\"httpHeaders\":{\"headerNameCaseAdjustments\":[\"Host\"]}}}'", "oc annotate routes/my-application haproxy.router.openshift.io/h1-adjust-case=true", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: httpHeaders: headerNameCaseAdjustments: - Host", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/h1-adjust-case: true 1 name: my-application namespace: my-application spec: to: kind: Service name: my-application", "oc -n openshift-config create configmap my-custom-error-code-pages --from-file=error-page-503.http --from-file=error-page-404.http", "oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{\"spec\":{\"httpErrorCodePages\":{\"name\":\"my-custom-error-code-pages\"}}}' --type=merge", "oc get cm default-errorpages -n openshift-ingress", "NAME DATA AGE default-errorpages 2 25s 1", "oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-503.http", "oc -n openshift-ingress rsh <router_pod> cat /var/lib/haproxy/conf/error_code_pages/error-page-404.http", "oc new-project test-ingress", "oc new-app django-psql-example", "curl -vk <route_hostname>", "curl -vk <route_hostname>", "oc -n 
openshift-ingress rsh <router> cat /var/lib/haproxy/conf/haproxy.config | grep errorfile", "oc get podnetworkconnectivitycheck -n openshift-network-diagnostics", "NAME AGE network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 73m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-default-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-external 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-load-balancer-api-internal 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-c-n8mbf 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-ci-ln-x5sv9rb-f76d1-4rzrp-worker-d-4hnrz 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-network-check-target-service-cluster 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-1 75m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-2 74m network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-openshift-apiserver-service-cluster 75m", "oc get podnetworkconnectivitycheck <name> -n openshift-network-diagnostics -o yaml", "apiVersion: controlplane.operator.openshift.io/v1alpha1 kind: PodNetworkConnectivityCheck metadata: name: network-check-source-ci-ln-x5sv9rb-f76d1-4rzrp-worker-b-6xdmh-to-kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0 namespace: openshift-network-diagnostics spec: sourcePod: network-check-source-7c88f6d9f-hmg2f targetEndpoint: 10.0.0.4:6443 tlsClientCert: name: \"\" status: conditions: - lastTransitionTime: \"2021-01-13T20:11:34Z\" message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnectSuccess status: \"True\" type: Reachable failures: - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: 
connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" outages: - end: \"2021-01-13T20:11:34Z\" endLogs: - latency: 2.032018ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T20:11:34Z\" - latency: 2.241775ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:10:34Z\" - latency: 2.582129ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:09:34Z\" - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" message: Connectivity restored after 2m59.999789186s start: \"2021-01-13T20:08:34Z\" startLogs: - latency: 3.483578ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: failed to establish a TCP connection to 10.0.0.4:6443: dial tcp 10.0.0.4:6443: connect: connection refused' reason: TCPConnectError success: false time: \"2021-01-13T20:08:34Z\" successes: - latency: 2.845865ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:14:34Z\" - latency: 2.926345ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:13:34Z\" - latency: 2.895796ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:12:34Z\" - latency: 2.696844ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:11:34Z\" - latency: 1.502064ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:10:34Z\" - latency: 1.388857ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:09:34Z\" - latency: 1.906383ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:08:34Z\" - latency: 2.089073ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:07:34Z\" - latency: 2.156994ms message: 
'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:06:34Z\" - latency: 1.777043ms message: 'kubernetes-apiserver-endpoint-ci-ln-x5sv9rb-f76d1-4rzrp-master-0: tcp connection to 10.0.0.4:6443 succeeded' reason: TCPConnect success: true time: \"2021-01-13T21:05:34Z\"", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "oc create sa ipfailover", "oc adm policy add-scc-to-user privileged -z ipfailover oc adm policy add-scc-to-user hostnetwork -z ipfailover", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: \"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13", "#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0", "oc create configmap mycustomcheck --from-file=mycheckscript.sh", "oc set env deploy/ipfailover-keepalived 
OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh", "oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300", "spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"", "oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"", "Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck", "oc delete configmap <configmap_name>", "oc get deployment -l ipfailover", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d", "oc delete deployment <ipfailover_deployment_name>", "oc delete sa ipfailover", "apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: quay.io/openshift/origin-keepalived-ipfailover:4.9 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: kubernetes.io/hostname: <host_name> <.> restartPolicy: Never", "oc create -f remove-ipfailover-job.yaml", "job.batch/remove-ipfailover-2h8dm created", "oc logs job/remove-ipfailover-2h8dm", "remove-failover.sh: OpenShift IP Failover service terminating. 
- Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)", "apiVersion: v1 kind: Pod metadata: namespace: project1 name: example-pod spec: containers: - name: example-pod ports: - containerPort: 30100 name: sctpserver protocol: SCTP", "apiVersion: v1 kind: Service metadata: namespace: project1 name: sctpserver spec: ports: - name: sctpserver protocol: SCTP port: 30100 targetPort: 30100 type: ClusterIP", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-sctp-on-http spec: podSelector: matchLabels: role: web ingress: - ports: - protocol: SCTP port: 80", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: name: load-sctp-module labels: machineconfiguration.openshift.io/role: worker spec: config: ignition: version: 3.2.0 storage: files: - path: /etc/modprobe.d/sctp-blacklist.conf mode: 0644 overwrite: true contents: source: data:, - path: /etc/modules-load.d/sctp-load.conf mode: 0644 overwrite: true contents: source: data:,sctp", "oc create -f load-sctp-module.yaml", "oc get nodes", "apiVersion: v1 kind: Pod metadata: name: sctpserver labels: app: sctpserver spec: containers: - name: sctpserver image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"] ports: - containerPort: 30102 name: sctpserver protocol: SCTP", "oc create -f sctp-server.yaml", "apiVersion: v1 kind: Service metadata: name: sctpservice labels: app: sctpserver spec: type: NodePort selector: app: sctpserver ports: - name: sctpserver protocol: SCTP port: 30102 targetPort: 30102", "oc create -f sctp-service.yaml", "apiVersion: v1 kind: Pod metadata: name: sctpclient labels: app: sctpclient spec: containers: - name: sctpclient image: registry.access.redhat.com/ubi8/ubi command: [\"/bin/sh\", \"-c\"] args: [\"dnf install -y nc && sleep inf\"]", "oc apply -f sctp-client.yaml", "oc rsh sctpserver", "nc -l 30102 --sctp", "oc get services sctpservice -o go-template='{{.spec.clusterIP}}{{\"\\n\"}}'", "oc rsh sctpclient", "nc <cluster_IP> 30102 --sctp", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-ptp annotations: workload.openshift.io/allowed: management labels: name: openshift-ptp openshift.io/cluster-monitoring: \"true\" EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: ptp-operators namespace: openshift-ptp spec: targetNamespaces: - openshift-ptp EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: ptp-operator-subscription namespace: openshift-ptp spec: channel: \"USD{OC_VERSION}\" name: ptp-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase ptp-operator.4.4.0-202006160135 Succeeded", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2019-11-15T08:57:11Z\" generation: 1 name: dev-worker-0 1 namespace: openshift-ptp 2 resourceVersion: \"487462\" selfLink: /apis/ptp.openshift.io/v1/namespaces/openshift-ptp/nodeptpdevices/dev-worker-0 uid: 08d133f7-aae2-403f-84ad-1fe624e5ab3f spec: {} status: devices: 3 - name: eno1 - name: eno2 - name: ens787f0 - name: ens787f1 - name: ens801f0 - name: ens801f1 - name: ens802f0 - name: ens802f1 - name: ens803", "oc get NodePtpDevice -n openshift-ptp -o yaml", 
"apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: ordinary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: \"profile1\" 3 interface: \"ens787f1\" 4 ptp4lOpts: \"-s -2\" 5 phc2sysOpts: \"-a -r\" 6 ptp4lConf: \"\" 7 ptpSchedulingPolicy: SCHED_OTHER 8 ptpSchedulingPriority: 10 9 recommend: 10 - profile: \"profile1\" 11 priority: 10 12 match: 13 - nodeLabel: \"node-role.kubernetes.io/worker\" 14 nodeName: \"compute-0.example.com\" 15", "oc create -f ordinary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1 I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -s -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: boundary-clock-ptp-config 1 namespace: openshift-ptp spec: profile: 2 - name: \"profile1\" 3 interface: \"\" 4 ptp4lOpts: \"-2\" 5 ptp4lConf: | 6 [ens1f0] 7 masterOnly 0 [ens1f3] 8 masterOnly 1 [global] # # Default Data Set # twoStepFlag 1 #slaveOnly 1 priority1 128 priority2 128 domainNumber 24 #utc_offset 37 clockClass 248 clockAccuracy 0xFE offsetScaledLogVariance 0xFFFF free_running 0 freq_est_interval 1 dscp_event 0 dscp_general 0 dataset_comparison G.8275.x G.8275.defaultDS.localPriority 128 # # Port Data Set # logAnnounceInterval -3 logSyncInterval -4 logMinDelayReqInterval -4 logMinPdelayReqInterval -4 announceReceiptTimeout 3 syncReceiptTimeout 0 delayAsymmetry 0 fault_reset_interval 4 neighborPropDelayThresh 20000000 masterOnly 0 G.8275.portDS.localPriority 128 # # Run time options # assume_two_step 0 logging_level 6 path_trace_enabled 0 follow_up_info 0 hybrid_e2e 0 inhibit_multicast_service 0 net_sync_monitor 0 tc_spanning_tree 0 tx_timestamp_timeout 10 #was 1 (default !) 
unicast_listen 0 unicast_master_table 0 unicast_req_duration 3600 use_syslog 1 verbose 0 summary_interval -4 kernel_leap 1 check_fup_sync 0 # # Servo Options # pi_proportional_const 0.0 pi_integral_const 0.0 pi_proportional_scale 0.0 pi_proportional_exponent -0.3 pi_proportional_norm_max 0.7 pi_integral_scale 0.0 pi_integral_exponent 0.4 pi_integral_norm_max 0.3 step_threshold 2.0 first_step_threshold 0.00002 max_frequency 900000000 clock_servo pi sanity_freq_limit 200000000 ntpshm_segment 0 # # Transport options # transportSpecific 0x0 ptp_dst_mac 01:1B:19:00:00:00 p2p_dst_mac 01:80:C2:00:00:0E udp_ttl 1 udp6_scope 0x0E uds_address /var/run/ptp4l # # Default interface options # clock_type BC network_transport UDPv4 delay_mechanism E2E time_stamping hardware tsproc_mode filter delay_filter moving_median delay_filter_length 10 egressLatency 0 ingressLatency 0 boundary_clock_jbod 0 9 # # Clock description # productDescription ;; revisionData ;; manufacturerIdentity 00:00:00 userDescription ; timeSource 0xA0 phc2sysOpts: \"-a -r\" 10 ptpSchedulingPolicy: SCHED_OTHER 11 ptpSchedulingPriority: 10 12 recommend: 13 - profile: \"profile1\" 14 priority: 10 15 match: 16 - nodeLabel: \"node-role.kubernetes.io/worker\" 17 nodeName: \"compute-0.example.com\" 18", "oc create -f boundary-clock-ptp-config.yaml", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com", "oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container", "I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to: I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------ I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1 I1115 09:41:17.117616 4143292 daemon.go:102] Interface: I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------", "oc edit PtpConfig -n openshift-ptp", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: <ptp_config_name> namespace: openshift-ptp spec: profile: - name: \"profile1\" ptpSchedulingPolicy: SCHED_FIFO 1 ptpSchedulingPriority: 10 2", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt", "I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io", "NAME AGE control-plane-0.example.com 10d control-plane-1.example.com 10d compute-0.example.com 10d 
compute-1.example.com 10d compute-2.example.com 10d", "oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml", "apiVersion: ptp.openshift.io/v1 kind: NodePtpDevice metadata: creationTimestamp: \"2021-09-14T16:52:33Z\" generation: 1 name: compute-0.example.com namespace: openshift-ptp resourceVersion: \"177400\" uid: 30413db0-4d8d-46da-9bef-737bacd548fd spec: {} status: devices: - name: eno1 - name: eno2 - name: eno3 - name: eno4 - name: enp5s0f0 - name: enp5s0f1", "oc get pods -n openshift-ptp -o wide", "NAME READY STATUS RESTARTS AGE IP NODE linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com", "oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>", "pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'", "sending: GET PORT_DATA_SET 40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET portIdentity 40a6b7.fffe.166ef0-1 portState SLAVE logMinDelayReqInterval -4 peerMeanPathDelay 0 logAnnounceInterval -3 announceReceiptTimeout 3 logSyncInterval -4 delayMechanism 1 logMinPdelayReqInterval -4 versionNumber 2", "oc get pods -n amq-interconnect", "NAME READY STATUS RESTARTS AGE amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h", "oc get pods -n openshift-ptp", "NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 12h linuxptp-daemon-k8n88 3/3 Running 0 12h", "oc edit PtpOperatorConfig default -n openshift-ptp", "spec: daemonNodeSelector: node-role.kubernetes.io/worker: \"\" ptpEventConfig: enableEventPublisher: true 1 transportHost: amqp://<instance_name>.<namespace>.svc.cluster.local 2", "apiVersion: ptp.openshift.io/v1 kind: PtpConfig metadata: name: example-ptpconfig namespace: openshift-ptp spec: profile: - name: \"profile1\" interface: \"enp5s0f0\" ptp4lOpts: \"-2 -s --summary_interval -4\" 1 phc2sysOpts: \"-a -r -m -n 24 -N 8 -R 16\" 2 ptp4lConf: \"\" 3 ptpClockThreshold: 4 holdOverTimeout: 5 maxOffsetThreshold: 100 minOffsetThreshold: -100", "[ { \"id\": \"75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\": \"http://localhost:8089/api/cloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" } ]", "{ \"uriLocation\": \"http://localhost:8089/api/cloudNotifications/v1/subscriptions\", \"resource\": \"/cluster/node/compute-1.example.com/ptp\" }", "{ \"id\":\"48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"endpointUri\": \"http://localhost:9089/event\", \"uriLocation\":\"http://localhost:8089/api/cloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab\", \"resource\":\"/cluster/node/compute-1.example.com/ptp\" }", "{\"status\":\"ping sent\"}", "OK", "oc get pods -n openshift-ptp", "NAME READY STATUS RESTARTS AGE linuxptp-daemon-2t78p 3/3 Running 0 8h linuxptp-daemon-k8n88 3/3 Running 0 8h", "oc exec -it <linuxptp-daemon> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics", "HELP cne_amqp_events_published Metric to get number of events published by the transport TYPE cne_amqp_events_published gauge cne_amqp_events_published{address=\"/cluster/node/compute-1.example.com/ptp/status\",status=\"success\"} 1041 HELP cne_amqp_events_received Metric to get number of events received by the transport TYPE cne_amqp_events_received 
gauge cne_amqp_events_received{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"success\"} 1019 HELP cne_amqp_receiver Metric to get number of receiver created TYPE cne_amqp_receiver gauge cne_amqp_receiver{address=\"/cluster/node/mock\",status=\"active\"} 1 cne_amqp_receiver{address=\"/cluster/node/compute-1.example.com/ptp\",status=\"active\"} 1 cne_amqp_receiver{address=\"/cluster/node/compute-1.example.com/redfish/event\",status=\"active\"}", "oc exec -it <linuxptp_daemon_pod> -n openshift-ptp -c cloud-event-proxy -- curl 127.0.0.1:9091/metrics", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "kind: Namespace apiVersion: v1 metadata: name: example1 annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"info\", \"allow\": \"info\" }", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "oc edit network.operator.openshift.io/cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: policyAuditConfig: destination: \"null\" maxFileSize: 50 rateLimit: 20 syslogFacility: local0", "cat <<EOF| oc create -f - kind: Namespace apiVersion: v1 metadata: name: verify-audit-logging annotations: k8s.ovn.org/acl-logging: '{ \"deny\": \"alert\", \"allow\": \"alert\" }' EOF", "namespace/verify-audit-logging created", "oc annotate namespace verify-audit-logging k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"alert\" }'", "namespace/verify-audit-logging annotated", "cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: deny-all spec: podSelector: matchLabels: policyTypes: - Ingress - Egress --- apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} policyTypes: - Ingress - Egress ingress: - from: - podSelector: {} egress: - to: - namespaceSelector: matchLabels: namespace: verify-audit-logging EOF", "networkpolicy.networking.k8s.io/deny-all created networkpolicy.networking.k8s.io/allow-from-same-namespace created", "cat <<EOF| oc create -n default -f - apiVersion: v1 kind: Pod metadata: name: client spec: containers: - 
name: client image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF", "for name in client server; do cat <<EOF| oc create -n verify-audit-logging -f - apiVersion: v1 kind: Pod metadata: name: USD{name} spec: containers: - name: USD{name} image: registry.access.redhat.com/rhel7/rhel-tools command: [\"/bin/sh\", \"-c\"] args: [\"sleep inf\"] EOF done", "pod/client created pod/server created", "POD_IP=USD(oc get pods server -n verify-audit-logging -o jsonpath='{.status.podIP}')", "oc exec -it client -n default -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.2.55 (10.128.2.55) 56(84) bytes of data. --- 10.128.2.55 ping statistics --- 2 packets transmitted, 0 received, 100% packet loss, time 2041ms", "oc exec -it client -n verify-audit-logging -- /bin/ping -c 2 USDPOD_IP", "PING 10.128.0.86 (10.128.0.86) 56(84) bytes of data. 64 bytes from 10.128.0.86: icmp_seq=1 ttl=64 time=2.21 ms 64 bytes from 10.128.0.86: icmp_seq=2 ttl=64 time=0.440 ms --- 10.128.0.86 ping statistics --- 2 packets transmitted, 2 received, 0% packet loss, time 1001ms rtt min/avg/max/mdev = 0.440/1.329/2.219/0.890 ms", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "Defaulting container name to ovn-controller. Use 'oc describe pod/ovnkube-node-hdb8v -n openshift-ovn-kubernetes' to see all of the containers in this pod. 2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:33:12.614Z|00006|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:10.037Z|00007|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0 2021-06-13T19:44:11.037Z|00008|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_allow-from-same-namespace_0\", verdict=allow, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:3b,dl_dst=0a:58:0a:80:02:3a,nw_src=10.128.2.59,nw_dst=10.128.2.58,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate namespace <namespace> k8s.ovn.org/acl-logging='{ \"deny\": \"alert\", \"allow\": \"notice\" }'", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: |- { \"deny\": \"alert\", \"allow\": \"notice\" }", "namespace/verify-audit-logging annotated", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node --no-headers=true | awk '{ print USD1 }') ; do oc exec -it USDpod -n openshift-ovn-kubernetes -- tail -4 /var/log/ovn/acl-audit-log.log done", "2021-06-13T19:33:11.590Z|00005|acl_log(ovn_pinctrl0)|INFO|name=\"verify-audit-logging_deny-all\", verdict=drop, severity=alert: icmp,vlan_tci=0x0000,dl_src=0a:58:0a:80:02:39,dl_dst=0a:58:0a:80:02:37,nw_src=10.128.2.57,nw_dst=10.128.2.55,nw_tos=0,nw_ecn=0,nw_ttl=64,icmp_type=8,icmp_code=0", "oc annotate 
--overwrite namespace <namespace> k8s.ovn.org/acl-logging={}", "kind: Namespace apiVersion: v1 metadata: name: <namespace> annotations: k8s.ovn.org/acl-logging: null", "namespace/verify-audit-logging annotated", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc describe networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy allow-same-namespace", "Name: allow-same-namespace Namespace: ns1 Created on: 2021-05-24 22:28:56 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: PodSelector: <none> Not affecting egress traffic Policy Types: Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc get networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit networkpolicy <policy_name> -n <namespace>", "oc describe networkpolicy <policy_name> -n <namespace>", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "oc delete networkpolicy <policy_name> -n <namespace>", "networkpolicy.networking.k8s.io/default-deny deleted", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - 
kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { } type: Raw", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"bridge\", \"isGateway\": true, \"vlan\": 2, \"ipam\": { \"type\": \"dhcp\" } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\" }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l3\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.10.10/24\" } ] } }", "{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-net\", \"type\": \"macvlan\", \"master\": \"eth1\", \"mode\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # additionalNetworks: - name: tertiary-net namespace: project2 type: Raw rawCNIConfig: |- { \"cniVersion\": \"0.3.1\", \"name\": \"tertiary-net\", \"type\": \"ipvlan\", \"master\": \"eth1\", \"mode\": \"l2\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"192.168.1.23/24\" } ] } }", "oc get network-attachment-definitions -n <namespace>", "NAME AGE test-network-1 14m", "apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { \"cniVersion\": \"0.3.1\", \"name\": \"work-network\", \"type\": \"host-device\", \"device\": \"eth1\", \"ipam\": { \"type\": \"dhcp\" } }", "oc apply -f <file>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name>", "apiVersion: operator.openshift.io/v1 kind: Network 
metadata: name: cluster spec: useMultiNetworkPolicy: true", "oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml", "network.operator.openshift.io/cluster patched", "touch <policy_name>.yaml", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: []", "apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny created", "oc get multi-networkpolicy", "oc apply -n <namespace> -f <policy_file>.yaml", "oc edit multi-networkpolicy <policy_name> -n <namespace>", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc get multi-networkpolicy", "oc describe multi-networkpolicy <policy_name> -n <namespace>", "oc delete multi-networkpolicy <policy_name> -n <namespace>", "multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "oc edit pod <name>", "metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: ' { \"name\": \"net1\" }, { \"name\": \"net2\", 1 \"default-route\": [\"192.0.2.1\"] 2 }' spec: containers: - name: example-pod command: [\"/bin/bash\", \"-c\", \"sleep 2000000000000\"] image: centos/tools", "oc exec -it <pod_name> -- ip route", "oc edit networks.operator.openshift.io cluster", "name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 }' type: Raw", "{ \"cniVersion\": \"0.3.1\", \"name\": \"<name>\", 1 \"plugins\": [{ 2 \"type\": \"macvlan\", \"capabilities\": { \"ips\": true }, 3 \"master\": \"eth0\", 4 \"mode\": \"bridge\", \"ipam\": { \"type\": \"static\" } }, { \"capabilities\": { \"mac\": true }, 5 \"type\": \"tuning\" }] }", "oc edit pod <name>", "apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { \"name\": \"<name>\", 1 \"ips\": [ \"192.0.2.205/24\" ], 2 \"mac\": \"CA:FE:C0:FF:EE:00\" 3 } ]'", "oc exec -it <pod_name> -- ip a", "oc delete pod <name> -n <namespace>", "oc edit networks.operator.openshift.io cluster", "oc get network-attachment-definitions <network-name> -o yaml", "oc get network-attachment-definitions net1 -o go-template='{{printf \"%s\\n\" .spec.config}}' { \"cniVersion\": \"0.3.1\", \"type\": \"macvlan\", \"master\": \"ens5\", \"mode\": \"bridge\", \"ipam\": 
{\"type\":\"static\",\"routes\":[{\"dst\":\"0.0.0.0/0\",\"gw\":\"10.128.2.1\"}],\"addresses\":[{\"address\":\"10.128.2.100/23\",\"gateway\":\"10.128.2.1\"}],\"dns\":{\"nameservers\":[\"172.30.0.10\"],\"domain\":\"us-west-2.compute.internal\",\"search\":[\"us-west-2.compute.internal\"]}} }", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1", "oc get network-attachment-definition --all-namespaces", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ \"cniVersion\": \"0.3.1\", \"name\": \"macvlan-vrf\", \"plugins\": [ 1 { \"type\": \"macvlan\", 2 \"master\": \"eth1\", \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.23/24\" } ] } }, { \"type\": \"vrf\", \"vrfname\": \"example-vrf-name\", 3 \"table\": 1001 4 }] }'", "oc create -f additional-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace>", "NAME AGE additional-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "oc label node <node_name> feature.node.kubernetes.io/network-sriov.capable=\"true\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodeState metadata: name: node-25 1 namespace: openshift-sriov-network-operator ownerReferences: - apiVersion: sriovnetwork.openshift.io/v1 blockOwnerDeletion: true controller: true kind: SriovNetworkNodePolicy name: default spec: dpConfigVersion: \"39824\" status: interfaces: 2 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f0 pciAddress: \"0000:18:00.0\" totalvfs: 8 vendor: 15b3 - deviceID: \"1017\" driver: mlx5_core mtu: 1500 name: ens785f1 pciAddress: \"0000:18:00.1\" totalvfs: 8 vendor: 15b3 - deviceID: 158b driver: i40e mtu: 1500 name: ens817f0 pciAddress: 0000:81:00.0 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens817f1 pciAddress: 0000:81:00.1 totalvfs: 64 vendor: \"8086\" - deviceID: 158b driver: i40e mtu: 1500 name: ens803f0 pciAddress: 0000:86:00.0 totalvfs: 64 vendor: \"8086\" syncStatus: Succeeded", "apiVersion: v1 kind: Pod metadata: name: rdma-app annotations: k8s.v1.cni.cncf.io/networks: sriov-rdma-mlnx spec: containers: - name: testpmd image: <RDMA_image> imagePullPolicy: IfNotPresent securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] command: [\"sleep\", \"infinity\"]", "apiVersion: v1 kind: Pod metadata: name: dpdk-app annotations: k8s.v1.cni.cncf.io/networks: sriov-dpdk-net spec: containers: - name: testpmd image: <DPDK_image> securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] volumeMounts: - mountPath: /dev/hugepages name: hugepage resources: limits: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" requests: memory: \"1Gi\" cpu: \"2\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "cat << EOF| oc create -f - apiVersion: v1 kind: Namespace metadata: name: openshift-sriov-network-operator annotations: workload.openshift.io/allowed: management EOF", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: sriov-network-operators namespace: openshift-sriov-network-operator spec: targetNamespaces: - 
openshift-sriov-network-operator EOF", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc create -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: sriov-network-operator-subscription namespace: openshift-sriov-network-operator spec: channel: \"USD{OC_VERSION}\" name: sriov-network-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-sriov-network-operator -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase sriov-network-operator.4.9.0-202110121402 Succeeded", "oc annotate ns/openshift-sriov-network-operator workload.openshift.io/allowed=management", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE network-resources-injector-5cz5p 1/1 Running 0 10m network-resources-injector-dwqpx 1/1 Running 0 10m network-resources-injector-lktz5 1/1 Running 0 10m", "oc get pods -n openshift-sriov-network-operator", "NAME READY STATUS RESTARTS AGE operator-webhook-9jkw6 1/1 Running 0 16m operator-webhook-kbr5p 1/1 Running 0 16m operator-webhook-rpfrl 1/1 Running 0 16m", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableInjector\": <value> } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableInjector: <value>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"enableOperatorWebhook\": <value> } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: enableOperatorWebhook: <value>", "oc patch sriovoperatorconfig default --type=json -n openshift-sriov-network-operator --patch '[{ \"op\": \"replace\", \"path\": \"/spec/configDaemonNodeSelector\", \"value\": {<node_label>} }]'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: configDaemonNodeSelector: <node_label>", "oc patch sriovoperatorconfig default --type=merge -n openshift-sriov-network-operator --patch '{ \"spec\": { \"disableDrain\": true } }'", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovOperatorConfig metadata: name: default namespace: openshift-sriov-network-operator spec: disableDrain: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" 4 priority: <priority> 5 mtu: <mtu> 6 needVhostNet: false 7 numVfs: <num> 8 nicSelector: 9 vendor: \"<vendor_code>\" 10 deviceID: \"<device_id>\" 11 pfNames: [\"<pf_name>\", ...] 12 rootDevices: [\"<pci_bus_id>\", ...] 
13 netFilter: \"<filter_string>\" 14 deviceType: <device_type> 15 isRdma: false 16 linkType: <link_type> 17", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-ib-net-1 namespace: openshift-sriov-network-operator spec: resourceName: ibnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"15b3\" deviceID: \"101b\" rootDevices: - \"0000:19:00.0\" linkType: ib isRdma: true", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-sriov-net-openstack-1 namespace: openshift-sriov-network-operator spec: resourceName: sriovnic1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 1 1 nicSelector: vendor: \"15b3\" deviceID: \"101b\" netFilter: \"openstack/NetworkID:ea24bd04-8674-4f69-b0ee-fa0b3bd20509\" 2", "pfNames: [\"netpf0#2-7\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1 namespace: openshift-sriov-network-operator spec: resourceName: net1 nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#0-0\"] deviceType: netdevice", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-net-1-dpdk namespace: openshift-sriov-network-operator spec: resourceName: net1dpdk nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 16 nicSelector: pfNames: [\"netpf0#8-15\"] deviceType: vfio-pci", "oc create -f <name>-sriov-node-network.yaml", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name> -o jsonpath='{.status.syncStatus}'", "oc get sriovnetworknodestates -n openshift-sriov-network-operator <node_name>", "\"lastSyncError\": \"write /sys/bus/pci/devices/0000:3b:00.1/sriov_numvfs: cannot allocate memory\"", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: example-network namespace: additional-sriov-network-1 spec: ipam: | { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [{ \"dst\": \"0.0.0.0/0\" }], \"gateway\": \"10.56.217.1\" } vlan: 0 resourceName: intelnics metaPlugins : | { \"type\": \"vrf\", 1 \"vrfname\": \"example-vrf-name\" 2 }", "oc create -f sriov-network-attachment.yaml", "oc get network-attachment-definitions -n <namespace> 1", "NAME AGE additional-sriov-network-1 14m", "ip vrf show", "Name Table ----------------------- red 10", "ip link", "5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 vlan: <vlan> 5 spoofChk: \"<spoof_check>\" 6 ipam: |- 7 {} linkState: <link_state> 8 maxTxRate: <max_tx_rate> 9 minTxRate: <min_tx_rate> 10 vlanQoS: <vlan_qos> 11 trust: \"<trust_vf>\" 12 capabilities: <capabilities> 13", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": 
\"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: <name> 1 namespace: openshift-sriov-network-operator 2 spec: resourceName: <sriov_resource_name> 3 networkNamespace: <target_namespace> 4 ipam: |- 5 {} linkState: <link_state> 6 capabilities: <capabilities> 7", "{ \"ipam\": { \"type\": \"static\", \"addresses\": [ { \"address\": \"191.168.1.7/24\" } ] } }", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { \"name\": \"dhcp-shim\", \"cniVersion\": \"0.3.1\", \"type\": \"bridge\", \"ipam\": { \"type\": \"dhcp\" } } #", "{ \"ipam\": { \"type\": \"dhcp\" } }", "{ \"ipam\": { \"type\": \"whereabouts\", \"range\": \"192.0.2.192/27\", \"exclude\": [ \"192.0.2.192/30\", \"192.0.2.196/32\" ] } }", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovIBNetwork metadata: name: attach1 namespace: openshift-sriov-network-operator spec: resourceName: net1 networkNamespace: project2 ipam: |- { \"type\": \"host-local\", \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"gateway\": \"10.56.217.1\" }", "oc create -f <name>.yaml", "oc get net-attach-def -n <namespace>", "[ { \"name\": \"<name>\", 1 \"mac\": \"<mac_address>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"net1\", \"mac\": \"20:04:0f:f1:88:01\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "[ { \"name\": \"<network_attachment>\", 1 \"infiniband-guid\": \"<guid>\", 2 \"ips\": [\"<cidr_range>\"] 3 } ]", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"ib1\", \"infiniband-guid\": \"c2:11:22:33:44:55:66:77\", \"ips\": [\"192.168.10.1/24\", \"2001::1/64\"] } ] spec: containers: - name: sample-container image: <image> imagePullPolicy: IfNotPresent command: [\"sleep\", \"infinity\"]", "metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 
1", "metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { \"name\": \"<network>\", 1 \"namespace\": \"<namespace>\", 2 \"default-route\": [\"<default-route>\"] 3 } ]", "oc create -f <name>.yaml", "oc get pod <name> -o yaml", "oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/networks-status: |- 1 [{ \"name\": \"openshift-sdn\", \"interface\": \"eth0\", \"ips\": [ \"10.128.2.14\" ], \"default\": true, \"dns\": {} },{ \"name\": \"macvlan-bridge\", \"interface\": \"net1\", \"ips\": [ \"20.2.2.100\" ], \"mac\": \"22:2f:60:a5:f8:00\", \"dns\": {} }] name: example-pod namespace: default spec: status:", "apiVersion: v1 kind: Pod metadata: name: sample-pod annotations: k8s.v1.cni.cncf.io/networks: <name> 1 spec: containers: - name: sample-container image: <image> 2 command: [\"sleep\", \"infinity\"] resources: limits: memory: \"1Gi\" 3 cpu: \"2\" 4 requests: memory: \"1Gi\" cpu: \"2\"", "oc create -f <filename> 1", "oc describe pod sample-pod", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "oc exec sample-pod -- cat /sys/fs/cgroup/cpuset/cpuset.cpus", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: policy-example namespace: openshift-sriov-network-operator spec: resourceName: example nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" numVfs: 4 nicSelector: vendor: \"8086\" pfNames: ['ens803f0'] rootDevices: ['0000:86:00.0']", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: net-example namespace: openshift-sriov-network-operator spec: networkNamespace: default ipam: | 1 { \"type\": \"host-local\", 2 \"subnet\": \"10.56.217.0/24\", \"rangeStart\": \"10.56.217.171\", \"rangeEnd\": \"10.56.217.181\", \"routes\": [ {\"dst\": \"224.0.0.0/5\"}, {\"dst\": \"232.0.0.0/5\"} ], \"gateway\": \"10.56.217.1\" } resourceName: example", "apiVersion: v1 kind: Pod metadata: name: testpmd namespace: default annotations: k8s.v1.cni.cncf.io/networks: nic1 spec: containers: - name: example image: rhel7:latest securityContext: capabilities: add: [\"NET_ADMIN\"] 1 command: [ \"sleep\", \"infinity\"]", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: intel-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: intelnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"8086\" deviceID: \"158b\" pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: vfio-pci 1", "oc create -f intel-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: intel-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- ... 
1 vlan: <vlan> resourceName: intelnics", "oc create -f intel-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: intel-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/intelnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/intelnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f intel-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-dpdk-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-dpdk-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-dpdk-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-dpdk-network.yaml", "apiVersion: v1 kind: Pod metadata: name: dpdk-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-dpdk-network spec: containers: - name: testpmd image: <DPDK_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: openshift.io/mlxnics: \"1\" 5 memory: \"1Gi\" cpu: \"4\" 6 hugepages-1Gi: \"4Gi\" 7 requests: openshift.io/mlxnics: \"1\" memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-dpdk-pod.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: mlx-rdma-node-policy namespace: openshift-sriov-network-operator spec: resourceName: mlxnics nodeSelector: feature.node.kubernetes.io/network-sriov.capable: \"true\" priority: <priority> numVfs: <num> nicSelector: vendor: \"15b3\" deviceID: \"1015\" 1 pfNames: [\"<pf_name>\", ...] 
rootDevices: [\"<pci_bus_id>\", \"...\"] deviceType: netdevice 2 isRdma: true 3", "oc create -f mlx-rdma-node-policy.yaml", "apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: mlx-rdma-network namespace: openshift-sriov-network-operator spec: networkNamespace: <target_namespace> ipam: |- 1 vlan: <vlan> resourceName: mlxnics", "oc create -f mlx-rdma-network.yaml", "apiVersion: v1 kind: Pod metadata: name: rdma-app namespace: <target_namespace> 1 annotations: k8s.v1.cni.cncf.io/networks: mlx-rdma-network spec: containers: - name: testpmd image: <RDMA_image> 2 securityContext: runAsUser: 0 capabilities: add: [\"IPC_LOCK\",\"SYS_RESOURCE\",\"NET_RAW\"] 3 volumeMounts: - mountPath: /dev/hugepages 4 name: hugepage resources: limits: memory: \"1Gi\" cpu: \"4\" 5 hugepages-1Gi: \"4Gi\" 6 requests: memory: \"1Gi\" cpu: \"4\" hugepages-1Gi: \"4Gi\" command: [\"sleep\", \"infinity\"] volumes: - name: hugepage emptyDir: medium: HugePages", "oc create -f mlx-rdma-pod.yaml", "oc delete sriovnetwork -n openshift-sriov-network-operator --all", "oc delete sriovnetworknodepolicy -n openshift-sriov-network-operator --all", "oc delete sriovibnetwork -n openshift-sriov-network-operator --all", "oc delete crd sriovibnetworks.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodepolicies.sriovnetwork.openshift.io", "oc delete crd sriovnetworknodestates.sriovnetwork.openshift.io", "oc delete crd sriovnetworkpoolconfigs.sriovnetwork.openshift.io", "oc delete crd sriovnetworks.sriovnetwork.openshift.io", "oc delete crd sriovoperatorconfigs.sriovnetwork.openshift.io", "oc delete mutatingwebhookconfigurations network-resources-injector-config", "oc delete MutatingWebhookConfiguration sriov-operator-webhook-config", "oc delete ValidatingWebhookConfiguration sriov-operator-webhook-config", "oc delete namespace openshift-sriov-network-operator", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\"]}' oc patch netnamespace project2 --type=merge -p '{\"egressIPs\": [\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressCIDRs\": [ \"<ip_address_range>\", \"<ip_address_range>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}' oc patch hostsubnet node2 --type=merge -p '{\"egressCIDRs\": [\"192.168.1.0/24\"]}'", "oc patch netnamespace <project_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\" ] }'", "oc patch netnamespace project1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\",\"192.168.1.101\"]}'", "oc patch hostsubnet <node_name> --type=merge -p '{ \"egressIPs\": [ \"<ip_address>\", \"<ip_address>\" ] }'", "oc patch hostsubnet node1 --type=merge -p '{\"egressIPs\": [\"192.168.1.100\", \"192.168.1.101\", \"192.168.1.102\"]}'", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Allow to: dnsName: www.example.com - type: Deny to: cidrSelector: 0.0.0.0/0", "oc create -f 
<policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressnetworkpolicy.network.openshift.io/v1 created", "oc get egressnetworkpolicy --all-namespaces", "oc describe egressnetworkpolicy <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressnetworkpolicy", "oc get -n <project> egressnetworkpolicy <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressnetworkpolicy", "oc delete -n <project> egressnetworkpolicy <name>", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: apps/v1 kind: Deployment metadata: name: egress-demo-controller spec: replicas: 1 1 selector: matchLabels: name: egress-router template: metadata: name: egress-router labels: name: egress-router annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: 2 initContainers: containers:", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress_router> - name: EGRESS_GATEWAY 3 value: <egress_gateway> - name: EGRESS_DESTINATION 4 value: <egress_destination> - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "apiVersion: v1 kind: Pod metadata: name: egress-multi labels: name: egress-multi annotations: pod.network.openshift.io/assign-macvlan: \"true\" spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE value: 192.168.12.99/24 - name: EGRESS_GATEWAY value: 192.168.12.1 - name: EGRESS_DESTINATION value: | 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27 - name: EGRESS_ROUTER_MODE value: init containers: - name: egress-router-wait image: registry.redhat.io/openshift4/ose-pod", "80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 203.0.113.27", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http port: 80 - name: https port: 443 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: http-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-http-proxy env: - name: EGRESS_HTTP_PROXY_DESTINATION 4 value: |-", "!*.example.com !192.168.1.0/24 192.168.2.1 *", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: http-proxy port: 8080 1 type: ClusterIP selector: name: egress-1", "apiVersion: v1 kind: Pod metadata: name: app-1 labels: name: app-1 spec: containers: env: - name: http_proxy value: http://egress-1:8080/ 1 - name: https_proxy value: http://egress-1:8080/", "apiVersion: v1 kind: Pod metadata: name: egress-1 labels: name: egress-1 annotations: 
pod.network.openshift.io/assign-macvlan: \"true\" 1 spec: initContainers: - name: egress-router image: registry.redhat.io/openshift4/ose-egress-router securityContext: privileged: true env: - name: EGRESS_SOURCE 2 value: <egress-router> - name: EGRESS_GATEWAY 3 value: <egress-gateway> - name: EGRESS_ROUTER_MODE value: dns-proxy containers: - name: egress-router-pod image: registry.redhat.io/openshift4/ose-egress-dns-proxy securityContext: privileged: true env: - name: EGRESS_DNS_PROXY_DESTINATION 4 value: |- - name: EGRESS_DNS_PROXY_DEBUG 5 value: \"1\"", "80 172.16.12.11 100 example.com", "8080 192.168.60.252 80 8443 web.example.com 443", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: type: ClusterIP selector: name: egress-dns-proxy", "apiVersion: v1 kind: Service metadata: name: egress-dns-svc spec: ports: - name: con1 protocol: TCP port: 80 targetPort: 80 - name: con2 protocol: TCP port: 100 targetPort: 100 type: ClusterIP selector: name: egress-dns-proxy", "oc create -f egress-router-service.yaml", "Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 Fallback 203.0.113.27", "oc delete configmap egress-routes --ignore-not-found", "oc create configmap egress-routes --from-file=destination=my-egress-destination.txt", "apiVersion: v1 kind: ConfigMap metadata: name: egress-routes data: destination: | # Egress routes for Project \"Test\", version 3 80 tcp 203.0.113.25 8080 tcp 203.0.113.26 80 8443 tcp 203.0.113.26 443 # Fallback 203.0.113.27", "env: - name: EGRESS_DESTINATION valueFrom: configMapKeyRef: name: egress-routes key: destination", "oc annotate netnamespace <namespace> netnamespace.network.openshift.io/multicast-enabled=true", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate netnamespace <namespace> \\ 1 netnamespace.network.openshift.io/multicast-enabled-", "oc adm pod-network join-projects --to=<project1> <project2> <project3>", "oc get netnamespaces", "oc adm pod-network isolate-projects <project1> <project2>", "oc adm pod-network make-projects-global <project1> <project2>", "oc edit network.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: [\"30s\"]", "oc get networks.operator.openshift.io -o yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 
23 defaultNetwork: type: OpenShiftSDN kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 30s serviceNetwork: - 172.30.0.0/16 status: {} kind: List", "oc get clusteroperator network", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.1.0-0.9 True False False 1m", "I1006 16:09:50.985852 60651 helper_linux.go:73] Found default gateway interface br-ex 192.168.127.1 I1006 16:09:50.985923 60651 helper_linux.go:73] Found default gateway interface ens4 fe80::5054:ff:febe:bcd4 F1006 16:09:50.985939 60651 ovnkube.go:130] multiple gateway interfaces detected: br-ex ens4", "I0512 19:07:17.589083 108432 helper_linux.go:74] Found default gateway interface br-ex 192.168.123.1 F0512 19:07:17.589141 108432 ovnkube.go:133] failed to get default gateway interface", "oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": {\"networkType\": \"OVNKubernetes\" } } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":<mtu>, \"genevePort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"ovnKubernetesConfig\":{ \"mtu\":1200 }}}}'", "oc get mcp", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml | grep ExecStart", "ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OVNKubernetes\" } }'", "oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"clusterNetwork\": [ { \"cidr\": \"<cidr>\", \"hostPrefix\": <prefix> } ], \"networkType\": \"OVNKubernetes\" } }'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc get co", "oc 
patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"openshiftSDNConfig\": null } } }'", "oc delete namespace openshift-sdn", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": true } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\":{ \"paused\" :true } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": { \"networkType\": \"OpenShiftSDN\" } } }' oc patch Network.config.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"networkType\": \"OpenShiftSDN\" } }'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":<mtu>, \"vxlanPort\":<port> }}}}'", "oc patch Network.operator.openshift.io cluster --type=merge --patch '{ \"spec\":{ \"defaultNetwork\":{ \"openshiftSDNConfig\":{ \"mtu\":1200 }}}}'", "oc -n openshift-multus rollout status daemonset/multus", "Waiting for daemon set \"multus\" rollout to finish: 1 out of 6 new pods have been updated Waiting for daemon set \"multus\" rollout to finish: 5 of 6 updated pods are available daemon set \"multus\" successfully rolled out", "#!/bin/bash for ip in USD(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type==\"InternalIP\")].address}') do echo \"reboot node USDip\" ssh -o StrictHostKeyChecking=no core@USDip sudo shutdown -r -t 3 done", "oc patch MachineConfigPool master --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc patch MachineConfigPool worker --type='merge' --patch '{ \"spec\": { \"paused\": false } }'", "oc describe node | egrep \"hostname|machineconfig\"", "kubernetes.io/hostname=master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/desiredConfig: rendered-master-c53e221d9d24e1c8bb6ee89dd3d8ad7b machineconfiguration.openshift.io/reason: machineconfiguration.openshift.io/state: Done", "oc get machineconfig <config_name> -o yaml", "oc get network.config/cluster -o jsonpath='{.status.networkType}{\"\\n\"}'", "oc get nodes", "oc get pod -n openshift-machine-config-operator", "NAME READY STATUS RESTARTS AGE machine-config-controller-75f756f89d-sjp8b 1/1 Running 0 37m machine-config-daemon-5cf4b 2/2 Running 0 43h machine-config-daemon-7wzcd 2/2 Running 0 43h machine-config-daemon-fc946 2/2 Running 0 43h machine-config-daemon-g2v28 2/2 Running 0 43h machine-config-daemon-gcl4f 2/2 Running 0 43h machine-config-daemon-l5tnv 2/2 Running 0 43h machine-config-operator-79d9c55d5-hth92 1/1 Running 0 37m machine-config-server-bsc8h 1/1 Running 0 43h machine-config-server-hklrm 1/1 Running 0 43h machine-config-server-k9rtx 1/1 Running 0 43h", "oc logs <pod> -n openshift-machine-config-operator", "oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"migration\": null } }'", "oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ \"spec\": { \"defaultNetwork\": { \"ovnKubernetesConfig\":null } } }'", "oc delete namespace openshift-ovn-kubernetes", "- op: add path: /spec/clusterNetwork/- value: 1 cidr: fd01::/48 hostPrefix: 64 - op: add path: /spec/serviceNetwork/- value: fd02::/112 2", "oc patch network.config.openshift.io cluster --type='json' --patch-file 
<file>.yaml", "network.config.openshift.io/cluster patched", "oc describe network", "Status: Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cidr: fd01::/48 Host Prefix: 64 Cluster Network MTU: 1400 Network Type: OVNKubernetes Service Network: 172.30.0.0/16 fd02::/112", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default namespace: <namespace> 1 spec: egress: - to: cidrSelector: <api_server_address_range> 2 type: Allow - to: cidrSelector: 0.0.0.0/0 3 type: Deny", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: <name> 1 spec: egress: 2", "egress: - type: <type> 1 to: 2 cidrSelector: <cidr> 3 dnsName: <dns_name> 4 ports: 5", "ports: - port: <port> 1 protocol: <protocol> 2", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: 1 - type: Allow to: cidrSelector: 1.2.3.0/24 - type: Deny to: cidrSelector: 0.0.0.0/0", "apiVersion: k8s.ovn.org/v1 kind: EgressFirewall metadata: name: default spec: egress: - type: Deny to: cidrSelector: 172.16.1.1 ports: - port: 80 protocol: TCP - port: 443", "oc create -f <policy_name>.yaml -n <project>", "oc create -f default.yaml -n project1", "egressfirewall.k8s.ovn.org/v1 created", "oc get egressfirewall --all-namespaces", "oc describe egressfirewall <policy_name>", "Name: default Namespace: project1 Created: 20 minutes ago Labels: <none> Annotations: <none> Rule: Allow to 1.2.3.0/24 Rule: Allow to www.example.com Rule: Deny to 0.0.0.0/0", "oc get -n <project> egressfirewall", "oc get -n <project> egressfirewall <name> -o yaml > <filename>.yaml", "oc replace -f <filename>.yaml", "oc get -n <project> egressfirewall", "oc delete -n <project> egressfirewall <name>", "apiVersion: v1 kind: Namespace metadata: name: namespace1 labels: env: prod --- apiVersion: v1 kind: Namespace metadata: name: namespace2 labels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egressips-prod spec: egressIPs: - 192.168.126.10 - 192.168.126.102 namespaceSelector: matchLabels: env: prod status: items: - node: node1 egressIP: 192.168.126.10 - node: node3 egressIP: 192.168.126.102", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: <name> 1 spec: egressIPs: 2 - <ip_address> namespaceSelector: 3 podSelector: 4", "namespaceSelector: 1 matchLabels: <label_name>: <label_value>", "podSelector: 1 matchLabels: <label_name>: <label_value>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group1 spec: egressIPs: - 192.168.126.11 - 192.168.126.102 podSelector: matchLabels: app: web namespaceSelector: matchLabels: env: prod", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-group2 spec: egressIPs: - 192.168.127.30 - 192.168.127.40 namespaceSelector: matchExpressions: - key: environment operator: NotIn values: - development", "oc label nodes <node_name> k8s.ovn.org/egress-assignable=\"\" 1", "apiVersion: v1 kind: Node metadata: labels: k8s.ovn.org/egress-assignable: \"\" name: <node_name>", "apiVersion: k8s.ovn.org/v1 kind: EgressIP metadata: name: egress-project1 spec: egressIPs: - 192.168.127.10 - 192.168.127.11 namespaceSelector: matchLabels: env: qa", "oc apply -f <egressips_name>.yaml 1", "egressips.k8s.ovn.org/<egressips_name> created", "oc label ns <namespace> env=qa 1", "openstack port set --allowed-address ip_address=<ip_address>,mac_address=<mac_address> <neutron_port_uuid>", "apiVersion: v1 kind: Service metadata: name: app-egress spec: ports: - name: tcp-8080 protocol: TCP port: 8080 - name: tcp-8443 protocol: TCP port: 8443 - name: udp-80 
protocol: UDP port: 80 type: ClusterIP selector: app: egress-router-cni", "apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: <egress_router_name> namespace: <namespace> <.> spec: addresses: [ <.> { ip: \"<egress_router>\", <.> gateway: \"<egress_gateway>\" <.> } ] mode: Redirect redirect: { redirectRules: [ <.> { destinationIP: \"<egress_destination>\", port: <egress_router_port>, targetPort: <target_port>, <.> protocol: <network_protocol> <.> }, ], fallbackIP: \"<egress_destination>\" <.> }", "apiVersion: network.operator.openshift.io/v1 kind: EgressRouter metadata: name: egress-router-redirect spec: networkInterface: { macvlan: { mode: \"Bridge\" } } addresses: [ { ip: \"192.168.12.99/24\", gateway: \"192.168.12.1\" } ] mode: Redirect redirect: { redirectRules: [ { destinationIP: \"10.0.0.99\", port: 80, protocol: UDP }, { destinationIP: \"203.0.113.26\", port: 8080, targetPort: 80, protocol: TCP }, { destinationIP: \"203.0.113.27\", port: 8443, targetPort: 443, protocol: TCP } ] }", "apiVersion: v1 kind: Service metadata: name: egress-1 spec: ports: - name: web-app protocol: TCP port: 8080 type: ClusterIP selector: app: egress-router-cni <.>", "oc get network-attachment-definition egress-router-cni-nad", "NAME AGE egress-router-cni-nad 18m", "oc get deployment egress-router-cni-deployment", "NAME READY UP-TO-DATE AVAILABLE AGE egress-router-cni-deployment 1/1 1 1 18m", "oc get pods -l app=egress-router-cni", "NAME READY STATUS RESTARTS AGE egress-router-cni-deployment-575465c75c-qkq6m 1/1 Running 0 18m", "POD_NODENAME=USD(oc get pod -l app=egress-router-cni -o jsonpath=\"{.items[0].spec.nodeName}\")", "oc debug node/USDPOD_NODENAME", "chroot /host", "cat /tmp/egress-router-log", "2021-04-26T12:27:20Z [debug] Called CNI ADD 2021-04-26T12:27:20Z [debug] Gateway: 192.168.12.1 2021-04-26T12:27:20Z [debug] IP Source Addresses: [192.168.12.99/24] 2021-04-26T12:27:20Z [debug] IP Destinations: [80 UDP 10.0.0.99/30 8080 TCP 203.0.113.26/30 80 8443 TCP 203.0.113.27/30 443] 2021-04-26T12:27:20Z [debug] Created macvlan interface 2021-04-26T12:27:20Z [debug] Renamed macvlan to \"net1\" 2021-04-26T12:27:20Z [debug] Adding route to gateway 192.168.12.1 on macvlan interface 2021-04-26T12:27:20Z [debug] deleted default route {Ifindex: 3 Dst: <nil> Src: <nil> Gw: 10.128.10.1 Flags: [] Table: 254} 2021-04-26T12:27:20Z [debug] Added new default route with gateway 192.168.12.1 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p UDP --dport 80 -j DNAT --to-destination 10.0.0.99 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8080 -j DNAT --to-destination 203.0.113.26:80 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat PREROUTING -i eth0 -p TCP --dport 8443 -j DNAT --to-destination 203.0.113.27:443 2021-04-26T12:27:20Z [debug] Added iptables rule: iptables -t nat -o net1 -j SNAT --to-source 192.168.12.99", "crictl ps --name egress-router-cni-pod | awk '{print USD1}'", "CONTAINER bac9fae69ddb6", "crictl inspect -o yaml bac9fae69ddb6 | grep 'pid:' | awk '{print USD2}'", "68857", "nsenter -n -t 68857", "ip route", "default via 192.168.12.1 dev net1 10.128.10.0/23 dev eth0 proto kernel scope link src 10.128.10.18 192.168.12.0/24 dev net1 proto kernel scope link src 192.168.12.99 192.168.12.1 dev net1", "oc annotate namespace <namespace> k8s.ovn.org/multicast-enabled=true", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: 
\"true\"", "oc project <project>", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: mlistener labels: app: multicast-verify spec: containers: - name: mlistener image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat hostname && sleep inf\"] ports: - containerPort: 30102 name: mlistener protocol: UDP EOF", "cat <<EOF| oc create -f - apiVersion: v1 kind: Pod metadata: name: msender labels: app: multicast-verify spec: containers: - name: msender image: registry.access.redhat.com/ubi8 command: [\"/bin/sh\", \"-c\"] args: [\"dnf -y install socat && sleep inf\"] EOF", "POD_IP=USD(oc get pods mlistener -o jsonpath='{.status.podIP}')", "oc exec mlistener -i -t -- socat UDP4-RECVFROM:30102,ip-add-membership=224.1.0.1:USDPOD_IP,fork EXEC:hostname", "CIDR=USD(oc get Network.config.openshift.io cluster -o jsonpath='{.status.clusterNetwork[0].cidr}')", "oc exec msender -i -t -- /bin/bash -c \"echo | socat STDIO UDP4-DATAGRAM:224.1.0.1:30102,range=USDCIDR,ip-multicast-ttl=64\"", "mlistener", "oc annotate namespace <namespace> \\ 1 k8s.ovn.org/multicast-enabled-", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: k8s.ovn.org/multicast-enabled: null", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056", "spec: exportNetworkFlows: netFlow: collectors: - 192.168.1.99:2056", "oc patch network.operator cluster --type merge -p \"USD(cat <file_name>.yaml)\"", "network.operator.openshift.io/cluster patched", "oc get network.operator cluster -o jsonpath=\"{.spec.exportNetworkFlows}\"", "{\"netFlow\":{\"collectors\":[\"192.168.1.99:2056\"]}}", "for pod in USD(oc get pods -n openshift-ovn-kubernetes -l app=ovnkube-node -o jsonpath='{[email protected][*]}{.metadata.name}{\"\\n\"}{end}'); do ; echo; echo USDpod; oc -n openshift-ovn-kubernetes exec -c ovnkube-node USDpod -- bash -c 'for type in ipfix sflow netflow ; do ovs-vsctl find USDtype ; done'; done", "ovnkube-node-xrn4p _uuid : a4d2aaca-5023-4f3d-9400-7275f92611f9 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"] ovnkube-node-z4vq9 _uuid : 61d02fdb-9228-4993-8ff5-b27f01a29bd6 active_timeout : 60 add_id_to_interface : false engine_id : [] engine_type : [] external_ids : {} targets : [\"192.168.1.99:2056\"]-", "oc patch network.operator cluster --type='json' -p='[{\"op\":\"remove\", \"path\":\"/spec/exportNetworkFlows\"}]'", "network.operator.openshift.io/cluster patched", "./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "oc expose svc hello-openshift", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: hello-openshift-hello-openshift.<Ingress_Domain> 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift", "oc get ingresses.config/cluster -o 
jsonpath={.spec.domain}", "oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1", "oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0", "oc annotate <route> --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: routename HSTS: max-age=0", "oc edit ingresses.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: 7 - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains", "oc annotate route --all --all-namespaces --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"", "oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"", "oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains", "tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1", "tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789", "oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"", "oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"", "ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')", "curl USDROUTE_NAME -k -c /tmp/cookie_jar", "curl USDROUTE_NAME -k -b /tmp/cookie_jar", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 
1", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 spec: rules: - host: www.example.com http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate", "spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443", "oc apply -f ingress.yaml", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] 
-----END RSA PRIVATE KEY----- termination: reencrypt to: kind: Service name: frontend", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1", "oc create -f example-ingress.yaml", "oc get routes -o yaml", "apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3", "apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}", "oc get endpoints", "oc get endpointslices", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE-----", "oc create route passthrough route-passthrough-secured --service=frontend --port=8080", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend", "apiVersion: v1 kind: Service metadata: name: http-service spec: clusterIP: 172.30.163.110 externalIPs: - 192.168.132.253 externalTrafficPolicy: Cluster ports: - name: highport nodePort: 31903 port: 30102 protocol: TCP targetPort: 30102 selector: app: web sessionAffinity: None type: LoadBalancer status: loadBalancer: ingress: - ip: 192.168.132.253", "{ \"policy\": { \"allowedCIDRs\": [], \"rejectedCIDRs\": [] } }", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: {}", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 172.16.66.10/23 rejectedCIDRs: - 172.16.66.10/24", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: null", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: [] 1 policy: 2", "policy: allowedCIDRs: [] 1 rejectedCIDRs: [] 2", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: autoAssignCIDRs: - 192.168.132.254/29", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: policy: allowedCIDRs: - 192.168.132.0/29 - 192.168.132.8/29 rejectedCIDRs: - 192.168.132.7/32", "oc describe networks.config cluster", "oc edit networks.config cluster", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: externalIP: 1", "oc get networks.config cluster -o go-template='{{.spec.externalIP}}{{\"\\n\"}}'", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "cat router-internal.yaml apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" routeSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "cat router-internal.yaml", "apiVersion: v1 items: - apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: sharded namespace: openshift-ingress-operator spec: domain: <apps-sharded.basedomain.example.net> nodePlacement: nodeSelector: matchLabels: node-role.kubernetes.io/worker: \"\" namespaceSelector: matchLabels: type: sharded status: {} kind: List metadata: resourceVersion: \"\" selfLink: \"\"", "oc apply -f router-internal.yaml", "oc adm policy add-cluster-role-to-user cluster-admin username", "oc new-project myproject", "oc new-app 
nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex", "route.route.openshift.io/nodejs-ex exposed", "oc get route", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD nodejs-ex nodejs-ex-myproject.example.com nodejs-ex 8080-tcp None", "curl --head nodejs-ex-myproject.example.com", "HTTP/1.1 200 OK", "oc project project1", "apiVersion: v1 kind: Service metadata: name: egress-2 1 spec: ports: - name: db port: 3306 2 loadBalancerIP: loadBalancerSourceRanges: 3 - 10.0.0.0/8 - 192.168.0.0/16 type: LoadBalancer 4 selector: name: mysql 5", "oc create -f <file-name>", "oc create -f mysql-lb.yaml", "oc get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE egress-2 LoadBalancer 172.30.22.226 ad42f5d8b303045-487804948.example.com 3306:30357/TCP 15m", "curl <public-ip>:<port>", "curl 172.29.121.74:3306", "mysql -h 172.30.131.89 -u admin -p", "Enter password: Welcome to the MariaDB monitor. Commands end with ; or \\g. MySQL [(none)]>", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc replace --force --wait -f ingresscontroller.yml", "oc get infrastructure/cluster -o jsonpath='{.status.platformStatus.type}' AWS", "cat ingresscontroller-aws-nlb.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: USDmy_ingress_controller 1 namespace: openshift-ingress-operator spec: domain: USDmy_unique_ingress_domain 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 providerParameters: type: AWS aws: type: NLB", "oc create -f ingresscontroller-aws-nlb.yaml", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", "oc get networks.config cluster -o jsonpath='{.spec.externalIP}{\"\\n\"}'", "apiVersion: v1 kind: Service metadata: name: svc-with-externalip spec: externalIPs: - 192.174.120.10", "oc patch svc <name> -p '{ \"spec\": { \"externalIPs\": [ \"<ip_address>\" ] } }'", "oc patch svc mysql-55-rhel7 -p '{\"spec\":{\"externalIPs\":[\"192.174.120.10\"]}}'", "\"mysql-55-rhel7\" patched", "oc get svc", "NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE mysql-55-rhel7 172.30.131.89 192.174.120.10 3306/TCP 13m", "oc adm policy add-cluster-role-to-user cluster-admin <user_name>", "oc new-project myproject", "oc new-app nodejs:12~https://github.com/sclorg/nodejs-ex.git", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nodejs-ex ClusterIP 172.30.197.157 <none> 8080/TCP 70s", "oc project myproject", "oc expose service nodejs-ex --type=NodePort --name=nodejs-ex-nodeport --generator=\"service/v2\"", "service/nodejs-ex-nodeport exposed", "oc get svc -n myproject", "NAME TYPE CLUSTER-IP EXTERNAL-IP 
PORT(S) AGE nodejs-ex ClusterIP 172.30.217.127 <none> 3306/TCP 9m44s nodejs-ex-ingress NodePort 172.30.107.72 <none> 3306:31345/TCP 39s", "oc delete svc nodejs-ex", "oc get nns", "oc get nns node01 -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkState metadata: name: node01 1 status: currentState: 2 dns-resolver: interfaces: route-rules: routes: lastSuccessfulUpdateTime: \"2020-01-31T12:14:00Z\" 3", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 description: Linux bridge with eth1 as a port 4 type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: eth1", "oc apply -f <br1-eth1-policy.yaml> 1", "oc get nncp", "oc get nncp <policy> -o yaml", "oc get nnce", "oc get nnce <node>.<policy> -o yaml", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: <br1-eth1-policy> 1 spec: nodeSelector: 2 node-role.kubernetes.io/worker: \"\" 3 desiredState: interfaces: - name: br1 type: linux-bridge state: absent 4 - name: eth1 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "oc apply -f <br1-eth1-policy.yaml> 1", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: br1-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: br1 4 description: Linux bridge with eth1 as a port 5 type: linux-bridge 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 bridge: options: stp: enabled: false 10 port: - name: eth1 11", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: vlan-eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1.102 4 description: VLAN using eth1 5 type: vlan 6 state: up 7 vlan: base-iface: eth1 8 id: 102 9", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: bond0-eth1-eth2-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: bond0 4 description: Bond with ports eth1 and eth2 5 type: bond 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9 link-aggregation: mode: active-backup 10 options: miimon: '140' 11 port: 12 - eth1 - eth2 mtu: 1450 13", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: eth1-policy 1 spec: nodeSelector: 2 kubernetes.io/hostname: <node01> 3 desiredState: interfaces: - name: eth1 4 description: Configuring eth1 on node01 5 type: ethernet 6 state: up 7 ipv4: dhcp: true 8 enabled: true 9", "# interfaces: - name: bond10 description: Bonding eth2 and eth3 for Linux bridge type: bond state: up link-aggregation: port: - eth2 - eth3 - name: br1 description: Linux bridge on bond type: linux-bridge state: up bridge: port: - name: bond10 #", "interfaces: - name: eth1 description: static IP on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.168.122.250 1 prefix-length: 24 enabled: true", "interfaces: - name: eth1 description: No IP on eth1 type: ethernet state: up ipv4: enabled: false", "interfaces: - name: eth1 description: DHCP on eth1 type: ethernet state: up ipv4: dhcp: true enabled: true", "interfaces: - name: eth1 description: DHCP without gateway or DNS on eth1 type: ethernet state: up ipv4: dhcp: true auto-gateway: false auto-dns: false enabled: true", "interfaces: dns-resolver: config: search: - example.com - example.org 
server: - 8.8.8.8", "interfaces: - name: eth1 description: Static routing on eth1 type: ethernet state: up ipv4: dhcp: false address: - ip: 192.0.2.251 1 prefix-length: 24 enabled: true routes: config: - destination: 198.51.100.0/24 metric: 150 next-hop-address: 192.0.2.1 2 next-hop-interface: eth1 table-id: 254", "apiVersion: nmstate.io/v1beta1 kind: NodeNetworkConfigurationPolicy metadata: name: ens01-bridge-testfail spec: desiredState: interfaces: - name: br1 description: Linux bridge with the wrong port type: linux-bridge state: up ipv4: dhcp: true enabled: true bridge: options: stp: enabled: false port: - name: ens01", "oc apply -f ens01-bridge-testfail.yaml", "nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created", "oc get nncp", "NAME STATUS ens01-bridge-testfail FailedToConfigure", "oc get nnce", "NAME STATUS control-plane-1.ens01-bridge-testfail FailedToConfigure control-plane-2.ens01-bridge-testfail FailedToConfigure control-plane-3.ens01-bridge-testfail FailedToConfigure compute-1.ens01-bridge-testfail FailedToConfigure compute-2.ens01-bridge-testfail FailedToConfigure compute-3.ens01-bridge-testfail FailedToConfigure", "oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type==\"Failing\")].message}'", "error reconciling NodeNetworkConfigurationPolicy at desired state apply: , failed to execute nmstatectl set --no-commit --timeout 480: 'exit status 1' '' libnmstate.error.NmstateVerificationError: desired ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: - name: ens01 description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 current ======= --- name: br1 type: linux-bridge state: up bridge: options: group-forward-mask: 0 mac-ageing-time: 300 multicast-snooping: true stp: enabled: false forward-delay: 15 hello-time: 2 max-age: 20 priority: 32768 port: [] description: Linux bridge with the wrong port ipv4: address: [] auto-dns: true auto-gateway: true auto-routes: true dhcp: true enabled: true ipv6: enabled: false mac-address: 01-23-45-67-89-AB mtu: 1500 difference ========== --- desired +++ current @@ -13,8 +13,7 @@ hello-time: 2 max-age: 20 priority: 32768 - port: - - name: ens01 + port: [] description: Linux bridge with the wrong port ipv4: address: [] line 651, in _assert_interfaces_equal\\n current_state.interfaces[ifname],\\nlibnmstate.error.NmstateVerificationError:", "oc get nns control-plane-1 -o yaml", "- ipv4: name: ens1 state: up type: ethernet", "oc edit nncp ens01-bridge-testfail", "port: - name: ens1", "oc get nncp", "NAME STATUS ens01-bridge-testfail SuccessfullyConfigured", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc edit 
proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: {}", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "config.openshift.io/inject-trusted-cabundle=\"true\"", "apiVersion: v1 data: {} kind: ConfigMap metadata: labels: config.openshift.io/inject-trusted-cabundle: \"true\" name: ca-inject 1 namespace: apache", "apiVersion: apps/v1 kind: Deployment metadata: name: my-example-custom-ca-deployment namespace: my-example-custom-ca-ns spec: spec: containers: - name: my-container-that-needs-custom-ca volumeMounts: - name: trusted-ca mountPath: /etc/pki/ca-trust/extracted/pem readOnly: true volumes: - name: trusted-ca configMap: name: trusted-ca items: - key: ca-bundle.crt 1 path: tls-ca-bundle.pem 2", "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: default 1", "oc -n openshift-kuryr edit cm kuryr-config", "kind: ConfigMap metadata: annotations: networkoperator.openshift.io/kuryr-octavia-provider: ovn", "openstack loadbalancer list | grep amphora", "a4db683b-2b7b-4988-a582-c39daaad7981 | ostest-7mbj6-kuryr-api-loadbalancer | 84c99c906edd475ba19478a9a6690efd | 172.30.0.1 | ACTIVE | amphora", "openstack loadbalancer list | grep ovn", "2dffe783-98ae-4048-98d0-32aa684664cc | openshift-apiserver-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.167.119 | ACTIVE | ovn 0b1b2193-251f-4243-af39-2f99b29d18c5 | openshift-etcd/etcd | 84c99c906edd475ba19478a9a6690efd | 172.30.143.226 | ACTIVE | ovn f05b07fc-01b7-4673-bd4d-adaa4391458e | openshift-dns-operator/metrics | 84c99c906edd475ba19478a9a6690efd | 172.30.152.27 | ACTIVE | ovn", "openstack loadbalancer create --name API_OCP_CLUSTER --vip-subnet-id <id_of_worker_vms_subnet>", "openstack loadbalancer listener create --name API_OCP_CLUSTER_6443 --protocol HTTPS--protocol-port 6443 API_OCP_CLUSTER", "openstack loadbalancer pool create --name API_OCP_CLUSTER_pool_6443 --lb-algorithm ROUND_ROBIN --session-persistence type=<source_IP_address> --listener API_OCP_CLUSTER_6443 --protocol HTTPS", "openstack loadbalancer healthmonitor create --delay 5 --max-retries 4 --timeout 10 --type TCP API_OCP_CLUSTER_pool_6443", "for SERVER in USD(MASTER-0-IP MASTER-1-IP MASTER-2-IP) do openstack loadbalancer member create --address USDSERVER --protocol-port 6443 API_OCP_CLUSTER_pool_6443 done", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value API_OCP_CLUSTER) USDAPI_FIP", "openstack floating ip unset USDAPI_FIP", "openstack floating ip set --port USD(openstack loadbalancer show -c <vip_port_id> -f value 
USD{OCP_CLUSTER}-kuryr-api-loadbalancer) USDAPI_FIP", "oc -n openshift-ingress get svc router-internal-default -o yaml > external_router.yaml", "apiVersion: v1 kind: Service metadata: labels: ingresscontroller.operator.openshift.io/owning-ingresscontroller: default name: router-external-default 1 namespace: openshift-ingress spec: ports: - name: http port: 80 protocol: TCP targetPort: http - name: https port: 443 protocol: TCP targetPort: https - name: metrics port: 1936 protocol: TCP targetPort: 1936 selector: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default sessionAffinity: None type: LoadBalancer 2", "oc apply -f external_router.yaml", "oc -n openshift-ingress get svc", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-external-default LoadBalancer 172.30.235.33 10.46.22.161 80:30112/TCP,443:32359/TCP,1936:30317/TCP 3m38s router-internal-default ClusterIP 172.30.115.123 <none> 80/TCP,443/TCP,1936/TCP 22h", "openstack loadbalancer list | grep router-external", "| 21bf6afe-b498-4a16-a958-3229e83c002c | openshift-ingress/router-external-default | 66f3816acf1b431691b8d132cc9d793c | 172.30.235.33 | ACTIVE | octavia |", "openstack floating ip list | grep 172.30.235.33", "| e2f80e97-8266-4b69-8636-e58bacf1879e | 10.46.22.161 | 172.30.235.33 | 655e7122-806a-4e0a-a104-220c6e17bda6 | a565e55a-99e7-4d15-b4df-f9d7ee8c9deb | 66f3816acf1b431691b8d132cc9d793c |", "listen my-cluster-api-6443 bind 0.0.0.0:6443 mode tcp balance roundrobin server my-cluster-master-2 192.0.2.2:6443 check server my-cluster-master-0 192.0.2.3:6443 check server my-cluster-master-1 192.0.2.1:6443 check listen my-cluster-apps-443 bind 0.0.0.0:443 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.6:443 check server my-cluster-worker-1 192.0.2.5:443 check server my-cluster-worker-2 192.0.2.4:443 check listen my-cluster-apps-80 bind 0.0.0.0:80 mode tcp balance roundrobin server my-cluster-worker-0 192.0.2.7:80 check server my-cluster-worker-1 192.0.2.9:80 check server my-cluster-worker-2 192.0.2.8:80 check", "<load_balancer_ip_address> api.<cluster_name>.<base_domain> <load_balancer_ip_address> apps.<cluster_name>.<base_domain>", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "\"event\":\"ipAllocated\",\"ip\":\"172.22.0.201\",\"msg\":\"IP address assigned by controller", "oc get packagemanifests -n openshift-marketplace metallb-operator", "NAME CATALOG AGE metallb-operator Red Hat Operators 9h", "cat << EOF | oc apply -f - apiVersion: v1 kind: Namespace metadata: name: 
metallb-system EOF", "cat << EOF | oc apply -f - apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: metallb-operator namespace: metallb-system spec: targetNamespaces: - metallb-system EOF", "oc get operatorgroup -n metallb-system", "NAME AGE metallb-operator 14m", "OC_VERSION=USD(oc version -o yaml | grep openshiftVersion | grep -o '[0-9]*[.][0-9]*' | head -1)", "cat << EOF| oc apply -f - apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: metallb-operator-sub namespace: metallb-system spec: channel: \"USD{OC_VERSION}\" name: metallb-operator source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get installplan -n metallb-system", "NAME CSV APPROVAL APPROVED install-wzg94 metallb-operator.4.9.0-nnnnnnnnnnnn Automatic true", "oc get clusterserviceversion -n metallb-system -o custom-columns=Name:.metadata.name,Phase:.status.phase", "Name Phase metallb-operator.4.9.0-nnnnnnnnnnnn Succeeded", "cat << EOF | oc apply -f - apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system EOF", "oc get deployment -n metallb-system controller", "NAME READY UP-TO-DATE AVAILABLE AGE controller 1/1 1 1 11m", "oc get daemonset -n metallb-system speaker", "NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE speaker 6 6 6 6 6 kubernetes.io/os=linux 18m", "apiVersion: metallb.io/v1alpha1 kind: AddressPool metadata: namespace: metallb-system name: doc-example spec: protocol: layer2 addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75", "oc apply -f addresspool.yaml", "oc describe -n metallb-system addresspool doc-example", "Name: doc-example Namespace: metallb-system Labels: <none> Annotations: <none> API Version: metallb.io/v1alpha1 Kind: AddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Protocol: layer2 Events: <none>", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: protocol: layer2 addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: protocol: layer2 addresses: - 10.0.100.0/28 autoAssign: false", "apiVersion: metallb.io/v1beta1 kind: AddressPool metadata: name: doc-example-ipv6 namespace: metallb-system spec: protocol: layer2 addresses: - 2002:2:2::1-2002:2:2::100", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config", "apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer", "apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: 
\"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 5 spec: ports: - name: https port: 443 6 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 7 type: LoadBalancer loadBalancerIP: 172.31.249.7 8", "oc apply -f <service_name>.yaml", "service/<service_name> created", "oc describe service <service_name>", "Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example <.> Selector: app=service_name Type: LoadBalancer <.> IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 <.> Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: <.> Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"", "pod_network_name_info{interface=\"net0\",namespace=\"namespacename\",network_name=\"nadnamespace/firstNAD\",pod=\"podname\"} 0", "(container_network_receive_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_receive_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_bytes_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_errors_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_total) + on(namespace,pod,interface) group_left(network_name) ( pod_network_name_info ) (container_network_transmit_packets_dropped_total) + on(namespace,pod,interface) group_left(network_name)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html-single/networking/index
Hammer CLI Guide
Hammer CLI Guide Red Hat Satellite 6.11 A guide to using Hammer, the Satellite CLI tool Red Hat Satellite Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/hammer_cli_guide/index
2.2.2. Kickstart
2.2.2. Kickstart This section describes what behaviors have changed in automated installations (Kickstart). 2.2.2.1. Behavioral Changes Previously, a Kickstart file that did not have a network line resulted in the assumption that DHCP is used to configure the network. This was inconsistent with the rest of Kickstart in that all other missing lines mean installation will halt and prompt for input. Now, having no network line means that installation will halt and prompt for input if network access is required. If you want to continue using DHCP without interruption, add network --bootproto=dhcp to your Kickstart file. Also, the --bootproto=query option is deprecated. If you want to prompt for network configuration in the first stage of installation, use the asknetwork option. In previous versions of Red Hat Enterprise Linux, the next-server DHCP option was used to specify an NFS server containing Kickstart files when the ks option is passed to the system without a value. This DHCP option has changed to server-name in Red Hat Enterprise Linux 6. Traditionally, disks have been referred to throughout Kickstart by a device node name (such as sda ). The Linux kernel has moved to a more dynamic method where device names are not guaranteed to be consistent across reboots, so this can complicate usage in Kickstart scripts. To accommodate stable device naming, you can use any item from /dev/disk in place of a device node name. For example, instead of: You could use an entry similar to one of the following: This provides a consistent way to refer to disks that is more meaningful than just sda . This is especially useful in large storage environments. You can also use shell-like entries to refer to multiple disks. This is primarily intended to make it easier to use the clearpart and ignoredisk commands in large storage environments. For example, instead of: You could use an entry similar to the following: Kickstart will halt with an error in more cases than in previous versions. For example, if you refer to a disk that does not exist, the installation will halt and inform you of the error. This is designed to help detect errors in Kickstart files before they lead to larger problems. As a side-effect, files that are designed to be generic across different machine configurations can fail more frequently. These must be dealt with on a case-by-case basis. The /tmp/netinfo file used for Kickstart network information has been removed. Anaconda now uses NetworkManager for interface configuration by default, and stores configuration in the ifcfg files in /etc/sysconfig/network-scripts/ . It is possible to use this new location as a source of network settings for %pre and %post scripts.
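For a combined view of these changes, the following minimal sketch puts the prompt-free network line and the stable device names from this section into a single Kickstart fragment; it reuses the device paths shown in the examples for this section and is not a complete Kickstart file:
# Use DHCP explicitly so the missing-network-line prompt never appears
network --bootproto=dhcp
# Refer to the root partition by a persistent /dev/disk path instead of sda1
part / --fstype=ext4 --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1
# Exclude a whole bank of devices from partitioning with a shell-like glob
ignoredisk --drives=/dev/disk/by-path/pci-0000:00:05.0-scsi-*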
[ "part / --fstype=ext4 --onpart=sda1", "part / --fstype=ext4 --onpart=/dev/disk/by-path/pci-0000:00:05.0-scsi-0:0:0:0-part1 part / --fstype=ext4 --onpart=/dev/disk/by-id/ata-ST3160815AS_6RA0C882-part1", "ignoredisk --drives=sdaa,sdab,sdac", "ignoredisk --drives=/dev/disk/by-path/pci-0000:00:05.0-scsi-*" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-installation-graphical_installer-kickstart
2.5. Turning on Packet Forwarding
2.5. Turning on Packet Forwarding In order for the LVS router to forward network packets properly to the real servers, each LVS router node must have IP forwarding turned on in the kernel. Log in as root and change the line which reads net.ipv4.ip_forward = 0 in /etc/sysctl.conf to the following: The changes take effect when you reboot the system. To check if IP forwarding is turned on, issue the following command as root: /sbin/sysctl net.ipv4.ip_forward If the above command returns a 1 , then IP forwarding is enabled. If it returns a 0 , then you can turn it on manually using the following command: /sbin/sysctl -w net.ipv4.ip_forward=1
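As a compact recap of the procedure above, the commands below show one possible session; the sed invocation is only a convenience for editing /etc/sysctl.conf and assumes the file still contains the default net.ipv4.ip_forward = 0 line:
# Make the change persistent across reboots (equivalent to editing the file by hand)
sed -i 's/^net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/' /etc/sysctl.conf
# Check whether forwarding is currently enabled (1 means on)
/sbin/sysctl net.ipv4.ip_forward
# Enable it immediately without waiting for a reboot
/sbin/sysctl -w net.ipv4.ip_forward=1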
[ "net.ipv4.ip_forward = 1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-lvs-forwarding-vsa
Chapter 3. Configuring the Date and Time
Chapter 3. Configuring the Date and Time Modern operating systems distinguish between the following two types of clocks: A real-time clock ( RTC ), commonly referred to as a hardware clock , (typically an integrated circuit on the system board) that is completely independent of the current state of the operating system and runs even when the computer is shut down. A system clock , also known as a software clock , that is maintained by the kernel and its initial value is based on the real-time clock. Once the system is booted and the system clock is initialized, the system clock is completely independent of the real-time clock. The system time is always kept in Coordinated Universal Time ( UTC ) and converted in applications to local time as needed. Local time is the actual time in your current time zone, taking into account daylight saving time ( DST ). The real-time clock can use either UTC or local time. UTC is recommended. Red Hat Enterprise Linux 7 offers three command line tools that can be used to configure and display information about the system date and time: The timedatectl utility, which is new in Red Hat Enterprise Linux 7 and is part of systemd . The traditional date command. The hwclock utility for accessing the hardware clock. 3.1. Using the timedatectl Command The timedatectl utility is distributed as part of the systemd system and service manager and allows you to review and change the configuration of the system clock. You can use this tool to change the current date and time, set the time zone, or enable automatic synchronization of the system clock with a remote server. For information on how to display the current date and time in a custom format, see also Section 3.2, "Using the date Command" . 3.1.1. Displaying the Current Date and Time To display the current date and time along with detailed information about the configuration of the system and hardware clock, run the timedatectl command with no additional command line options: This displays the local and universal time, the currently used time zone, the status of the Network Time Protocol ( NTP ) configuration, and additional information related to DST. Example 3.1. Displaying the Current Date and Time The following is an example output of the timedatectl command on a system that does not use NTP to synchronize the system clock with a remote server: Important Changes to the status of chrony or ntpd will not be immediately noticed by timedatectl . If changes to the configuration or status of these tools are made, enter the following command: 3.1.2. Changing the Current Time To change the current time, type the following at a shell prompt as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. This command updates both the system time and the hardware clock. The result is similar to using both the date --set and hwclock --systohc commands. The command will fail if an NTP service is enabled. See Section 3.1.5, "Synchronizing the System Clock with a Remote Server" to temporarily disable the service. Example 3.2. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : By default, the system is configured to use UTC. To configure your system to maintain the clock in the local time, run the timedatectl command with the set-local-rtc option as root : To configure your system to maintain the clock in the local time, replace boolean with yes (or, alternatively, y , true , t , or 1 ).
To configure the system to use UTC, replace boolean with no (or, alternatively, n , false , f , or 0 ). The default option is no . 3.1.3. Changing the Current Date To change the current date, type the following at a shell prompt as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.3. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.1.4. Changing the Time Zone To list all available time zones, type the following at a shell prompt: To change the currently used time zone, type as root : Replace time_zone with any of the values listed by the timedatectl list-timezones command. Example 3.4. Changing the Time Zone To identify which time zone is closest to your present location, use the timedatectl command with the list-timezones command line option. For example, to list all available time zones in Europe, type: To change the time zone to Europe/Prague , type as root : 3.1.5. Synchronizing the System Clock with a Remote Server As opposed to the manual adjustments described in the previous sections, the timedatectl command also allows you to enable automatic synchronization of your system clock with a group of remote servers using the NTP protocol. Enabling NTP enables the chronyd or ntpd service, depending on which of them is installed. The NTP service can be enabled and disabled using a command as follows: To enable your system to synchronize the system clock with a remote NTP server, replace boolean with yes (the default option). To disable this feature, replace boolean with no . Example 3.5. Synchronizing the System Clock with a Remote Server To enable automatic synchronization of the system clock with a remote server, type: The command will fail if an NTP service is not installed. See Section 18.3.1, "Installing chrony" for more information. 3.2. Using the date Command The date utility is available on all Linux systems and allows you to display and configure the current date and time. It is frequently used in scripts to display detailed information about the system clock in a custom format. For information on how to change the time zone or enable automatic synchronization of the system clock with a remote server, see Section 3.1, "Using the timedatectl Command" . 3.2.1. Displaying the Current Date and Time To display the current date and time, run the date command with no additional command line options: This displays the day of the week followed by the current date, local time, abbreviated time zone, and year. By default, the date command displays the local time. To display the time in UTC, run the command with the --utc or -u command line option: You can also customize the format of the displayed information by providing the +"format" option on the command line: Replace format with one or more supported control sequences as illustrated in Example 3.6, "Displaying the Current Date and Time" . See Table 3.1, "Commonly Used Control Sequences" for a list of the most frequently used formatting options, or the date (1) manual page for a complete list of these options. Table 3.1. Commonly Used Control Sequences Control Sequence Description %H The hour in the HH format (for example, 17 ). %M The minute in the MM format (for example, 30 ). %S The second in the SS format (for example, 24 ).
%d The day of the month in the DD format (for example, 16 ). %m The month in the MM format (for example, 09 ). %Y The year in the YYYY format (for example, 2016 ). %Z The time zone abbreviation (for example, CEST ). %F The full date in the YYYY-MM-DD format (for example, 2016-09-16 ). This option is equal to %Y-%m-%d . %T The full time in the HH:MM:SS format (for example, 17:30:24). This option is equal to %H:%M:%S . Example 3.6. Displaying the Current Date and Time To display the current date and local time, type the following at a shell prompt: To display the current date and time in UTC, type the following at a shell prompt: To customize the output of the date command, type: 3.2.2. Changing the Current Time To change the current time, run the date command with the --set or -s option as root : Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. By default, the date command sets the system clock to the local time. To set the system clock in UTC, run the command with the --utc or -u command line option: Example 3.7. Changing the Current Time To change the current time to 11:26 p.m., run the following command as root : 3.2.3. Changing the Current Date To change the current date, run the date command with the --set or -s option as root : Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. Note that changing the date without specifying the current time results in setting the time to 00:00:00. Example 3.8. Changing the Current Date To change the current date to 2 June 2017 and keep the current time (11:26 p.m.), run the following command as root : 3.3. Using the hwclock Command hwclock is a utility for accessing the hardware clock, also referred to as the Real Time Clock (RTC). The hardware clock is independent of the operating system you use and works even when the machine is shut down. This utility is used for displaying the time from the hardware clock. hwclock also contains facilities for compensating for systematic drift in the hardware clock. The hardware clock stores the values of: year, month, day, hour, minute, and second. It is not able to store the time standard, local time or Coordinated Universal Time (UTC), nor set the Daylight Saving Time (DST). The hwclock utility saves its settings in the /etc/adjtime file, which is created with the first change you make, for example, when you set the time manually or synchronize the hardware clock with the system time. Note For the changes in the hwclock behaviour between Red Hat Enterprise Linux versions 6 and 7, see the Red Hat Enterprise Linux 7 Migration Planning Guide . 3.3.1. Displaying the Current Date and Time Running hwclock with no command line options as the root user returns the date and time in local time to standard output. Note that using the --utc or --localtime options with the hwclock command does not mean you are displaying the hardware clock time in UTC or local time. These options are used for setting the hardware clock to keep time in either of them. The time is always displayed in local time. Additionally, using the hwclock --utc or hwclock --local commands does not change the record in the /etc/adjtime file. This command can be useful when you know that the setting saved in /etc/adjtime is incorrect but you do not want to change the setting. On the other hand, you may receive misleading information if you use the command in an incorrect way. See the hwclock (8) manual page for more details. Example 3.9.
Displaying the Current Date and Time To display the current date and the current local time from the hardware clock, run as root : CEST is a time zone abbreviation and stands for Central European Summer Time. For information on how to change the time zone, see Section 3.1.4, "Changing the Time Zone" . 3.3.2. Setting the Date and Time Besides displaying the date and time, you can manually set the hardware clock to a specific time. When you need to change the hardware clock date and time, you can do so by appending the --set and --date options along with your specification: Replace dd with a day (a two-digit number), mmm with a month (a three-letter abbreviation), yyyy with a year (a four-digit number), HH with an hour (a two-digit number), MM with a minute (a two-digit number). At the same time, you can also set the hardware clock to keep the time in either UTC or local time by adding the --utc or --localtime options, respectively. In this case, UTC or LOCAL is recorded in the /etc/adjtime file. Example 3.10. Setting the Hardware Clock to a Specific Date and Time If you want to set the date and time to a specific value, for example, to "21:17, October 21, 2016", and keep the hardware clock in UTC, run the command as root in the following format: 3.3.3. Synchronizing the Date and Time You can synchronize the hardware clock and the current system time in both directions. Either you can set the hardware clock to the current system time by using this command: Note that if you use NTP, the hardware clock is automatically synchronized to the system clock every 11 minutes, and this command is useful only at boot time to get a reasonable initial system time. Or, you can set the system time from the hardware clock by using the following command: When you synchronize the hardware clock and the system time, you can also specify whether you want to keep the hardware clock in local time or UTC by adding the --utc or --localtime option. Similarly to using --set , UTC or LOCAL is recorded in the /etc/adjtime file. The hwclock --systohc --utc command is functionally similar to timedatectl set-local-rtc false and the hwclock --systohc --local command is an alternative to timedatectl set-local-rtc true . Example 3.11. Synchronizing the Hardware Clock with System Time To set the hardware clock to the current system time and keep the hardware clock in local time, run the following command as root : To avoid problems with time zone and DST switching, it is recommended to keep the hardware clock in UTC. Example 3.11, "Synchronizing the Hardware Clock with System Time" is useful, for example, in the case of a multi-boot setup with a Windows system, which assumes the hardware clock runs in local time by default, and all other systems need to accommodate it by using local time as well. It may also be needed with a virtual machine; if the virtual hardware clock provided by the host is running in local time, the guest system needs to be configured to use local time, too. 3.4. Additional Resources For more information on how to configure the date and time in Red Hat Enterprise Linux 7, see the resources listed below. Installed Documentation timedatectl (1) - The manual page for the timedatectl command line utility documents how to use this tool to query and change the system clock and its settings. date (1) - The manual page for the date command provides a complete list of supported command line options.
hwclock (8) - The manual page for the hwclock command provides a complete list of supported command line options. See Also Chapter 2, System Locale and Keyboard Configuration documents how to configure the keyboard layout. Chapter 6, Gaining Privileges documents how to gain administrative privileges by using the su and sudo commands. Chapter 10, Managing Services with systemd provides more information on systemd and documents how to use the systemctl command to manage system services.
[ "timedatectl", "~]USD timedatectl Local time: Mon 2016-09-16 19:30:24 CEST Universal time: Mon 2016-09-16 17:30:24 UTC Timezone: Europe/Prague (CEST, +0200) NTP enabled: no NTP synchronized: no RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2016-03-31 01:59:59 CET Sun 2016-03-31 03:00:00 CEST Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2016-10-27 02:59:59 CEST Sun 2016-10-27 02:00:00 CET", "~]# systemctl restart systemd-timedated.service", "timedatectl set-time HH:MM:SS", "~]# timedatectl set-time 23:26:00", "timedatectl set-local-rtc boolean", "timedatectl set-time YYYY-MM-DD", "~]# timedatectl set-time \"2017-06-02 23:26:00\"", "timedatectl list-timezones", "timedatectl set-timezone time_zone", "~]# timedatectl list-timezones | grep Europe Europe/Amsterdam Europe/Andorra Europe/Athens Europe/Belgrade Europe/Berlin Europe/Bratislava ...", "~]# timedatectl set-timezone Europe/Prague", "timedatectl set-ntp boolean", "~]# timedatectl set-ntp yes", "date", "date --utc", "date +\"format\"", "~]USD date Mon Sep 16 17:30:24 CEST 2016", "~]USD date --utc Mon Sep 16 15:30:34 UTC 2016", "~]USD date +\"%Y-%m-%d %H:%M\" 2016-09-16 17:30", "date --set HH:MM:SS", "date --set HH:MM:SS --utc", "~]# date --set 23:26:00", "date --set YYYY-MM-DD", "~]# date --set \"2017-06-02 23:26:00\"", "hwclock", "~]# hwclock Tue 15 Apr 2017 04:23:46 PM CEST -0.329272 seconds", "hwclock --set --date \"dd mmm yyyy HH:MM\"", "~]# hwclock --set --date \"21 Oct 2016 21:17\" --utc", "hwclock --systohc", "hwclock --hctosys", "~]# hwclock --systohc --localtime" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/chap-configuring_the_date_and_time
Registry
Registry OpenShift Container Platform 4.14 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/registry/index
Chapter 6. The ext3 File System
Chapter 6. The ext3 File System The default file system is the journaling ext3 file system. 6.1. Features of ext3 The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages: Availability After an unexpected power failure or system crash (also called an unclean system shutdown ), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program. This is a time-consuming process that can delay system boot time significantly, especially with large volumes containing a large number of files. During this time, any data on the volumes is unreachable. The journaling provided by the ext3 file system means that this sort of file system check is no longer necessary after an unclean system shutdown. The only time a consistency check occurs using ext3 is in certain rare hardware failure cases, such as hard drive failures. The time to recover an ext3 file system after an unclean system shutdown does not depend on the size of the file system or the number of files; rather, it depends on the size of the journal used to maintain consistency. The default journal size takes about a second to recover, depending on the speed of the hardware. Data Integrity The ext3 file system provides stronger data integrity in the event that an unclean system shutdown occurs. The ext3 file system allows you to choose the type and level of protection that your data receives. By default, the ext3 volumes are configured to keep a high level of data consistency with regard to the state of the file system. Speed Despite writing some data more than once, ext3 has a higher throughput in most cases than ext2 because ext3's journaling optimizes hard drive head motion. You can choose from three journaling modes to optimize speed, but doing so means trade-offs in regards to data integrity. Easy Transition It is easy to migrate from ext2 to ext3 and gain the benefits of a robust journaling file system without reformatting. Refer to Section 6.3, "Converting to an ext3 File System" for more on how to perform this task. The following sections walk you through the steps for creating and tuning ext3 partitions. For ext2 partitions, skip the partitioning and formating sections below and go directly to Section 6.3, "Converting to an ext3 File System" .
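Section 6.3 covers the conversion in detail; as a rough preview, adding a journal to an existing ext2 partition is a single tune2fs call followed by an /etc/fstab update. The device name below is a hypothetical example.
tune2fs -j /dev/sdb1    # add a journal to the existing ext2 file system, turning it into ext3
# then change the file system type for this partition from ext2 to ext3 in /etc/fstab and remount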
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/The_ext3_File_System
Chapter 8. Storage and File Systems
Chapter 8. Storage and File Systems This chapter outlines supported file systems and configuration options that affect application performance for both I/O and file systems in Red Hat Enterprise Linux 7. Section 8.1, "Considerations" discusses the I/O and file system related factors that affect performance. Section 8.2, "Monitoring and Diagnosing Performance Problems" teaches you how to use Red Hat Enterprise Linux 7 tools to diagnose performance problems related to I/O or file system configuration details. Section 8.4, "Configuration Tools" discusses the tools and strategies you can use to solve I/O and file system related performance problems in Red Hat Enterprise Linux 7. 8.1. Considerations The appropriate settings for storage and file system performance are highly dependent on the purpose of the storage. I/O and file system performance can be affected by any of the following factors: Data write or read patterns Data alignment with underlying geometry Block size File system size Journal size and location Recording access times Ensuring data reliability Pre-fetching data Pre-allocating disk space File fragmentation Resource contention Read this chapter to gain an understanding of the formatting and mount options that affect file system throughput, scalability, responsiveness, resource usage, and availability. 8.1.1. I/O Schedulers The I/O scheduler determines when and for how long I/O operations run on a storage device. It is also known as the I/O elevator. Red Hat Enterprise Linux 7 provides three I/O schedulers. deadline The default I/O scheduler for all block devices, except for SATA disks. Deadline attempts to provide a guaranteed latency for requests from the point at which requests reach the I/O scheduler. This scheduler is suitable for most use cases, but particularly those in which read operations occur more often than write operations. Queued I/O requests are sorted into a read or write batch and then scheduled for execution in increasing LBA order. Read batches take precedence over write batches by default, as applications are more likely to block on read I/O. After a batch is processed, deadline checks how long write operations have been starved of processor time and schedules the read or write batch as appropriate. The number of requests to handle per batch, the number of read batches to issue per write batch, and the amount of time before requests expire are all configurable; see Section 8.4.4, "Tuning the Deadline Scheduler" for details. cfq The default scheduler only for devices identified as SATA disks. The Completely Fair Queueing scheduler, cfq , divides processes into three separate classes: real time, best effort, and idle. Processes in the real time class are always performed before processes in the best effort class, which are always performed before processes in the idle class. This means that processes in the real time class can starve both best effort and idle processes of processor time. Processes are assigned to the best effort class by default. cfq uses historical data to anticipate whether an application will issue more I/O requests in the near future. If more I/O is expected, cfq idles to wait for the new I/O, even if I/O from other processes is waiting to be processed. Because of this tendency to idle, the cfq scheduler should not be used in conjunction with hardware that does not incur a large seek penalty unless it is tuned for this purpose. 
It should also not be used in conjunction with other non-work-conserving schedulers, such as a host-based hardware RAID controller, as stacking these schedulers tends to cause a large amount of latency. cfq behavior is highly configurable; see Section 8.4.5, "Tuning the CFQ Scheduler" for details. noop The noop I/O scheduler implements a simple FIFO (first-in first-out) scheduling algorithm. Requests are merged at the generic block layer through a simple last-hit cache. This can be the best scheduler for CPU-bound systems using fast storage. For details on setting a different default I/O scheduler, or specifying a different scheduler for a particular device, see Section 8.4, "Configuration Tools" . 8.1.2. File Systems Read this section for details about supported file systems in Red Hat Enterprise Linux 7, their recommended use cases, and the format and mount options available to file systems in general. Detailed tuning recommendations for these file systems are available in Section 8.4.7, "Configuring File Systems for Performance" . 8.1.2.1. XFS XFS is a robust and highly scalable 64-bit file system. It is the default file system in Red Hat Enterprise Linux 7. XFS uses extent-based allocation, and features a number of allocation schemes, including pre-allocation and delayed allocation, both of which reduce fragmentation and aid performance. It also supports metadata journaling, which can facilitate crash recovery. XFS can be defragmented and enlarged while mounted and active, and Red Hat Enterprise Linux 7 supports several XFS-specific backup and restore utilities. As of Red Hat Enterprise Linux 7.0 GA, XFS is supported to a maximum file system size of 500 TB, and a maximum file offset of 8 EB (sparse files). For details about administering XFS, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning XFS for a specific purpose, see Section 8.4.7.1, "Tuning XFS" . 8.1.2.2. Ext4 Ext4 is a scalable extension of the ext3 file system. Its default behavior is optimal for most work loads. However, it is supported only to a maximum file system size of 50 TB, and a maximum file size of 16 TB. For details about administering ext4, see the Red Hat Enterprise Linux 7 Storage Administration Guide . For assistance tuning ext4 for a specific purpose, see Section 8.4.7.2, "Tuning ext4" . 8.1.2.3. Btrfs (Technology Preview) The default file system for Red Hat Enterprise Linux 7 is XFS. Btrfs (B-tree file system), a relatively new copy-on-write (COW) file system, is shipped as a Technology Preview . Some of the unique Btrfs features include: The ability to take snapshots of specific files, volumes or sub-volumes rather than the whole file system; supporting several versions of redundant array of inexpensive disks (RAID); back referencing map I/O errors to file system objects; transparent compression (all files on the partition are automatically compressed); checksums on data and meta-data. Although Btrfs is considered a stable file system, it is under constant development, so some functionality, such as the repair tools, are basic compared to more mature file systems. Currently, selecting Btrfs is suitable when advanced features (such as snapshots, compression, and file data checksums) are required, but performance is relatively unimportant. If advanced features are not required, the risk of failure and comparably weak performance over time make other file systems preferable. 
Another drawback, compared to other file systems, is the maximum supported file system size of 50 TB. For more information, see Section 8.4.7.3, "Tuning Btrfs" , and the chapter on Btrfs in the Red Hat Enterprise Linux 7 Storage Administration Guide . 8.1.2.4. GFS2 Global File System 2 (GFS2) is part of the High Availability Add-On that provides clustered file system support to Red Hat Enterprise Linux 7. GFS2 provides a consistent file system image across all servers in a cluster, which allows servers to read from and write to a single shared file system. GFS2 is supported to a maximum file system size of 100 TB. For details on administering GFS2, see the Global File System 2 guide or the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on tuning GFS2 for a specific purpose, see Section 8.4.7.4, "Tuning GFS2" . 8.1.3. Generic Tuning Considerations for File Systems This section covers tuning considerations common to all file systems. For tuning recommendations specific to your file system, see Section 8.4.7, "Configuring File Systems for Performance" . 8.1.3.1. Considerations at Format Time Some file system configuration decisions cannot be changed after the device is formatted. This section covers the options available to you for decisions that must be made before you format your storage device. Size Create an appropriately-sized file system for your workload. Smaller file systems have proportionally shorter backup times and require less time and memory for file system checks. However, if your file system is too small, its performance will suffer from high fragmentation. Block size The block is the unit of work for the file system. The block size determines how much data can be stored in a single block, and therefore the smallest amount of data that is written or read at one time. The default block size is appropriate for most use cases. However, your file system will perform better and store data more efficiently if the block size (or the size of multiple blocks) is the same as or slightly larger than amount of data that is typically read or written at one time. A small file will still use an entire block. Files can be spread across multiple blocks, but this can create additional runtime overhead. Additionally, some file systems are limited to a certain number of blocks, which in turn limits the maximum size of the file system. Block size is specified as part of the file system options when formatting a device with the mkfs command. The parameter that specifies the block size varies with the file system; see the mkfs man page for your file system for details. For example, to see the options available when formatting an XFS file system, execute the following command. Geometry File system geometry is concerned with the distribution of data across a file system. If your system uses striped storage, like RAID, you can improve performance by aligning data and metadata with the underlying storage geometry when you format the device. Many devices export recommended geometry, which is then set automatically when the devices are formatted with a particular file system. If your device does not export these recommendations, or you want to change the recommended settings, you must specify geometry manually when you format the device with mkfs . The parameters that specify file system geometry vary with the file system; see the mkfs man page for your file system for details. 
For example, to see the options available when formatting an ext4 file system, execute the following command. External journals Journaling file systems document the changes that will be made during a write operation in a journal file prior to the operation being executed. This reduces the likelihood that a storage device will become corrupted in the event of a system crash or power failure, and speeds up the recovery process. Metadata-intensive workloads involve very frequent updates to the journal. A larger journal uses more memory, but reduces the frequency of write operations. Additionally, you can improve the seek time of a device with a metadata-intensive workload by placing its journal on dedicated storage that is as fast as, or faster than, the primary storage. Warning Ensure that external journals are reliable. Losing an external journal device will cause file system corruption. External journals must be created at format time, with journal devices being specified at mount time. For details, see the mkfs and mount man pages. 8.1.3.2. Considerations at Mount Time This section covers tuning decisions that apply to most file systems and can be specified as the device is mounted. Barriers File system barriers ensure that file system metadata is correctly written and ordered on persistent storage, and that data transmitted with fsync persists across a power outage. On versions of Red Hat Enterprise Linux, enabling file system barriers could significantly slow applications that relied heavily on fsync , or created and deleted many small files. In Red Hat Enterprise Linux 7, file system barrier performance has been improved such that the performance effects of disabling file system barriers are negligible (less than 3%). For further information, see the Red Hat Enterprise Linux 7 Storage Administration Guide . Access Time Every time a file is read, its metadata is updated with the time at which access occurred ( atime ). This involves additional write I/O. In most cases, this overhead is minimal, as by default Red Hat Enterprise Linux 7 updates the atime field only when the access time was older than the times of last modification ( mtime ) or status change ( ctime ). However, if updating this metadata is time consuming, and if accurate access time data is not required, you can mount the file system with the noatime mount option. This disables updates to metadata when a file is read. It also enables nodiratime behavior, which disables updates to metadata when a directory is read. Read-ahead Read-ahead behavior speeds up file access by pre-fetching data that is likely to be needed soon and loading it into the page cache, where it can be retrieved more quickly than if it were on disk. The higher the read-ahead value, the further ahead the system pre-fetches data. Red Hat Enterprise Linux attempts to set an appropriate read-ahead value based on what it detects about your file system. However, accurate detection is not always possible. For example, if a storage array presents itself to the system as a single LUN, the system detects the single LUN, and does not set the appropriate read-ahead value for an array. Workloads that involve heavy streaming of sequential I/O often benefit from high read-ahead values. The storage-related tuned profiles provided with Red Hat Enterprise Linux 7 raise the read-ahead value, as does using LVM striping, but these adjustments are not always sufficient for all workloads. 
The parameters that define read-ahead behavior vary with the file system; see the mount man page for details. 8.1.3.3. Maintenance Regularly discarding blocks that are not in use by the file system is a recommended practice for both solid-state disks and thinly-provisioned storage. There are two methods of discarding unused blocks: batch discard and online discard. Batch discard This type of discard is part of the fstrim command. It discards all unused blocks in a file system that match criteria specified by the administrator. Red Hat Enterprise Linux 7 supports batch discard on XFS and ext4 formatted devices that support physical discard operations (that is, on HDD devices where the value of /sys/block/ devname /queue/discard_max_bytes is not zero, and SSD devices where the value of /sys/block/ devname /queue/discard_granularity is not 0 ). Online discard This type of discard operation is configured at mount time with the discard option, and runs in real time without user intervention. However, online discard only discards blocks that are transitioning from used to free. Red Hat Enterprise Linux 7 supports online discard on XFS and ext4 formatted devices. Red Hat recommends batch discard except where online discard is required to maintain performance, or where batch discard is not feasible for the system's workload. Pre-allocation Pre-allocation marks disk space as being allocated to a file without writing any data into that space. This can be useful in limiting data fragmentation and poor read performance. Red Hat Enterprise Linux 7 supports pre-allocating space on XFS, ext4, and GFS2 devices at mount time; see the mount man page for the appropriate parameter for your file system. Applications can also benefit from pre-allocating space by using the fallocate(2) glibc call.
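As a rough illustration of the scheduler, mount-time, and maintenance options discussed above, the following sketch checks and changes the I/O scheduler for a single device, remounts a file system without access-time updates, and runs a batch discard. The device name and mount point are hypothetical, and the scheduler change lasts only until the next reboot.
cat /sys/block/sda/queue/scheduler               # list available schedulers; the active one is shown in brackets
echo deadline > /sys/block/sda/queue/scheduler   # switch this device to the deadline scheduler at run time
mount -o remount,noatime /mnt/data               # stop atime (and diratime) updates for this mount
fstrim -v /mnt/data                              # batch discard of unused blocks on a discard-capable device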
[ "man mkfs.xfs", "man mkfs.ext4", "man mkfs", "man mount", "man mount" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/performance_tuning_guide/chap-red_hat_enterprise_linux-performance_tuning_guide-storage_and_file_systems
9. Clustering
9. Clustering Clusters are multiple computers (nodes) working in concert to increase reliability, scalability, and availability to critical production services. High Availability using Red Hat Enterprise Linux 6 can be deployed in a variety of configurations to suit varying needs for performance, high-availability, load balancing, and file sharing. The following major updates to clustering are available in Red Hat Enterprise Linux 6.1 Rgmanager now supports the concept of critical and non-critical resources System Administrators can now configure and run a cluster using command line tools. This feature provides an alternative to manually editing the cluster.conf configuration file or using the graphical configuration tool, Luci. Red Hat Enterprise Linux High Availability on Red Hat Enterprise Linux KVM hosts is fully supported Comprehensive SNMP Trap support from central cluster daemons and sub-parts Additional watchdog integration allows a node to reboot itself when it loses quorum The development library packages provided in the High Availability, Load Balancer, and Resilient Storage Add-On channels are not considered supported nor are their ABIs or APIs guaranteed to be consistent. Note The Cluster Administration document describes the configuration and management of Red Hat cluster systems for Red Hat Enterprise Linux 6.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.1_release_notes/clustering
Chapter 15. Hardening the Networking service
Chapter 15. Hardening the Networking service The Networking service (neutron) is the software-defined networking (SDN) component of Red Hat OpenStack Platform (RHOSP). The RHOSP Networking service manages internal and external traffic to and from virtual machine instances and provides core services such as routing, segmentation, DHCP, and metadata. It provides the API for virtual networking capabilities and management of switches, routers, ports, and firewalls. For more information about the Red Hat OpenStack Platform Networking service, see the Configuring Red Hat OpenStack Platform networking guide. This section discusses OpenStack Networking configuration good practices as they apply to project network security within your OpenStack deployment. 15.1. Project network services workflow OpenStack Networking provides users self-service configuration of network resources. It is important that cloud architects and operators evaluate their design use cases in providing users the ability to create, update, and destroy available network resources. 15.2. Networking resource policy engine A policy engine and its configuration file ( policy.json ) within OpenStack Networking provides a method to provide finer-grained authorization of users on project networking methods and objects. The OpenStack Networking policy definitions affect network availability, network security and overall OpenStack security. Cloud architects and operators should carefully evaluate their policy towards user and project access to administration of network resources. Note It is important to review the default networking resource policy, as this policy can be modified to suit your security posture. If your deployment of OpenStack provides multiple external access points into different security zones it is important that you limit the project's ability to attach multiple vNICs to multiple external access points - this would bridge these security zones and could lead to unforeseen security compromise. You can help mitigate this risk by using the host aggregates functionality provided by Compute, or by splitting the project instances into multiple projects with different virtual network configurations. For more information on host aggregates, see Creating and managing host aggregates . 15.3. Security groups A security group is a collection of security group rules. Security groups and their rules allow administrators and projects the ability to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. When a virtual interface port is created in OpenStack Networking it is associated with a security group. Rules can be added to the default security group in order to change the behavior on a per-deployment basis. When using the Compute API to modify security groups, the updated security group applies to all virtual interface ports on an instance. This is due to the Compute security group APIs being instance-based rather than port-based, as found in neutron. 15.4. Mitigate ARP spoofing OpenStack Networking has a built-in feature to help mitigate the threat of ARP spoofing for instances. This should not be disabled unless careful consideration is given to the resulting risks. 15.5. Use a Secure Protocol for Authentication In /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf check that the value of auth_uri under the [keystone_authtoken] section is set to an Identity API endpoint that starts with `https:
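Two quick command-line checks related to the sections above are sketched below: the first inspects the auth_uri setting in the container-generated neutron.conf, and the second adds a narrowly scoped ingress rule to a security group. The security group name, port, and CIDR are placeholders, not recommendations from this guide.
grep -A 10 '^\[keystone_authtoken\]' /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf | grep auth_uri
openstack security group rule create --ingress --protocol tcp --dst-port 22 --remote-ip 203.0.113.0/24 mygroup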
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly_hardening-the-networking-service_security_and_hardening
Chapter 13. Configuring AWS STS for Red Hat Quay
Chapter 13. Configuring AWS STS for Red Hat Quay Support for Amazon Web Services (AWS) Security Token Service (STS) is available for standalone Red Hat Quay deployments and Red Hat Quay on OpenShift Container Platform. AWS STS is a web service for requesting temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users and for users that you authenticate, or federated users . This feature is useful for clusters using Amazon S3 as an object storage, allowing Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized. Configuring AWS STS is a multi-step process that requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources. Use the following procedures to configure AWS STS for Red Hat Quay. 13.1. Creating an IAM user Use the following procedure to create an IAM user. Procedure Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console. In the navigation pane, under Access management click Users . Click Create User and enter the following information: Enter a valid username, for example, quay-user . For Permissions options , click Add user to group . On the review and create page, click Create user . You are redirected to the Users page. Click the username, for example, quay-user . Copy the ARN of the user, for example, arn:aws:iam::123492922789:user/quay-user . On the same page, click the Security credentials tab. Navigate to Access keys . Click Create access key . On the Access key best practices & alternatives page, click Command Line Interface (CLI) , then, check the confirmation box. Then click . Optional. On the Set description tag - optional page, enter a description. Click Create access key . Copy and store the access key and the secret access key. Important This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time. Click Done . 13.2. Creating an S3 role Use the following procedure to create an S3 role for AWS STS. Prerequisites You have created an IAM user and stored the access key and the secret access key. Procedure If you are not already, navigate to the IAM dashboard by clicking Dashboard . In the navigation pane, click Roles under Access management . Click Create role . Click Custom Trust Policy , which shows an editable JSON policy. By default, it shows the following information: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": {}, "Action": "sts:AssumeRole" } ] } Under the Principal configuration field, add your AWS ARN information. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" }, "Action": "sts:AssumeRole" } ] } Click . On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click . On the Name, review, and create page, enter the following information: Enter a role name, for example, example-role . Optional. Add a description. Click the Create role button. You are navigated to the Roles page. Under Role name , the newly created S3 should be available. 13.3. 
Configuring Red Hat Quay to use AWS STS Use the following procedure to edit your Red Hat Quay config.yaml file to use AWS STS. Procedure Update your config.yaml file for Red Hat Quay to include the following information: # ... DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6 # ... 1 The unique Amazon Resource Name (ARN) required when configuring AWS STS. 2 The name of your s3 bucket. 3 The storage path for data. Usually /datastorage . 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 5 The generated AWS S3 user access key required when configuring AWS STS. 6 The generated AWS S3 user secret key required when configuring AWS STS. Restart your Red Hat Quay deployment. Verification Tag a sample image, for example, busybox , that will be pushed to the repository. For example: USD podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test Push the sample image by running the following command: USD podman push <quay-server.example.com>/<organization_name>/busybox:test Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry and then clicking Tags . Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket. Click the name of your s3 bucket. On the Objects page, click datastorage/ . On the datastorage/ page, the following resources should be seen: sha256/ uploads/ These resources indicate that the push was successful, and that AWS STS is properly configured.
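Before restarting Red Hat Quay, it can be worth confirming that the IAM user created earlier is actually allowed to assume the role; the sketch below uses the AWS CLI with that user's access keys configured, and the ARN and session name are placeholders.
aws sts assume-role --role-arn arn:aws:iam::123492922789:role/example-role --role-session-name quay-sts-test
# a successful call returns temporary credentials (AccessKeyId, SecretAccessKey, SessionToken) in the Credentials block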
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": {}, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::123492922789:user/quay-user\" }, \"Action\": \"sts:AssumeRole\" } ] }", "DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test", "podman push <quay-server.example.com>/<organization_name>/busybox:test" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/manage_red_hat_quay/configuring-aws-sts-quay
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption then refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for the step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster is deployed with minimum configuration when the standard deployment resource requirement is not met. See Resource requirements section in the Planning guide . 
Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/deploying_openshift_data_foundation_using_amazon_web_services/preparing_to_deploy_openshift_data_foundation
B.110. yum
B.110. yum B.110.1. RHBA-2010:0846 - yum bug fix update An updated yum package that fixes various bugs is now available. Yum is a utility that can check for and automatically download and install updated RPM packages. Dependencies are obtained and downloaded automatically, prompting the user for permission as necessary. Bug Fixes BZ# 634974 Previously, yum treated packages that provide kernel-modules as install-only packages. With this update, the install-only option has been removed. BZ# 637086 Previously, the "/var/cache/yum/" directory kept accumulating multiple '.sqlite' files and never cleaned them out. With this update, the '.sqlite' are automatically cleaned up. All users of yum are advised to upgrade to this updated package, which resolves these issues.
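To pick up a fixed build such as this one and clear out stale '.sqlite' cache files left behind by older versions, something like the following can be run as root; this is a generic sketch rather than a command taken from the advisory.
yum clean metadata    # remove cached repository metadata, including old .sqlite files under /var/cache/yum/
yum update yum        # install the updated yum package itself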
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/yum
Chapter 4. Red Hat Quay organizations overview
Chapter 4. Red Hat Quay organizations overview In Red Hat Quay, an organization is a grouping of users, repositories, and teams. It provides a means to organize and manage access control and permissions within the registry. With organizations, administrators can assign roles and permissions to users and teams. Other useful information about organizations includes the following: You cannot have an organization embedded within another organization. To subdivide an organization, you use teams. Organizations cannot contain users directly. You must first add a team, and then add one or more users to each team. Note Individual users can be added to specific repositories inside of an organization. Consequently, those users are not members of any team on the Repository Settings page. The Collaborators View on the Teams and Memberships page shows users who have direct access to specific repositories within the organization without needing to be part of that organization specifically. Teams can be set up in organizations as just members who use the repositories and associated images, or as administrators with special privileges for managing the Organization. Users can create their own organization to share repositories of container images. This can be done through the Red Hat Quay UI, or by the Red Hat Quay API if you have an OAuth token. 4.1. Creating an organization by using the UI Use the following procedure to create a new organization by using the UI. Procedure Log in to your Red Hat Quay registry. Click Organization in the navigation pane. Click Create Organization . Enter an Organization Name , for example, testorg . Enter an Organization Email . Click Create . Now, your example organization should appear under the Organizations page. 4.2. Creating an organization by using the Red Hat Quay API Use the following procedure to create a new organization using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a new organization using the POST /api/v1/organization/ endpoint: USD curl -X POST -H "Authorization: Bearer <bearer_token>" -H "Content-Type: application/json" -d '{ "name": "<new_organization_name>" }' "https://<quay-server.example.com>/api/v1/organization/" Example output "Created" After creation, organization details can be changed, such as adding an email address, with the PUT /api/v1/organization/{orgname} command. 
For example: USD curl -X PUT "https://<quay-server.example.com>/api/v1/organization/<orgname>" \ -H "Authorization: Bearer <access_token>" \ -H "Content-Type: application/json" \ -d '{ "email": "<org_email>", "invoice_email": <true/false>, "invoice_email_address": "<billing_email>" }' Example output {"name": "test", "email": "[email protected]", "avatar": {"name": "test", "hash": "a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe", "color": "#aec7e8", "kind": "user"}, "is_admin": true, "is_member": true, "teams": {"owners": {"name": "owners", "description": "", "role": "admin", "avatar": {"name": "owners", "hash": "6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90", "color": "#c7c7c7", "kind": "team"}, "can_view": true, "repo_count": 0, "member_count": 1, "is_synced": false}}, "ordered_teams": ["owners"], "invoice_email": true, "invoice_email_address": "[email protected]", "tag_expiration_s": 1209600, "is_free_account": true, "quotas": [{"id": 2, "limit_bytes": 10737418240, "limits": [{"id": 1, "type": "Reject", "limit_percent": 90}]}], "quota_report": {"quota_bytes": 0, "configured_quota": 10737418240, "running_backfill": "complete", "backfill_status": "complete"}} 4.3. Organization settings With Red Hat Quay, some basic organization settings can be adjusted by using the UI. This includes adjusting general settings, such as the e-mail address associated with the organization, and time machine settings, which allow administrators to adjust when a tag is garbage collected after it is permanently deleted. Use the following procedure to alter your organization settings by using the v2 UI. Procedure On the v2 UI, click Organizations . Click the name of the organization whose settings you want to adjust, for example, test-org . Click the Settings tab. Optional. Enter the email address associated with the organization. Optional. Set the allotted time for the Time Machine feature to one of the following: A few seconds A day 7 days 14 days A month Click Save . 4.4. Deleting an organization by using the UI Use the following procedure to delete an organization using the v2 UI. Procedure On the Organizations page, select the name of the organization you want to delete, for example, testorg . Click the More Actions drop-down menu. Click Delete . Note On the Delete page, there is a Search input box. With this box, users can search for specific organizations to ensure that they are properly scheduled for deletion. For example, if a user is deleting 10 organizations and they want to ensure that a specific organization was deleted, they can use the Search input box to confirm said organization is marked for deletion. Confirm that you want to permanently delete the organization by typing confirm in the box. Click Delete . After deletion, you are returned to the Organizations page. Note You can delete more than one organization at a time by selecting multiple organizations, and then clicking More Actions Delete . 4.5. Deleting an organization by using the Red Hat Quay API Use the following procedure to delete an organization using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. 
Procedure Enter the following command to delete an organization using the DELETE /api/v1/organization/{orgname} endpoint: USD curl -X DELETE \ -H "Authorization: Bearer <bearer_token>" \ "https://<quay-server.example.com>/api/v1/organization/<organization_name>" The CLI does not return information when deleting an organization from the CLI. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the GET /api/v1/organization/{orgname} command to see if details are returned for the deleted organization: USD curl -X GET \ -H "Authorization: Bearer <bearer_token>" \ "<quay-server.example.com>/api/v1/organization/<organization_name>" Example output {"detail": "Not Found", "error_message": "Not Found", "error_type": "not_found", "title": "not_found", "type": "http://<quay-server.example.com>/api/v1/error/not_found", "status": 404}
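Because the DELETE endpoint shown above acts on one organization per call, removing several organizations from the CLI is simply a loop over the same request; the organization names and the TOKEN variable below are placeholders.
for org in testorg1 testorg2 testorg3; do
  curl -X DELETE -H "Authorization: Bearer $TOKEN" "https://<quay-server.example.com>/api/v1/organization/$org"
done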
[ "curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"name\": \"<new_organization_name>\" }' \"https://<quay-server.example.com>/api/v1/organization/\"", "\"Created\"", "curl -X PUT \"https://<quay-server.example.com>/api/v1/organization/<orgname>\" -H \"Authorization: Bearer <access_token>\" -H \"Content-Type: application/json\" -d '{ \"email\": \"<org_email>\", \"invoice_email\": <true/false>, \"invoice_email_address\": \"<billing_email>\" }'", "{\"name\": \"test\", \"email\": \"[email protected]\", \"avatar\": {\"name\": \"test\", \"hash\": \"a15d479002b20f211568fd4419e76686d2b88a4980a5b4c4bc10420776c5f6fe\", \"color\": \"#aec7e8\", \"kind\": \"user\"}, \"is_admin\": true, \"is_member\": true, \"teams\": {\"owners\": {\"name\": \"owners\", \"description\": \"\", \"role\": \"admin\", \"avatar\": {\"name\": \"owners\", \"hash\": \"6f0e3a8c0eb46e8834b43b03374ece43a030621d92a7437beb48f871e90f8d90\", \"color\": \"#c7c7c7\", \"kind\": \"team\"}, \"can_view\": true, \"repo_count\": 0, \"member_count\": 1, \"is_synced\": false}}, \"ordered_teams\": [\"owners\"], \"invoice_email\": true, \"invoice_email_address\": \"[email protected]\", \"tag_expiration_s\": 1209600, \"is_free_account\": true, \"quotas\": [{\"id\": 2, \"limit_bytes\": 10737418240, \"limits\": [{\"id\": 1, \"type\": \"Reject\", \"limit_percent\": 90}]}], \"quota_report\": {\"quota_bytes\": 0, \"configured_quota\": 10737418240, \"running_backfill\": \"complete\", \"backfill_status\": \"complete\"}}", "curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"https://<quay-server.example.com>/api/v1/organization/<organization_name>\"", "curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/organization/<organization_name>\"", "{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://<quay-server.example.com>/api/v1/error/not_found\", \"status\": 404}" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/organizations-overview
Chapter 10. Using config maps with applications
Chapter 10. Using config maps with applications Config maps allow you to decouple configuration artifacts from image content to keep containerized applications portable. The following sections define config maps and how to create and use them. 10.1. Understanding config maps Many applications require configuration by using some combination of configuration files, command line arguments, and environment variables. In OpenShift Container Platform, these configuration artifacts are decoupled from image content to keep containerized applications portable. The ConfigMap object provides mechanisms to inject containers with configuration data while keeping containers agnostic of OpenShift Container Platform. A config map can be used to store fine-grained information like individual properties or coarse-grained information like entire configuration files or JSON blobs. The ConfigMap object holds key-value pairs of configuration data that can be consumed in pods or used to store configuration data for system components such as controllers. For example: ConfigMap Object Definition kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2 1 Contains the configuration data. 2 Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. Enter the file data in Base 64. Note You can use the binaryData field when you create a config map from a binary file, such as an image. Configuration data can be consumed in pods in a variety of ways. A config map can be used to: Populate environment variable values in containers Set command-line arguments in a container Populate configuration files in a volume Users and system components can store configuration data in a config map. A config map is similar to a secret, but designed to more conveniently support working with strings that do not contain sensitive information. Config map restrictions A config map must be created before its contents can be consumed in pods. Controllers can be written to tolerate missing configuration data. Consult individual components configured by using config maps on a case-by-case basis. ConfigMap objects reside in a project. They can only be referenced by pods in the same project. The Kubelet only supports the use of a config map for pods it gets from the API server. This includes any pods created by using the CLI, or indirectly from a replication controller. It does not include pods created by using the OpenShift Container Platform node's --manifest-url flag, its --config flag, or its REST API because these are not common ways to create pods. Additional resources Creating and using config maps 10.2. Use cases: Consuming config maps in pods The following sections describe some uses cases when consuming ConfigMap objects in pods. 10.2.1. Populating environment variables in containers by using config maps You can use config maps to populate individual environment variables in containers or to populate environment variables in containers from all keys that form valid environment variable names. As an example, consider the following config map: ConfigMap with two environment variables apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4 1 Name of the config map. 
2 The project in which the config map resides. Config maps can only be referenced by pods in the same project. 3 4 Environment variables to inject. ConfigMap with one environment variable apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2 1 Name of the config map. 2 Environment variable to inject. Procedure You can consume the keys of this ConfigMap in a pod using configMapKeyRef sections. Sample Pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never 1 Stanza to pull the specified environment variables from a ConfigMap . 2 Name of a pod environment variable that you are injecting a key's value into. 3 5 Name of the ConfigMap to pull specific environment variables from. 4 6 Environment variable to pull from the ConfigMap . 7 Makes the environment variable optional. As optional, the pod will be started even if the specified ConfigMap and keys do not exist. 8 Stanza to pull all environment variables from a ConfigMap . 9 Name of the ConfigMap to pull all environment variables from. When this pod is run, the pod logs will include the following output: Note SPECIAL_TYPE_KEY=charm is not listed in the example output because optional: true is set. 10.2.2. Setting command-line arguments for container commands with config maps You can use a config map to set the value of the commands or arguments in a container by using the Kubernetes substitution syntax USD(VAR_NAME) . As an example, consider the following config map: apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure To inject values into a command in a container, you must consume the keys you want to use as environment variables. Then you can refer to them in a container's command using the USD(VAR_NAME) syntax. Sample pod specification configured to inject specific environment variables apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never 1 Inject the values into a command in a container using the keys you want to use as environment variables. When this pod is run, the output from the echo command run in the test-container container is as follows: 10.2.3. Injecting content into a volume by using config maps You can inject content into a volume by using config maps. Example ConfigMap custom resource (CR) apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm Procedure You have a couple different options for injecting content into a volume by using config maps. 
The most basic way to inject content into a volume by using a config map is to populate the volume with files where the key is the file name and the content of the file is the value of the key: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/special.how" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never 1 File containing key. When this pod is run, the output of the cat command will be: You can also control the paths within the volume where config map keys are projected: apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "cat", "/etc/config/path/to/special-key" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never 1 Path to config map key. When this pod is run, the output of the cat command will be:
[ "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/building_applications/config-maps
Chapter 8. Direct Migration Requirements
Chapter 8. Direct Migration Requirements
Direct Migration is available with Migration Toolkit for Containers (MTC) 1.4.0 or later. There are two parts of Direct Migration:
Direct Volume Migration
Direct Image Migration
Direct Migration enables the migration of persistent volumes and internal images directly from the source cluster to the destination cluster without an intermediary replication repository (object storage).
8.1. Prerequisites
Expose the internal registries for both clusters (source and destination) involved in the migration for external traffic.
Ensure the remote source and destination clusters can communicate using OpenShift Container Platform routes on port 443.
Configure the exposed registry route in the source and destination MTC clusters; do this by specifying the spec.exposedRegistryPath field or from the MTC UI.
Note If the destination cluster is the same as the host cluster (where a migration controller exists), there is no need to configure the exposed registry route for that particular MTC cluster. The spec.exposedRegistryPath is required only for Direct Image Migration and not Direct Volume Migration.
Ensure the two spec flags in the MigPlan custom resource (CR), indirectImageMigration and indirectVolumeMigration, are set to false for Direct Migration to be performed. The default value for these flags is false.
The Direct Migration feature of MTC uses the Rsync utility.
8.2. Rsync configuration for direct volume migration
Direct Volume Migration (DVM) in MTC uses Rsync to synchronize files between the source and the target persistent volumes (PVs), using a direct connection between the two PVs. Rsync is a command-line tool that allows you to transfer files and directories to local and remote destinations. The rsync command used by DVM is optimized for clusters that are functioning as expected.
The MigrationController CR exposes the following variables to configure rsync_options in Direct Volume Migration:
Variable | Type | Default value | Description
rsync_opt_bwlimit | int | Not set | When set to a positive integer, the --bwlimit=<int> option is added to the Rsync command.
rsync_opt_archive | bool | true | Sets the --archive option in the Rsync command.
rsync_opt_partial | bool | true | Sets the --partial option in the Rsync command.
rsync_opt_delete | bool | true | Sets the --delete option in the Rsync command.
rsync_opt_hardlinks | bool | true | Sets the --hard-links option in the Rsync command.
rsync_opt_info | string | COPY2 DEL2 REMOVE2 SKIP2 FLIST2 PROGRESS2 STATS2 | Enables detailed logging in the Rsync pod.
rsync_opt_extras | string | Empty | Reserved for any other arbitrary options.
The options set through these variables are global for all migrations. The configuration takes effect for all future migrations as soon as the Operator successfully reconciles the MigrationController CR. Any ongoing migration can use the updated settings depending on which step it is currently in. Therefore, it is recommended that the settings be applied before running a migration. You can update the settings as needed at any time.
Use the rsync_opt_extras variable with caution. Any options passed using this variable are appended to the rsync command in addition to the options set by the other variables. Ensure that you separate multiple options with white space. Any error in specifying options can lead to a failed migration. However, you can update the MigrationController CR as many times as required for future migrations.
Customizing the rsync_opt_info flag can adversely affect the progress reporting capabilities in MTC.
However, removing progress reporting can have a performance advantage. This option should only be used when the performance of the Rsync operation is observed to be unacceptable.
Note The default configuration used by DVM is tested in various environments. It is acceptable for most production use cases provided the clusters are healthy and performing well. These configuration variables should be used in case the default settings do not work and the Rsync operation fails.
8.2.1. Resource limit configurations for Rsync pods
The MigrationController CR exposes the following variables to configure resource usage requirements and limits on Rsync:
Variable | Type | Default | Description
source_rsync_pod_cpu_limits | string | 1 | Source Rsync pod's CPU limit
source_rsync_pod_memory_limits | string | 1Gi | Source Rsync pod's memory limit
source_rsync_pod_cpu_requests | string | 400m | Source Rsync pod's CPU requests
source_rsync_pod_memory_requests | string | 1Gi | Source Rsync pod's memory requests
target_rsync_pod_cpu_limits | string | 1 | Target Rsync pod's CPU limit
target_rsync_pod_cpu_requests | string | 400m | Target Rsync pod's CPU requests
target_rsync_pod_memory_limits | string | 1Gi | Target Rsync pod's memory limit
target_rsync_pod_memory_requests | string | 1Gi | Target Rsync pod's memory requests
8.2.1.1. Supplemental group configuration for Rsync pods
If persistent volume claims (PVCs) use shared storage, you can configure access to that storage by adding supplemental groups to the Rsync pod definitions so that the pods are allowed access:
Variable | Type | Default | Description
src_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for source Rsync pods
target_supplemental_groups | string | Not set | Comma-separated list of supplemental groups for target Rsync pods
For example, the MigrationController CR can be updated to set the values:

spec:
  src_supplemental_groups: "1000,2000"
  target_supplemental_groups: "2000,3000"

8.2.1.2. Rsync retry configuration
With Migration Toolkit for Containers (MTC) 1.4.3 and later, the migration controller can retry a failed Rsync operation. By default, it retries Rsync until all of the data is successfully transferred from the source to the target volume or a specified number of retries is met. The default retry limit is set to 20. For larger volumes, a limit of 20 retries may not be sufficient. You can increase the retry limit by using the following variable in the MigrationController CR:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  rsync_backoff_limit: 40

In this example, the retry limit is increased to 40.
8.2.1.3. Running Rsync as either root or non-root
OpenShift Container Platform environments have the PodSecurityAdmission controller enabled by default. This controller requires cluster administrators to enforce Pod Security Standards by means of namespace labels. All workloads in the cluster are expected to run at one of the following Pod Security Standard levels: Privileged, Baseline, or Restricted. Every cluster has its own default policy set.
To guarantee successful data transfer in all environments, Migration Toolkit for Containers (MTC) 1.7.5 introduced changes in Rsync pods, including running Rsync pods as a non-root user by default. This ensures that data transfer is possible even for workloads that do not necessarily require higher privileges. This change was made because it is best to run workloads with the lowest level of privileges possible.
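For context on the namespace labels mentioned above, the following is a small sketch of inspecting and setting Pod Security Admission labels on a namespace with oc. The namespace name and the privileged level are placeholders; the level you actually need depends on your workloads, so check the next section and the linked documentation before applying anything.

# Inspect the Pod Security Admission labels currently set on a namespace
oc get namespace my-source-namespace --show-labels

# Illustrative example: set the enforce/audit/warn labels to the privileged level
oc label namespace my-source-namespace \
  pod-security.kubernetes.io/enforce=privileged \
  pod-security.kubernetes.io/audit=privileged \
  pod-security.kubernetes.io/warn=privileged \
  --overwrite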
8.2.1.3.1. Manually overriding default non-root operation for data transfer
Although running Rsync pods as a non-root user works in most cases, data transfer might fail when you run workloads as a root user on the source side. MTC provides two ways to manually override default non-root operation for data transfer:
Configure all migrations to run an Rsync pod as root on the destination cluster.
Run an Rsync pod as root on the destination cluster per migration.
In both cases, you must set the following labels on the source side of any namespaces that are running workloads with higher privileges before migration: enforce, audit, and warn.
To learn more about Pod Security Admission and setting values for labels, see Controlling pod security admission synchronization.
8.2.1.3.2. Configuring the MigrationController CR as root or non-root for all migrations
By default, Rsync runs as non-root. On the destination cluster, you can configure the MigrationController CR to run Rsync as root.
Procedure
Configure the MigrationController CR as follows:

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  migration_rsync_privileged: true

This configuration will apply to all future migrations.
8.2.1.3.3. Configuring the MigMigration CR as root or non-root per migration
On the destination cluster, you can configure the MigMigration CR to run Rsync as root or non-root, with the following non-root options:
As a specific user ID (UID)
As a specific group ID (GID)
Procedure
To run Rsync as root, configure the MigMigration CR according to this example:

apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  runAsRoot: true

To run Rsync as a specific user ID (UID) or as a specific group ID (GID), configure the MigMigration CR according to this example:

apiVersion: migration.openshift.io/v1alpha1
kind: MigMigration
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  [...]
  runAsUser: 10010001
  runAsGroup: 3

8.2.2. MigCluster Configuration
For every MigCluster resource created in Migration Toolkit for Containers (MTC), a ConfigMap named migration-cluster-config is created in the Migration Operator's namespace on the cluster that the MigCluster resource represents. The migration-cluster-config allows you to configure MigCluster-specific values. The Migration Operator manages the migration-cluster-config.
You can configure every value in the ConfigMap using the variables exposed in the MigrationController CR:
Variable | Type | Required | Description
migration_stage_image_fqin | string | No | Image to use for Stage Pods (applicable only to IndirectVolumeMigration)
migration_registry_image_fqin | string | No | Image to use for Migration Registry
rsync_endpoint_type | string | No | Type of endpoint for data transfer (Route, ClusterIP, NodePort)
rsync_transfer_image_fqin | string | No | Image to use for Rsync Pods (applicable only to DirectVolumeMigration)
migration_rsync_privileged | bool | No | Whether to run Rsync Pods as privileged or not
migration_rsync_super_privileged | bool | No | Whether to run Rsync Pods as super privileged containers (spc_t SELinux context) or not
cluster_subdomain | string | No | Cluster's subdomain
migration_registry_readiness_timeout | int | No | Readiness timeout (in seconds) for Migration Registry Deployment
migration_registry_liveness_timeout | int | No | Liveness timeout (in seconds) for Migration Registry Deployment
exposed_registry_validation_path | string | No | Subpath to validate exposed registry in a MigCluster (for example /v2)
8.3. Direct migration known issues
8.3.1. Applying the Skip SELinux relabel workaround with spc_t automatically on workloads running on OpenShift Container Platform
When attempting to migrate a namespace with Migration Toolkit for Containers (MTC) and a substantial volume associated with it, the rsync-server may become frozen without any further information to troubleshoot the issue.
8.3.1.1. Diagnosing the need for the Skip SELinux relabel workaround
Search for an error of Unable to attach or mount volumes for pod... timed out waiting for the condition in the kubelet logs from the node where the rsync-server for the Direct Volume Migration (DVM) runs.
Example kubelet log

kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699
kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server"
kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] "Error syncing pod, skipping" err="unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition" pod="caboodle-preprod/rsync-server" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29

8.3.1.2. Resolving using the Skip SELinux relabel workaround
To resolve this issue, set the migration_rsync_super_privileged parameter to true in both the source and destination MigClusters using the MigrationController custom resource (CR).
Example MigrationController CR

apiVersion: migration.openshift.io/v1alpha1
kind: MigrationController
metadata:
  name: migration-controller
  namespace: openshift-migration
spec:
  migration_rsync_super_privileged: true 1
  azure_resource_group: ""
  cluster_name: host
  mig_namespace_limit: "10"
  mig_pod_limit: "100"
  mig_pv_limit: "100"
  migration_controller: true
  migration_log_reader: true
  migration_ui: true
  migration_velero: true
  olm_managed: true
  restic_timeout: 1h
  version: 1.8.3

1 The value of the migration_rsync_super_privileged parameter indicates whether or not to run Rsync Pods as super privileged containers (spc_t SELinux context). Valid settings are true or false.
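The example CR above can also be applied without editing the full YAML by patching the existing MigrationController resource. This is only a sketch: it assumes the CR name and namespace match the example above, and the resolution described in this section requires the change on both the source and destination clusters.

# Sketch: enable super-privileged Rsync pods on one cluster's MigrationController
# (repeat against the other cluster involved in the migration)
oc patch migrationcontroller migration-controller \
  -n openshift-migration \
  --type=merge \
  -p '{"spec":{"migration_rsync_super_privileged":true}}'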
[ "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] rsync_backoff_limit: 40", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] migration_rsync_privileged: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsRoot: true", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: migration-controller namespace: openshift-migration spec: [...] runAsUser: 10010001 runAsGroup: 3", "kubenswrapper[3879]: W0326 16:30:36.749224 3879 volume_linux.go:49] Setting volume ownership for /var/lib/kubelet/pods/8905d88e-6531-4d65-9c2a-eff11dc7eb29/volumes/kubernetes.io~csi/pvc-287d1988-3fd9-4517-a0c7-22539acd31e6/mount and fsGroup set. If the volume has a lot of files then setting volume ownership could be slow, see https://github.com/kubernetes/kubernetes/issues/69699 kubenswrapper[3879]: E0326 16:32:02.706363 3879 kubelet.go:1841] \"Unable to attach or mount volumes for pod; skipping pod\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" kubenswrapper[3879]: E0326 16:32:02.706496 3879 pod_workers.go:965] \"Error syncing pod, skipping\" err=\"unmounted volumes=[8db9d5b032dab17d4ea9495af12e085a], unattached volumes=[crane2-rsync-server-secret 8db9d5b032dab17d4ea9495af12e085a kube-api-access-dlbd2 crane2-stunnel-server-config crane2-stunnel-server-secret crane2-rsync-server-config]: timed out waiting for the condition\" pod=\"caboodle-preprod/rsync-server\" podUID=8905d88e-6531-4d65-9c2a-eff11dc7eb29", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: migration_rsync_super_privileged: true 1 azure_resource_group: \"\" cluster_name: host mig_namespace_limit: \"10\" mig_pod_limit: \"100\" mig_pv_limit: \"100\" migration_controller: true migration_log_reader: true migration_ui: true migration_velero: true olm_managed: true restic_timeout: 1h version: 1.8.3" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/migration_toolkit_for_containers/mtc-direct-migration-requirements
Appendix A. Applying custom configuration to Red Hat Satellite
Appendix A. Applying custom configuration to Red Hat Satellite
When you install and configure Satellite for the first time using satellite-installer, you can specify that the DNS and DHCP configuration files are not to be managed by Puppet using the installer flags --foreman-proxy-dns-managed=false and --foreman-proxy-dhcp-managed=false. If these flags are not specified during the initial installer run, rerunning the installer, for example for an upgrade, overwrites all manual changes. If changes are overwritten, you must run the restore procedure to restore the manual changes. For more information, see Restoring Manual Changes Overwritten by a Puppet Run.
To view all installer flags available for custom configuration, run satellite-installer --scenario satellite --full-help.
Some Puppet classes are not exposed to the Satellite installer. To manage them manually and prevent the installer from overwriting their values, specify the configuration values by adding entries to the configuration file /etc/foreman-installer/custom-hiera.yaml. This configuration file is in YAML format, consisting of one entry per line in the format of <puppet class>::<parameter name>: <value>. Configuration values specified in this file persist across installer reruns.
Common examples include:
For Apache, to set the ServerTokens directive to only return the Product name:

apache::server_tokens: Prod

To turn off the Apache server signature entirely:

apache::server_signature: Off

The Puppet modules for the Satellite installer are stored under /usr/share/foreman-installer/modules. Check the .pp files (for example: moduleName/manifests/example.pp) to look up the classes, parameters, and values. Alternatively, use the grep command to do keyword searches.
Setting some values may have unintended consequences that affect the performance or functionality of Red Hat Satellite. Consider the impact of the changes before you apply them, and test the changes in a non-production environment first. If you do not have a non-production Satellite environment, run the Satellite installer with the --noop and --verbose options. If your changes cause problems, remove the offending lines from custom-hiera.yaml and rerun the Satellite installer. If you have any specific questions about whether a particular value is safe to alter, contact Red Hat support.
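A minimal sketch of the lookup-and-test workflow described above; the grep pattern and the apache module path are illustrative, and the --noop run only previews what an installer rerun would change under the assumption that your scenario is satellite.

# Look up a parameter in the installer's Puppet modules before overriding it
grep -r "server_tokens" /usr/share/foreman-installer/modules/apache/manifests/

# After adding the entry to /etc/foreman-installer/custom-hiera.yaml,
# preview the effect of an installer rerun without applying any changes
satellite-installer --scenario satellite --noop --verbose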
[ "apache::server_tokens: Prod", "apache::server_signature: Off" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/installing_satellite_server_in_a_disconnected_network_environment/applying-custom-configuration_satellite
21.4. Configuring the Red Hat Virtualization Manager to Send SNMP Traps
21.4. Configuring the Red Hat Virtualization Manager to Send SNMP Traps
Configure your Red Hat Virtualization Manager to send Simple Network Management Protocol traps to one or more external SNMP managers. SNMP traps contain system event information; they are used to monitor your Red Hat Virtualization environment. The number and type of traps sent to the SNMP manager can be defined within the Red Hat Virtualization Manager.
This procedure assumes that you have configured one or more external SNMP managers to receive traps, and that you have the following details:
The IP addresses or fully qualified domain names of machines that will act as SNMP managers. Optionally, determine the port through which the SNMP manager receives trap notifications; by default, this is UDP port 162.
The SNMP community. Multiple SNMP managers can belong to a single community. Management systems and agents can communicate only if they are within the same community. The default community is public.
The trap object identifier for alerts. The Red Hat Virtualization Manager provides a default OID of 1.3.6.1.4.1.2312.13.1.1. All trap types are sent, appended with event information, to the SNMP manager when this OID is defined. Note that changing the default trap prevents generated traps from complying with the Manager's management information base.
Note The Red Hat Virtualization Manager provides management information bases at /usr/share/doc/ovirt-engine/mibs/OVIRT-MIB.txt and /usr/share/doc/ovirt-engine/mibs/REDHAT-MIB.txt. Load the MIBs in your SNMP manager before proceeding.
Default SNMP configuration values exist on the Manager in the events notification daemon configuration file /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf. The values outlined in the following procedure are based on the default or example values provided in that file. It is recommended that you define an override file, rather than edit the ovirt-engine-notifier.conf file, to persist your configuration options after changes such as system upgrades.
Configuring SNMP Traps on the Manager
On the Manager, create the SNMP configuration file:

vi /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf

Specify the SNMP manager(s), the SNMP community, and the OID in the following format:

SNMP_MANAGERS="manager1.example.com manager2.example.com:162"
SNMP_COMMUNITY=public
SNMP_OID=1.3.6.1.4.1.2312.13.1.1

Define which events to send to the SNMP manager:
Example 21.1. Event Examples
Send all events to the default SNMP profile:

FILTER="include:*(snmp:) ${FILTER}"

Send all events with the severity ERROR or ALERT to the default SNMP profile:

FILTER="include:*:ERROR(snmp:) ${FILTER}"
FILTER="include:*:ALERT(snmp:) ${FILTER}"

Send events for VDC_START to the specified email address:

FILTER="include:VDC_START(snmp:[email protected]) ${FILTER}"

Send events for everything but VDC_START to the default SNMP profile:

FILTER="exclude:VDC_START include:*(snmp:) ${FILTER}"

This is the default filter defined in ovirt-engine-notifier.conf; if you do not disable this filter or apply overriding filters, no notifications will be sent:

FILTER="exclude:*"

VDC_START is an example of the audit log messages available. A full list of audit log messages can be found in /usr/share/doc/ovirt-engine/AuditLogMessages.properties. Alternatively, filter results within your SNMP manager.
Save the file.
Start the ovirt-engine-notifier service, and ensure that this service starts on boot:

systemctl start ovirt-engine-notifier.service
systemctl enable ovirt-engine-notifier.service

Check your SNMP manager to ensure that traps are being received.
Note SNMP_MANAGERS, MAIL_SERVER, or both must be properly defined in /usr/share/ovirt-engine/services/ovirt-engine-notifier/ovirt-engine-notifier.conf or in an override file in order for the notifier service to run.
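Putting the pieces above together, the following sketch writes a complete override file in one step and then restarts and enables the notifier so the new settings take effect; the manager host names are the placeholder values used above, and the single-quoted heredoc delimiter keeps ${FILTER} from being expanded by the shell that writes the file.

# Write the SNMP override file in one step (values are the examples used above)
cat > /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf << 'EOF'
SNMP_MANAGERS="manager1.example.com manager2.example.com:162"
SNMP_COMMUNITY=public
SNMP_OID=1.3.6.1.4.1.2312.13.1.1
FILTER="include:*:ERROR(snmp:) ${FILTER}"
EOF

# Restart the notifier to pick up the new configuration and enable it on boot
systemctl restart ovirt-engine-notifier.service
systemctl enable ovirt-engine-notifier.service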
[ "vi /etc/ovirt-engine/notifier/notifier.conf.d/20-snmp.conf", "SNMP_MANAGERS=\" manager1.example.com manager2.example.com:162\" SNMP_COMMUNITY=public SNMP_OID=1.3.6.1.4.1.2312.13.1.1", "FILTER=\"include:*(snmp:) USD{FILTER}\"", "FILTER=\"include:*:ERROR(snmp:) USD{FILTER}\"", "FILTER=\"include:*:ALERT(snmp:) USD{FILTER}\"", "FILTER=\"include: VDC_START (snmp: [email protected] ) USD{FILTER}\"", "FILTER=\"exclude: VDC_START include:*(snmp:) USD{FILTER}\"", "FILTER=\"exclude:*\"", "systemctl start ovirt-engine-notifier.service systemctl enable ovirt-engine-notifier.service" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/configuring_the_red_hat_enterprise_virtualization_manager_to_send_snmp_traps
Chapter 1. How to Upgrade
Chapter 1. How to Upgrade
An in-place upgrade is the recommended and supported way to upgrade your system to the next major version of RHEL.
1.1. How to upgrade from Red Hat Enterprise Linux 6
The Upgrading from RHEL 6 to RHEL 7 guide describes steps for an in-place upgrade from RHEL 6 to RHEL 7. The supported in-place upgrade path is from RHEL 6.10 to RHEL 7.9. If you are using SAP HANA, follow How do I upgrade from RHEL 6 to RHEL 7 with SAP HANA instead. Note that the upgrade path for RHEL with SAP HANA might differ.
The process of upgrading from RHEL 6 to RHEL 7 consists of the following steps; a brief command sketch follows the list:
Check that Red Hat supports the upgrade of your system.
Prepare your system for the upgrade by installing the required repositories and packages and by removing unsupported packages.
Check your system for problems that might affect your upgrade using the Preupgrade Assistant.
Upgrade your system by running the Red Hat Upgrade Tool.
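As a rough sketch of the final two steps only, the commands below assume that the Preupgrade Assistant and Red Hat Upgrade Tool packages are already installed and that a RHEL 7 installation repository URL is available; treat the exact options as illustrative and follow the linked upgrade guide for the authoritative procedure.

# Run the Preupgrade Assistant and review its report before upgrading
preupg

# Run the Red Hat Upgrade Tool against a RHEL 7 installation repository
# (the repository URL is a placeholder)
redhat-upgrade-tool --network 7.9 --instrepo <rhel7-repo-url>

# Reboot to start the upgrade process
reboot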
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/migration_planning_guide/chap-Red_Hat_Enterprise_Linux-Migration_Planning_Guide-Upgrading
Chapter 5. Configuring log settings for Serving and Eventing
Chapter 5. Configuring log settings for Serving and Eventing
You can configure logging for OpenShift Serverless Serving and OpenShift Serverless Eventing using the KnativeServing and KnativeEventing custom resources (CRs). The level of logging is determined by the specified loglevel value.
5.1. Supported log levels
The following loglevel values are supported:
Table 5.1. Supported log levels
Log level | Description
debug | Fine-grained debugging
info | Normal logging
warn | Unexpected but non-critical errors
error | Critical errors; unexpected during normal operation
dpanic | In debug mode, trigger a panic (crash)
Warning Using the debug level for production might negatively affect performance.
5.2. Configuring log settings
You can configure logging for Serving and Eventing in the KnativeServing custom resource (CR) and the KnativeEventing CR.
Procedure
Configure the log settings for Serving and Eventing by setting or modifying the loglevel value in the KnativeServing and KnativeEventing CRs, respectively. Here are two example configurations with all possible logging options set to level info:
KnativeServing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  config:
    logging:
      loglevel.controller: "info"
      loglevel.autoscaler: "info"
      loglevel.queueproxy: "info"
      loglevel.webhook: "info"
      loglevel.activator: "info"
      loglevel.hpaautoscaler: "info"
      loglevel.net-certmanager-controller: "info"
      loglevel.net-istio-controller: "info"
      loglevel.net-kourier-controller: "info"

KnativeEventing CR

apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
spec:
  config:
    logging:
      loglevel.controller: "info"
      loglevel.eventing-webhook: "info"
      loglevel.inmemorychannel-dispatcher: "info"
      loglevel.inmemorychannel-webhook: "info"
      loglevel.mt-broker-controller: "info"
      loglevel.mt_broker_filter: "info"
      loglevel.mt_broker_ingress: "info"
      loglevel.pingsource-mt-adapter: "info"
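As a sketch of applying the procedure above without editing the full CR, the following patch changes a single component's log level; it assumes the default resource names and namespaces shown in the examples above, and debug is used only for illustration given the performance warning earlier in this chapter.

# Raise the Knative Serving controller log level to debug (illustrative only)
oc patch knativeserving knative-serving \
  -n knative-serving \
  --type=merge \
  -p '{"spec":{"config":{"logging":{"loglevel.controller":"debug"}}}}'

# Revert to info when finished
oc patch knativeserving knative-serving \
  -n knative-serving \
  --type=merge \
  -p '{"spec":{"config":{"logging":{"loglevel.controller":"info"}}}}'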
[ "apiVersion: operator.knative.dev/v1beta1 kind: KnativeServing metadata: name: knative-serving namespace: knative-serving spec: config: logging: loglevel.controller: \"info\" loglevel.autoscaler: \"info\" loglevel.queueproxy: \"info\" loglevel.webhook: \"info\" loglevel.activator: \"info\" loglevel.hpaautoscaler: \"info\" loglevel.net-certmanager-controller: \"info\" loglevel.net-istio-controller: \"info\" loglevel.net-kourier-controller: \"info\"", "apiVersion: operator.knative.dev/v1beta1 kind: KnativeEventing metadata: name: knative-eventing namespace: knative-eventing spec: config: logging: loglevel.controller: \"info\" loglevel.eventing-webhook: \"info\" loglevel.inmemorychannel-dispatcher: \"info\" loglevel.inmemorychannel-webhook: \"info\" loglevel.mt-broker-controller: \"info\" loglevel.mt_broker_filter: \"info\" loglevel.mt_broker_ingress: \"info\" loglevel.pingsource-mt-adapter: \"info\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.35/html/observability/serverless-config-log-setting