title | content | commands | url
---|---|---|---|
Chapter 3. Installing AMQ Broker
|
Chapter 3. Installing AMQ Broker AMQ Broker is distributed as a platform-independent archive file. To install AMQ Broker on your system, you must download the archive and extract the contents. You should also understand the directories included in the archive. Prerequisites The host on which you are installing AMQ Broker must meet the AMQ Broker supported configurations. For more information, see Red Hat AMQ 7 Supported Configurations . 3.1. Downloading the AMQ Broker archive AMQ Broker is distributed as a platform-independent archive file. You can download it from the Red Hat Customer Portal. Prerequisites You must have a Red Hat subscription. For more information, see Using your Subscription . Procedure In a web browser, navigate to https://access.redhat.com/downloads/ and log in. The Product Downloads page is displayed. In the JBoss Integration and Automation section, click the Red Hat AMQ Broker link. The Software Downloads page is displayed. Select the desired AMQ Broker version from the Version drop-down menu. On the Releases tab, click the Download link for the specific AMQ Broker file you want to download. 3.2. Extracting the AMQ Broker archive on Linux If you are installing AMQ Broker on Red Hat Enterprise Linux, create a new user account for AMQ Broker, and then extract the contents from the installation archive. Procedure Create a new user named amq-broker and assign it a password. $ sudo useradd amq-broker $ sudo passwd amq-broker Create the directory /opt/redhat/amq-broker and make the new amq-broker user and group the owners of it. $ sudo mkdir /opt/redhat $ sudo mkdir /opt/redhat/amq-broker $ sudo chown -R amq-broker:amq-broker /opt/redhat/amq-broker Change the owner of the archive to the new user. $ sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip Move the installation archive to the directory you just created. $ sudo mv amq-broker-7.x.x-bin.zip /opt/redhat/amq-broker As the new amq-broker user, extract the contents by using the unzip command. $ su - amq-broker $ cd /opt/redhat/amq-broker $ unzip <archive_name>.zip $ exit A directory named something similar to apache-artemis-2.33.0.redhat-00016 is created. In the documentation, this location is referred to as <install_dir> . 3.3. Extracting the AMQ Broker archive on Windows systems If you are installing AMQ Broker on a Windows system, create a new folder for AMQ Broker, and then extract the contents there. Procedure Use Windows Explorer to create the folder \redhat\amq-broker on the desired drive letter. For example: C:\redhat\amq-broker Use Windows Explorer to move the installation archive to the directory you just created. In the \redhat\amq-broker directory, right-click the installation archive zip file and select Extract All . A directory named something similar to apache-artemis-2.33.0.redhat-00016 is created. In the documentation, this location is referred to as <install_dir> . 3.4. Understanding the AMQ Broker installation archive contents The directory created by extracting the archive is the top-level directory for the AMQ Broker installation. This directory is referred to as <install_dir> , and includes the following contents: Table 3.1. Contents of AMQ Broker installation directory This directory... Contains... <install_dir> /web/api API documentation. <install_dir> /bin Binaries and scripts needed to run AMQ Broker. <install_dir> /etc Configuration files. <install_dir> /lib JARs and libraries needed to run AMQ Broker.
<install_dir> /schema XML schemas used to validate AMQ Broker configuration. <install_dir> /web The web context loaded when AMQ Broker runs.
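The extraction steps in this chapter can be sanity-checked with a short shell script. This is a minimal sketch, not part of the product documentation; it assumes the /opt/redhat/amq-broker location and the amq-broker user from the Linux procedure above.

```bash
#!/bin/bash
# Illustrative check of the AMQ Broker extraction described above.
# Assumes the /opt/redhat/amq-broker path and amq-broker user from the procedure.
set -euo pipefail

INSTALL_PARENT=/opt/redhat/amq-broker

# The extracted directory name varies by release; pick the first apache-artemis-* match.
INSTALL_DIR=$(find "$INSTALL_PARENT" -maxdepth 1 -type d -name 'apache-artemis-*' | head -n 1)

if [ -z "$INSTALL_DIR" ]; then
    echo "No extracted AMQ Broker directory found under $INSTALL_PARENT" >&2
    exit 1
fi

# Confirm ownership and the expected top-level layout (bin, etc, lib, schema, web).
stat -c '%U:%G %n' "$INSTALL_DIR"
ls "$INSTALL_DIR"
```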
|
[
"sudo useradd amq-broker sudo passwd amq-broker",
"sudo mkdir /opt/redhat sudo mkdir /opt/redhat/amq-broker sudo chown -R amq-broker:amq-broker /opt/redhat/amq-broker",
"sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip",
"sudo mv amq-broker-7.x.x-bin.zip /opt/redhat/amq-broker",
"su - amq-broker cd /opt/redhat/amq-broker unzip <archive_name> .zip exit"
] |
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/getting_started_with_amq_broker/installing-broker-getting-started
|
Chapter 4. Deploy standalone Multicloud Object Gateway
|
Chapter 4. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with OpenShift Data Foundation provides flexibility in deployment and helps to reduce resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway Important The Multicloud Object Gateway has only a single copy of the database (NooBaa DB). If the NooBaa DB PVC becomes corrupted and cannot be recovered, this can result in total loss of the applicative data residing on the Multicloud Object Gateway. Because of this, Red Hat recommends taking regular backups of the NooBaa DB PVC. If NooBaa DB fails and cannot be recovered, then you can revert to the latest backed-up version. For instructions on backing up your NooBaa DB, follow the steps in this knowledgebase article . 4.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk, for a total of 3 disks (PVs). However, one PV eventually remains unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create the openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.14 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with the message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to take effect. 
In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 4.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . 
In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node)
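The web-console verification steps above can also be approximated from the command line. The following is a minimal sketch using standard oc commands, assuming the openshift-storage namespace used in the procedure:

```bash
#!/bin/bash
# Rough command-line counterpart to the verification steps above (illustrative only).
# Assumes the operator was installed into the openshift-storage namespace.
set -euo pipefail

# Check that the OpenShift Data Foundation operator CSV reports Succeeded.
oc get csv -n openshift-storage

# List the pods from the table above and confirm they are in the Running state.
oc get pods -n openshift-storage

# Optionally narrow the check to the Multicloud Object Gateway pods.
oc get pods -n openshift-storage | grep -E 'noobaa-(operator|core|db-pg|endpoint)' || \
    echo "No noobaa pods found yet; the StorageSystem may still be deploying."
```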
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector="
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_using_microsoft_azure/deploy-standalone-multicloud-object-gateway
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/scaling_storage/making-open-source-more-inclusive
|
25.13. iSCSI Discovery Configuration
|
25.13. iSCSI Discovery Configuration The default iSCSI configuration file is /etc/iscsi/iscsid.conf . This file contains iSCSI settings used by iscsid and iscsiadm . During target discovery, the iscsiadm tool uses the settings in /etc/iscsi/iscsid.conf to create two types of records: Node records in /var/lib/iscsi/nodes When logging into a target, iscsiadm uses the settings in this file. Discovery records in /var/lib/iscsi/ discovery_type When performing discovery to the same destination, iscsiadm uses the settings in this file. Before using different settings for discovery, delete the current discovery records (i.e. /var/lib/iscsi/ discovery_type ) first. To do this, use the following command: [5] Here, discovery_type can be either sendtargets , isns , or fw . For details on different types of discovery, refer to the DISCOVERY TYPES section of the iscsiadm (8) man page. There are two ways to reconfigure discovery record settings: Edit the /etc/iscsi/iscsid.conf file directly prior to performing a discovery. Discovery settings use the prefix discovery ; to view them, run: Alternatively, iscsiadm can also be used to directly change discovery record settings, as in: Refer to the iscsiadm (8) man page for more information on available setting options and valid value options for each. After configuring discovery settings, any subsequent attempts to discover new targets will use the new settings. Refer to Section 25.15, "Scanning iSCSI Interconnects" for details on how to scan for new iSCSI targets. For more information on configuring iSCSI target discovery, refer to the man pages of iscsiadm and iscsid . The /etc/iscsi/iscsid.conf file also contains examples on proper configuration syntax. [5] The target_IP and port variables refer to the IP address and port combination of a target/portal, respectively. For more information, refer to Section 25.7.1, "iSCSI API" and Section 25.15, "Scanning iSCSI Interconnects" .
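The two reconfiguration approaches described above can be illustrated with a short sketch. The portal address and the discovery.sendtargets.auth.authmethod setting below are only examples of a target portal and of a discovery-prefixed option; substitute values appropriate for your environment.

```bash
# Illustrative sketch of the discovery reconfiguration options described above.
# 192.168.1.50:3260 and the CHAP setting are example values, not requirements.

# Approach 1: review the discovery-prefixed settings in /etc/iscsi/iscsid.conf
# (commented defaults start with '#'), edit the file, then re-run discovery.
grep 'discovery\.' /etc/iscsi/iscsid.conf

# Delete the existing sendtargets discovery record for the portal first.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260 -o delete

# Approach 2: update a single discovery record setting directly with iscsiadm.
iscsiadm -m discovery -t sendtargets -p 192.168.1.50:3260 \
    -o update -n discovery.sendtargets.auth.authmethod -v CHAP
```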
|
[
"iscsiadm -m discovery -t discovery_type -p target_IP : port -o delete",
"iscsiadm -m discovery -t discovery_type -p target_IP : port",
"iscsiadm -m discovery -t discovery_type -p target_IP : port -o update -n setting -v % value"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/iscsi-config
|
Chapter 14. Locking
|
Chapter 14. Locking Red Hat JBoss Data Grid provides locking mechanisms to prevent dirty reads (where a transaction reads an outdated value before another transaction has applied changes to it) and non-repeatable reads. 14.1. Configure Locking (Remote Client-Server Mode) In Remote Client-Server mode, locking is configured using the locking element within the cache tags (for example, invalidation-cache , distributed-cache , replicated-cache or local-cache ). Note The default isolation mode for the Remote Client-Server mode configuration is READ_COMMITTED . If the isolation attribute is included to explicitly specify an isolation mode, it is ignored, a warning is thrown, and the default value is used instead. The following is a sample procedure of a basic locking configuration for a default cache in Red Hat JBoss Data Grid's Remote Client-Server mode. Procedure 14.1. Configure Locking (Remote Client-Server Mode) The acquire-timeout parameter specifies the number of milliseconds after which lock acquisition will time out. The concurrency-level parameter defines the number of lock stripes used by the LockManager. The striping parameter specifies whether lock striping will be used for the local cache.
|
[
"<distributed-cache> <locking acquire-timeout=\"30000\" concurrency-level=\"1000\" striping=\"false\" /> <!-- Additional configuration here --> </distributed-cache>"
] |
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/chap-locking
|
4.4. Securing Network Access
|
4.4. Securing Network Access 4.4.1. Securing Services With TCP Wrappers and xinetd TCP Wrappers are capable of much more than denying access to services. This section illustrates how they can be used to send connection banners, warn of attacks from particular hosts, and enhance logging functionality. See the hosts_options (5) man page for information about the TCP Wrapper functionality and control language. See the xinetd.conf (5) man page for the available flags, which act as options you can apply to a service. 4.4.1.1. TCP Wrappers and Connection Banners Displaying a suitable banner when users connect to a service is a good way to let potential attackers know that the system administrator is being vigilant. You can also control what information about the system is presented to users. To implement a TCP Wrappers banner for a service, use the banner option. This example implements a banner for vsftpd . To begin, create a banner file. It can be anywhere on the system, but it must have same name as the daemon. For this example, the file is called /etc/banners/vsftpd and contains the following lines: 220-Hello, %c 220-All activity on ftp.example.com is logged. 220-Inappropriate use will result in your access privileges being removed. The %c token supplies a variety of client information, such as the user name and host name, or the user name and IP address to make the connection even more intimidating. For this banner to be displayed to incoming connections, add the following line to the /etc/hosts.allow file: vsftpd : ALL : banners /etc/banners/ 4.4.1.2. TCP Wrappers and Attack Warnings If a particular host or network has been detected attacking the server, TCP Wrappers can be used to warn the administrator of subsequent attacks from that host or network using the spawn directive. In this example, assume that a cracker from the 206.182.68.0/24 network has been detected attempting to attack the server. Place the following line in the /etc/hosts.deny file to deny any connection attempts from that network, and to log the attempts to a special file: ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert The %d token supplies the name of the service that the attacker was trying to access. To allow the connection and log it, place the spawn directive in the /etc/hosts.allow file. Note Because the spawn directive executes any shell command, it is a good idea to create a special script to notify the administrator or execute a chain of commands in the event that a particular client attempts to connect to the server. 4.4.1.3. TCP Wrappers and Enhanced Logging If certain types of connections are of more concern than others, the log level can be elevated for that service using the severity option. For this example, assume that anyone attempting to connect to port 23 (the Telnet port) on an FTP server is a cracker. To denote this, place an emerg flag in the log files instead of the default flag, info , and deny the connection. To do this, place the following line in /etc/hosts.deny : in.telnetd : ALL : severity emerg This uses the default authpriv logging facility, but elevates the priority from the default value of info to emerg , which posts log messages directly to the console. 4.4.2. Verifying Which Ports Are Listening It is important to close unused ports to avoid possible attacks. For unexpected ports in listening state, you should investigate for possible signs of intrusion. 
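Before moving on to port scanning, the Note in the TCP Wrappers section above suggests pointing the spawn directive at a dedicated notification script rather than a raw shell command. The following is a minimal sketch; the script path, alert file, and mail recipient are assumptions, not values from this guide.

```bash
#!/bin/bash
# Hypothetical notification script for the spawn directive discussed above.
# Example hosts.deny entry (illustrative):
#   ALL : 206.182.68.0 : spawn /usr/local/sbin/intruder-alert.sh %c %d

CLIENT_INFO="$1"   # %c - client information supplied by TCP Wrappers
SERVICE="$2"       # %d - name of the service the client tried to access

# Log the attempt, mirroring the /var/log/intruder_alert example above.
echo "$(date) ${CLIENT_INFO} ${SERVICE}" >> /var/log/intruder_alert

# Notify the administrator; mail(1) is commonly available but not guaranteed.
echo "Connection attempt from ${CLIENT_INFO} to ${SERVICE}" | \
    mail -s "TCP Wrappers alert" root@localhost
```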
Using netstat for Open Ports Scan Enter the following command as root to determine which ports are listening for connections from the network: Use the -l option of the netstat command to display only listening server sockets: Using ss for Open Ports Scan Alternatively, use the ss utility to list open ports in the listening state. It can display more TCP and state information than netstat . The UNCONN state shows the ports in UDP listening mode. Make a scan for every IP address shown in the ss output (except for localhost 127.0.0.0 or ::1 range) from an external system. Use the -6 option for scanning an IPv6 address. Proceed then to make external checks using the nmap tool from another remote machine connected through the network to the first system. This can be used to verify rules in firewalld . The following is an example to determine which ports are listening for TCP connections: The TCP connect scan (-sT) is the default TCP scan type when the TCP SYN scan (-sS) is not an option. The -O option detects the operating system of the host. Using netstat and ss to Scan for Open SCTP Ports The netstat utility prints information about the Linux networking subsystem. To display protocol statistics for open Stream Control Transmission Protocol (SCTP) ports, enter the following command as root : The ss utility is also able to show SCTP open ports: See the ss (8) , netstat (8) , nmap (1) , and services (5) manual pages for more information. 4.4.3. Disabling Source Routing Source routing is an Internet Protocol mechanism that allows an IP packet to carry information, a list of addresses, that tells a router the path the packet must take. There is also an option to record the hops as the route is traversed. The list of hops taken, the "route record", provides the destination with a return path to the source. This allows the source (the sending host) to specify the route, loosely or strictly, ignoring the routing tables of some or all of the routers. It can allow a user to redirect network traffic for malicious purposes. Therefore, source-based routing should be disabled. The accept_source_route option causes network interfaces to accept packets with the Strict Source Routing ( SSR ) or Loose Source Routing ( LSR ) option set. The acceptance of source routed packets is controlled by sysctl settings. Issue the following command as root to drop packets with the SSR or LSR option set: Disabling the forwarding of packets should also be done in conjunction with the above when possible (disabling forwarding may interfere with virtualization). Issue the commands listed below as root: These commands disable forwarding of IPv4 and IPv6 packets on all interfaces: These commands disable forwarding of all multicast packets on all interfaces: Accepting ICMP redirects has few legitimate uses. Disable the acceptance and sending of ICMP redirected packets unless specifically required. These commands disable acceptance of all ICMP redirected packets on all interfaces: This command disables acceptance of secure ICMP redirected packets on all interfaces: This command disables acceptance of all IPv4 ICMP redirected packets on all interfaces: Important Sending of ICMP redirects remains active if at least one of the net.ipv4.conf.all.send_redirects or net.ipv4.conf. interface .send_redirects options is set to enabled. Ensure that you set the net.ipv4.conf. interface .send_redirects option to the 0 value for every interface . 
To automatically disable sending of ICMP requests whenever you add a new interface, enter the following command: There is only a directive to disable sending of IPv4 redirected packets. See RFC4294 for an explanation of " IPv6 Node Requirements " which resulted in this difference between IPv4 and IPv6. Note To make these settings persistent across reboots, modify the /etc/sysctl.conf file. For example, to disable acceptance of all IPv4 ICMP redirected packets on all interfaces, open the /etc/sysctl.conf file with an editor running as the root user and add a line as follows: net.ipv4.conf.all.send_redirects=0 See the sysctl man page, sysctl(8) , for more information. See RFC791 for an explanation of the Internet options related to source based routing and its variants. Warning Ethernet networks provide additional ways to redirect traffic, such as ARP or MAC address spoofing, unauthorized DHCP servers, and IPv6 router or neighbor advertisements. In addition, unicast traffic is occasionally broadcast, causing information leaks. These weaknesses can only be addressed by specific countermeasures implemented by the network operator. Host-based countermeasures are not fully effective. 4.4.3.1. Reverse Path Forwarding Reverse Path Forwarding is used to prevent packets that arrived through one interface from leaving through a different interface. When outgoing routes and incoming routes are different, it is sometimes referred to as asymmetric routing . Routers often route packets this way, but most hosts should not need to do this. Exceptions are such applications that involve sending traffic out over one link and receiving traffic over another link from a different service provider. For example, using leased lines in combination with xDSL or satellite links with 3G modems. If such a scenario is applicable to you, then turning off reverse path forwarding on the incoming interface is necessary. In short, unless you know that it is required, it is best enabled as it prevents users spoofing IP addresses from local subnets and reduces the opportunity for DDoS attacks. Note Red Hat Enterprise Linux 7 defaults to using Strict Reverse Path Forwarding following the Strict Reverse Path recommendation from RFC 3704, Ingress Filtering for Multihomed Networks .. Warning If forwarding is enabled, then Reverse Path Forwarding should only be disabled if there are other means for source-address validation (such as iptables rules for example). rp_filter Reverse Path Forwarding is enabled by means of the rp_filter directive. The sysctl utility can be used to make changes to the running system, and permanent changes can be made by adding lines to the /etc/sysctl.conf file. The rp_filter option is used to direct the kernel to select from one of three modes. To make a temporary global change, enter the following commands as root : where integer is one of the following: 0 - No source validation. 1 - Strict mode as defined in RFC 3704. 2 - Loose mode as defined in RFC 3704. The setting can be overridden per network interface using the net.ipv4.conf. interface .rp_filter command as follows: sysctl -w net.ipv4.conf. interface .rp_filter= integer Note To make these settings persistent across reboots, modify the /etc/sysctl.conf file. For example, to change the mode for all interfaces, open the /etc/sysctl.conf file with an editor running as the root user and add a line as follows: net.ipv4.conf.all.rp_filter=2 IPv6_rpfilter In case of the IPv6 protocol the firewalld daemon applies to Reverse Path Forwarding by default. 
The setting can be checked in the /etc/firewalld/firewalld.conf file. You can change the firewalld behavior by setting the IPv6_rpfilter option. If you need a custom configuration of Reverse Path Forwarding, you can perform it without the firewalld daemon by using the ip6tables command as follows: ip6tables -t raw -I PREROUTING -m rpfilter --invert -j DROP This rule should be inserted near the beginning of the raw/PREROUTING chain, so that it applies to all traffic, in particular before the stateful matching rules. For more information about the iptables and ip6tables services, see Section 5.13, "Setting and Controlling IP sets using iptables " . Enabling Packet Forwarding To enable packets arriving from outside of a system to be forwarded to another external host, IP forwarding must be enabled in the kernel. Log in as root and change the line which reads net.ipv4.ip_forward = 0 in the /etc/sysctl.conf file to the following: To load the changes from the /etc/sysctl.conf file, enter the following command: To check if IP forwarding is turned on, issue the following command as root : If the above command returns a 1 , then IP forwarding is enabled. If it returns a 0 , then you can turn it on manually using the following command: 4.4.3.2. Additional Resources The following are resources which explain more about Reverse Path Forwarding. Installed Documentation /usr/share/doc/kernel-doc- version /Documentation/networking/ip-sysctl.txt - This file contains a complete list of files and options available in the directory. Before accessing the kernel documentation for the first time, enter the following command as root : Online Documentation See RFC 3704 for an explanation of Ingress Filtering for Multihomed Networks.
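As the Notes above state, the temporary sysctl -w commands only affect the running system; to persist them, add the corresponding keys to /etc/sysctl.conf. The following is a minimal sketch collecting the settings discussed in this section; review each value before applying it, in particular rp_filter, which assumes strict mode is appropriate for the host.

```bash
#!/bin/bash
# Persist the hardening settings from this section across reboots (illustrative sketch).
cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.all.secure_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.all.rp_filter = 1
EOF

# Load the new values from /etc/sysctl.conf.
sysctl -p
```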
|
[
"220-Hello, %c 220-All activity on ftp.example.com is logged. 220-Inappropriate use will result in your access privileges being removed.",
"vsftpd : ALL : banners /etc/banners/",
"ALL : 206.182.68.0 : spawn /bin/echo `date` %c %d >> /var/log/intruder_alert",
"in.telnetd : ALL : severity emerg",
"~]# netstat -pan -A inet,inet6 | grep -v ESTABLISHED Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name Active Internet connections (servers and established) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN 1/systemd tcp 0 0 192.168.124.1:53 0.0.0.0:* LISTEN 1829/dnsmasq tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 1176/sshd tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN 1177/cupsd tcp6 0 0 :::111 :::* LISTEN 1/systemd tcp6 0 0 ::1:25 :::* LISTEN 1664/master sctp 0.0.0.0:2500 LISTEN 20985/sctp_darn udp 0 0 192.168.124.1:53 0.0.0.0:* 1829/dnsmasq udp 0 0 0.0.0.0:67 0.0.0.0:* 977/dhclient",
"~]# netstat -tlnw Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:111 0.0.0.0:* LISTEN tcp 0 0 192.168.124.1:53 0.0.0.0:* LISTEN tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:631 0.0.0.0:* LISTEN tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN tcp6 0 0 :::111 :::* LISTEN tcp6 0 0 :::22 :::* LISTEN tcp6 0 0 ::1:631 :::* LISTEN tcp6 0 0 ::1:25 :::* LISTEN raw6 0 0 :::58 :::* 7",
"~]# ss -tlw etid State Recv-Q Send-Q Local Address:Port Peer Address:Port udp UNCONN 0 0 :::ipv6-icmp :::* tcp LISTEN 0 128 *:sunrpc *:* tcp LISTEN 0 5 192.168.124.1:domain *:* tcp LISTEN 0 128 *:ssh *:* tcp LISTEN 0 128 127.0.0.1:ipp *:* tcp LISTEN 0 100 127.0.0.1:smtp *:* tcp LISTEN 0 128 :::sunrpc :::* tcp LISTEN 0 128 :::ssh :::* tcp LISTEN 0 128 ::1:ipp :::* tcp LISTEN 0 100 ::1:smtp :::*",
"~]# ss -plno -A tcp,udp,sctp Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port udp UNCONN 0 0 192.168.124.1:53 *:* users:((\"dnsmasq\",pid=1829,fd=5)) udp UNCONN 0 0 *%virbr0:67 *:* users:((\"dnsmasq\",pid=1829,fd=3)) udp UNCONN 0 0 *:68 *:* users:((\"dhclient\",pid=977,fd=6)) tcp LISTEN 0 5 192.168.124.1:53 *:* users:((\"dnsmasq\",pid=1829,fd=6)) tcp LISTEN 0 128 *:22 *:* users:((\"sshd\",pid=1176,fd=3)) tcp LISTEN 0 128 127.0.0.1:631 *:* users:((\"cupsd\",pid=1177,fd=12)) tcp LISTEN 0 100 127.0.0.1:25 *:* users:((\"master\",pid=1664,fd=13)) sctp LISTEN 0 5 *:2500 *:* users:((\"sctp_darn\",pid=20985,fd=3))",
"~]# nmap -sT -O 192.168.122.65 Starting Nmap 6.40 ( http://nmap.org ) at 2017-03-27 09:30 CEST Nmap scan report for 192.168.122.65 Host is up (0.00032s latency). Not shown: 998 closed ports PORT STATE SERVICE 22/tcp open ssh 111/tcp open rpcbind Device type: general purpose Running: Linux 3.X OS CPE: cpe:/o:linux:linux_kernel:3 OS details: Linux 3.7 - 3.9 Network Distance: 0 hops OS detection performed. Please report any incorrect results at http://nmap.org/submit/ . Nmap done: 1 IP address (1 host up) scanned in 1.79 seconds",
"~]# netstat -plnS Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name sctp 127.0.0.1:250 LISTEN 4125/sctp_darn sctp 0 0 127.0.0.1:260 127.0.0.1:250 CLOSE 4250/sctp_darn sctp 0 0 127.0.0.1:250 127.0.0.1:260 LISTEN 4125/sctp_darn",
"~]# netstat -nl -A inet,inet6 | grep 2500 sctp 0.0.0.0:2500 LISTEN",
"~]# ss -an | grep 2500 sctp LISTEN 0 5 *:2500 *:*",
"~]# /sbin/sysctl -w net.ipv4.conf.all.accept_source_route=0",
"~]# /sbin/sysctl -w net.ipv4.conf.all.forwarding=0",
"~]# /sbin/sysctl -w net.ipv6.conf.all.forwarding=0",
"~]# /sbin/sysctl -w net.ipv4.conf.all.mc_forwarding=0",
"~]# /sbin/sysctl -w net.ipv6.conf.all.mc_forwarding=0",
"~]# /sbin/sysctl -w net.ipv4.conf.all.accept_redirects=0",
"~]# /sbin/sysctl -w net.ipv6.conf.all.accept_redirects=0",
"~]# /sbin/sysctl -w net.ipv4.conf.all.secure_redirects=0",
"~]# /sbin/sysctl -w net.ipv4.conf.all.send_redirects=0",
"~]# /sbin/sysctl -w net.ipv4.conf.default.send_redirects=0",
"sysctl -w net.ipv4.conf.default.rp_filter= integer sysctl -w net.ipv4.conf.all.rp_filter= integer",
"net.ipv4.ip_forward = 1",
"/sbin/sysctl -p",
"/sbin/sysctl net.ipv4.ip_forward",
"/sbin/sysctl -w net.ipv4.ip_forward=1",
"~]# yum install kernel-doc"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-Securing_Network_Access
|
Web console
|
Web console OpenShift Container Platform 4.7 Getting started with the web console in OpenShift Container Platform Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/web_console/index
|
Chapter 2. Creating the required Alibaba Cloud resources
|
Chapter 2. Creating the required Alibaba Cloud resources Before you install OpenShift Container Platform, you must use the Alibaba Cloud console to create a Resource Access Management (RAM) user that has sufficient permissions to install OpenShift Container Platform into your Alibaba Cloud. This user must also have permissions to create new RAM users. You can also configure and use the ccoctl tool to create new credentials for the OpenShift Container Platform components with the permissions that they require. Important Alibaba Cloud on OpenShift Container Platform is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.1. Creating the required RAM user You must have an Alibaba Cloud Resource Access Management (RAM) user for the installation that has sufficient privileges. You can use the Alibaba Cloud Resource Access Management console to create a new user or modify an existing user. Later, you create credentials in OpenShift Container Platform based on this user's permissions. When you configure the RAM user, be sure to consider the following requirements: The user must have an Alibaba Cloud AccessKey ID and AccessKey secret pair. For a new user, you can select Open API Access for the Access Mode when creating the user. This mode generates the required AccessKey pair. For an existing user, you can add an AccessKey pair or you can obtain the AccessKey pair for that user. Note When created, the AccessKey secret is displayed only once. You must immediately save the AccessKey pair because the AccessKey pair is required for API calls. Add the AccessKey ID and secret to the ~/.alibabacloud/credentials file on your local computer. Alibaba Cloud automatically creates this file when you log in to the console. The Cloud Credential Operator (CCO) utility, ccoctl, uses these credentials when processing Credential Request objects. For example: [default] # Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret 1 Add your AccessKeyID and AccessKeySecret here. The RAM user must have the AdministratorAccess policy to ensure that the account has sufficient permission to create the OpenShift Container Platform cluster. This policy grants permissions to manage all Alibaba Cloud resources. When you attach the AdministratorAccess policy to a RAM user, you grant that user full access to all Alibaba Cloud services and resources. If you do not want to create a user with full access, create a custom policy with the following actions that you can add to your RAM user for installation. These actions are sufficient to install OpenShift Container Platform. Tip You can copy and paste the following JSON code into the Alibaba Cloud console to create a custom policy. For information on creating custom policies, see Create a custom policy in the Alibaba Cloud documentation. Example 2.1. 
Example custom policy JSON file { "Version": "1", "Statement": [ { "Action": [ "tag:ListTagResources", "tag:UntagResources" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "vpc:DescribeVpcs", "vpc:DeleteVpc", "vpc:DescribeVSwitches", "vpc:DeleteVSwitch", "vpc:DescribeEipAddresses", "vpc:DescribeNatGateways", "vpc:ReleaseEipAddress", "vpc:DeleteNatGateway", "vpc:DescribeSnatTableEntries", "vpc:CreateSnatEntry", "vpc:AssociateEipAddress", "vpc:ListTagResources", "vpc:TagResources", "vpc:DescribeVSwitchAttributes", "vpc:CreateVSwitch", "vpc:CreateNatGateway", "vpc:DescribeRouteTableList", "vpc:CreateVpc", "vpc:AllocateEipAddress", "vpc:ListEnhanhcedNatGatewayAvailableZones" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ecs:ModifyInstanceAttribute", "ecs:DescribeSecurityGroups", "ecs:DeleteSecurityGroup", "ecs:DescribeSecurityGroupReferences", "ecs:DescribeSecurityGroupAttribute", "ecs:RevokeSecurityGroup", "ecs:DescribeInstances", "ecs:DeleteInstances", "ecs:DescribeNetworkInterfaces", "ecs:DescribeInstanceRamRole", "ecs:DescribeUserData", "ecs:DescribeDisks", "ecs:ListTagResources", "ecs:AuthorizeSecurityGroup", "ecs:RunInstances", "ecs:TagResources", "ecs:ModifySecurityGroupPolicy", "ecs:CreateSecurityGroup", "ecs:DescribeAvailableResource", "ecs:DescribeRegions", "ecs:AttachInstanceRamRole" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "pvtz:DescribeRegions", "pvtz:DescribeZones", "pvtz:DeleteZone", "pvtz:DeleteZoneRecord", "pvtz:BindZoneVpc", "pvtz:DescribeZoneRecords", "pvtz:AddZoneRecord", "pvtz:SetZoneRecordStatus", "pvtz:DescribeZoneInfo", "pvtz:DescribeSyncEcsHostTask", "pvtz:AddZone" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "slb:DescribeLoadBalancers", "slb:SetLoadBalancerDeleteProtection", "slb:DeleteLoadBalancer", "slb:SetLoadBalancerModificationProtection", "slb:DescribeLoadBalancerAttribute", "slb:AddBackendServers", "slb:DescribeLoadBalancerTCPListenerAttribute", "slb:SetLoadBalancerTCPListenerAttribute", "slb:StartLoadBalancerListener", "slb:CreateLoadBalancerTCPListener", "slb:ListTagResources", "slb:TagResources", "slb:CreateLoadBalancer" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "ram:ListResourceGroups", "ram:DeleteResourceGroup", "ram:ListPolicyAttachments", "ram:DetachPolicy", "ram:GetResourceGroup", "ram:CreateResourceGroup", "ram:DeleteRole", "ram:GetPolicy", "ram:DeletePolicy", "ram:ListPoliciesForRole", "ram:CreateRole", "ram:AttachPolicyToRole", "ram:GetRole", "ram:CreatePolicy", "ram:CreateUser", "ram:DetachPolicyFromRole", "ram:CreatePolicyVersion", "ram:DetachPolicyFromUser", "ram:ListPoliciesForUser", "ram:AttachPolicyToUser", "ram:CreateUser", "ram:GetUser", "ram:DeleteUser", "ram:CreateAccessKey", "ram:ListAccessKeys", "ram:DeleteAccessKey", "ram:ListUsers", "ram:ListPolicyVersions" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "oss:DeleteBucket", "oss:DeleteBucketTagging", "oss:GetBucketTagging", "oss:GetBucketCors", "oss:GetBucketPolicy", "oss:GetBucketLifecycle", "oss:GetBucketReferer", "oss:GetBucketTransferAcceleration", "oss:GetBucketLog", "oss:GetBucketWebSite", "oss:GetBucketInfo", "oss:PutBucketTagging", "oss:PutBucket", "oss:OpenOssService", "oss:ListBuckets", "oss:GetService", "oss:PutBucketACL", "oss:GetBucketLogging", "oss:ListObjects", "oss:GetObject", "oss:PutObject", "oss:DeleteObject" ], "Resource": "*", "Effect": "Allow" }, { "Action": [ "alidns:DescribeDomainRecords", "alidns:DeleteDomainRecord", "alidns:DescribeDomains", "alidns:DescribeDomainRecordInfo", "alidns:AddDomainRecord", 
"alidns:SetDomainRecordStatus" ], "Resource": "*", "Effect": "Allow" }, { "Action": "bssapi:CreateInstance", "Resource": "*", "Effect": "Allow" }, { "Action": "ram:PassRole", "Resource": "*", "Effect": "Allow", "Condition": { "StringEquals": { "acs:Service": "ecs.aliyuncs.com" } } } ] } For more information about creating a RAM user and granting permissions, see Create a RAM user and Grant permissions to a RAM user in the Alibaba Cloud documentation. 2.2. Configuring the Cloud Credential Operator utility To assign RAM users and policies that provide long-lived RAM AccessKeys (AKs) for each in-cluster component, extract and prepare the Cloud Credential Operator (CCO) utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file by running the following command: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Preparing to update a cluster with manually maintained credentials 2.3. steps Install a cluster on Alibaba Cloud infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster quickly on Alibaba Cloud : You can install a cluster quickly by using the default configuration options. Installing a customized cluster on Alibaba Cloud : The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation .
|
[
"Default client type = access_key # Certification type: access_key access_key_id = LTAI5t8cefXKmt # Key 1 access_key_secret = wYx56mszAN4Uunfh # Secret",
"{ \"Version\": \"1\", \"Statement\": [ { \"Action\": [ \"tag:ListTagResources\", \"tag:UntagResources\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"vpc:DescribeVpcs\", \"vpc:DeleteVpc\", \"vpc:DescribeVSwitches\", \"vpc:DeleteVSwitch\", \"vpc:DescribeEipAddresses\", \"vpc:DescribeNatGateways\", \"vpc:ReleaseEipAddress\", \"vpc:DeleteNatGateway\", \"vpc:DescribeSnatTableEntries\", \"vpc:CreateSnatEntry\", \"vpc:AssociateEipAddress\", \"vpc:ListTagResources\", \"vpc:TagResources\", \"vpc:DescribeVSwitchAttributes\", \"vpc:CreateVSwitch\", \"vpc:CreateNatGateway\", \"vpc:DescribeRouteTableList\", \"vpc:CreateVpc\", \"vpc:AllocateEipAddress\", \"vpc:ListEnhanhcedNatGatewayAvailableZones\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ecs:ModifyInstanceAttribute\", \"ecs:DescribeSecurityGroups\", \"ecs:DeleteSecurityGroup\", \"ecs:DescribeSecurityGroupReferences\", \"ecs:DescribeSecurityGroupAttribute\", \"ecs:RevokeSecurityGroup\", \"ecs:DescribeInstances\", \"ecs:DeleteInstances\", \"ecs:DescribeNetworkInterfaces\", \"ecs:DescribeInstanceRamRole\", \"ecs:DescribeUserData\", \"ecs:DescribeDisks\", \"ecs:ListTagResources\", \"ecs:AuthorizeSecurityGroup\", \"ecs:RunInstances\", \"ecs:TagResources\", \"ecs:ModifySecurityGroupPolicy\", \"ecs:CreateSecurityGroup\", \"ecs:DescribeAvailableResource\", \"ecs:DescribeRegions\", \"ecs:AttachInstanceRamRole\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"pvtz:DescribeRegions\", \"pvtz:DescribeZones\", \"pvtz:DeleteZone\", \"pvtz:DeleteZoneRecord\", \"pvtz:BindZoneVpc\", \"pvtz:DescribeZoneRecords\", \"pvtz:AddZoneRecord\", \"pvtz:SetZoneRecordStatus\", \"pvtz:DescribeZoneInfo\", \"pvtz:DescribeSyncEcsHostTask\", \"pvtz:AddZone\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"slb:DescribeLoadBalancers\", \"slb:SetLoadBalancerDeleteProtection\", \"slb:DeleteLoadBalancer\", \"slb:SetLoadBalancerModificationProtection\", \"slb:DescribeLoadBalancerAttribute\", \"slb:AddBackendServers\", \"slb:DescribeLoadBalancerTCPListenerAttribute\", \"slb:SetLoadBalancerTCPListenerAttribute\", \"slb:StartLoadBalancerListener\", \"slb:CreateLoadBalancerTCPListener\", \"slb:ListTagResources\", \"slb:TagResources\", \"slb:CreateLoadBalancer\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"ram:ListResourceGroups\", \"ram:DeleteResourceGroup\", \"ram:ListPolicyAttachments\", \"ram:DetachPolicy\", \"ram:GetResourceGroup\", \"ram:CreateResourceGroup\", \"ram:DeleteRole\", \"ram:GetPolicy\", \"ram:DeletePolicy\", \"ram:ListPoliciesForRole\", \"ram:CreateRole\", \"ram:AttachPolicyToRole\", \"ram:GetRole\", \"ram:CreatePolicy\", \"ram:CreateUser\", \"ram:DetachPolicyFromRole\", \"ram:CreatePolicyVersion\", \"ram:DetachPolicyFromUser\", \"ram:ListPoliciesForUser\", \"ram:AttachPolicyToUser\", \"ram:CreateUser\", \"ram:GetUser\", \"ram:DeleteUser\", \"ram:CreateAccessKey\", \"ram:ListAccessKeys\", \"ram:DeleteAccessKey\", \"ram:ListUsers\", \"ram:ListPolicyVersions\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"oss:DeleteBucket\", \"oss:DeleteBucketTagging\", \"oss:GetBucketTagging\", \"oss:GetBucketCors\", \"oss:GetBucketPolicy\", \"oss:GetBucketLifecycle\", \"oss:GetBucketReferer\", \"oss:GetBucketTransferAcceleration\", \"oss:GetBucketLog\", \"oss:GetBucketWebSite\", \"oss:GetBucketInfo\", \"oss:PutBucketTagging\", \"oss:PutBucket\", \"oss:OpenOssService\", \"oss:ListBuckets\", \"oss:GetService\", \"oss:PutBucketACL\", \"oss:GetBucketLogging\", 
\"oss:ListObjects\", \"oss:GetObject\", \"oss:PutObject\", \"oss:DeleteObject\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": [ \"alidns:DescribeDomainRecords\", \"alidns:DeleteDomainRecord\", \"alidns:DescribeDomains\", \"alidns:DescribeDomainRecordInfo\", \"alidns:AddDomainRecord\", \"alidns:SetDomainRecordStatus\" ], \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"bssapi:CreateInstance\", \"Resource\": \"*\", \"Effect\": \"Allow\" }, { \"Action\": \"ram:PassRole\", \"Resource\": \"*\", \"Effect\": \"Allow\", \"Condition\": { \"StringEquals\": { \"acs:Service\": \"ecs.aliyuncs.com\" } } } ] }",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"ccoctl --help",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command."
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_alibaba/manually-creating-alibaba-ram
|
Chapter 4. MTR 1.2.5
|
Chapter 4. MTR 1.2.5 4.1. New features Migration Toolkit for Runtimes (MTR) 1.2.5 has the following new features: New ruleset for MicroProfile metrics replaces old ruleset A new ruleset for MicroProfile (MP) metrics replaces the old ruleset. (WINDUPRULE-1043) New ruleset for MicroProfile OpenTracing replaces the old ruleset A new ruleset for MicroProfile (MP) OpenTracing replaces the old ruleset. (WINDUPRULE-1044) 4.2. Known issues There are no major known issues in this Migration Toolkit for Runtimes (MTR) 1.2.5 release. For a complete list of all known issues, see the list of MTR 1.2.5 known issues in Jira. 4.3. Resolved issues Migration Toolkit for Runtimes (MTR) 1.2.5 resolves the following issues: CVE-2024-25710 commons-compress: Denial of service caused by an infinite loop A loop with an unreachable exit condition, meaning an Infinite Loop, vulnerability, was found in Apache Common Compress. This issue could have led to a denial of service. This issue affects Apache Commons Compress: from 1.3 through 1.25.0. Users are recommended to upgrade to MTR 1.2.5, which resolves this issue. For more details, see (CVE-2024-25710) . CVE-2024-26308 commons-compress: OutOfMemoryError An allocation of resources without limits or throttling vulnerability was found in Apache Commons Compress. This issue could lead to an out-of-memory error (OOM). This issue affects Apache Commons Compress, from 1.21 to 1.26. Users are recommended to upgrade to MTR 1.2.5, which resolves this issue. For more details, see (CVE-2024-26308) . CVE-2024-1300: A vulnerability in the Eclipse Vert.x toolkit causes a memory leak in TCP servers configured with TLS and SNI support A vulnerability in the Eclipse Vert.x toolkit causes a memory leak in Transmission Control Protocol (TCP) servers configured with TLS and SNI support. When processing an unknown Server Name Indication (SNI) server name assigned the default certificate instead of a mapped certificate, the Secure Sockets Layer (SSL) context is erroneously cached in the server name map, leading to memory exhaustion. This affects only TLS servers with SNI enabled. Users are recommended to upgrade to MTR 1.2.5, which resolves this issue. For more details, see (CVE-2024-1300) . For a complete list of all issues resolved in this release, see the list of MTR 1.2.5 resolved issues in Jira.
| null |
https://docs.redhat.com/en/documentation/migration_toolkit_for_runtimes/1.2/html/release_notes/mtr_1_2_5
|
5.3. Displaying Device-Specific Fencing Options
|
5.3. Displaying Device-Specific Fencing Options Use the following command to view the options for the specified STONITH agent. For example, the following command displays the options for the fence agent for APC over telnet/SSH. Warning For fence agents that provide a method option, a value of cycle is unsupported and should not be specified, as it may cause data corruption.
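Once the required options are known from pcs stonith describe, a device can be created with them. The following is a minimal sketch; the device name, address, credentials, and node names are placeholder values rather than recommendations.

```bash
# Illustrative sketch only; myapc, the address, credentials, and node names are placeholders.
pcs stonith describe fence_apc

pcs stonith create myapc fence_apc \
    ipaddr="apc.example.com" \
    login="apc-user" \
    passwd="apc-password" \
    pcmk_host_list="node1 node2"

# Confirm the new STONITH resource and its options.
pcs stonith show myapc
```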
|
[
"pcs stonith describe stonith_agent",
"pcs stonith describe fence_apc Stonith options for: fence_apc ipaddr (required): IP Address or Hostname login (required): Login Name passwd: Login password or passphrase passwd_script: Script to retrieve password cmd_prompt: Force command prompt secure: SSH connection port (required): Physical plug number or name of virtual machine identity_file: Identity file for ssh switch: Physical switch number on device inet4_only: Forces agent to use IPv4 addresses only inet6_only: Forces agent to use IPv6 addresses only ipport: TCP port to use for connection with device action (required): Fencing Action verbose: Verbose mode debug: Write debug information to given file version: Display version information and exit help: Display help and exit separator: Separator for CSV created by operation list power_timeout: Test X seconds for status change after ON/OFF shell_timeout: Wait X seconds for cmd prompt after issuing command login_timeout: Wait X seconds for cmd prompt after login power_wait: Wait X seconds after issuing ON/OFF delay: Wait X seconds before fencing is started retry_on: Count of attempts to retry power on"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-fencedevicespecific-HAAR
|
5.56. dvd+rw-tools
|
5.56. dvd+rw-tools 5.56.1. RHBA-2012:1320 - dvd+rw-tools bug fix update Updated dvd+rw-tools packages that fix one bug are now available for Red Hat Enterprise Linux 6. The dvd+rw-tools packages contain a collection of tools to master DVD+RW/+R media. Bug Fix BZ# 807474 Prior to this update, the growisofs utility wrote chunks of 32KB and reported an error during the last chunk when burning ISO image files that were not aligned to 32KB. This update allows the written chunk to be smaller than a multiple of 16 blocks. All users of dvd+rw-tools are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/dvd-rw-tools
|
Chapter 2. Logging 6.1
|
Chapter 2. Logging 6.1 2.1. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Container Platform releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Container Platform documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Container Platform. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant A "bring your own" (BYO) log collector configuration Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 2.1.1. Supported API custom resource definitions The following table describes the supported Logging APIs. Table 2.1. Logging API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported from 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported from 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported from 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported from 5.7 LogFileMetricExporter LogFileMetricExporter.logging.openshift.io/v1alpha1 Supported from 5.8 ClusterLogForwarder clusterlogforwarder.observability.openshift.io/v1 Supported from 6.0 2.1.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The collector configuration file The collector daemonset Explicitly unsupported cases include: Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 2.1.3. 
Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 2.1.4. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Container Platform and logging. 2.1.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 2.1.4.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. 
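In addition to the logging-specific image used in the following procedure, Red Hat Support usually benefits from the general OpenShift Container Platform diagnostics mentioned above. A minimal sketch of collecting and packaging the default must-gather data, where must-gather.local.<directory_suffix> is a placeholder for the directory that the command creates on your cluster: USD oc adm must-gather USD tar -cvaf must-gather-cluster.tar.gz must-gather.local.<directory_suffix>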
Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal . 2.2. Logging 6.1 2.2.1. Logging 6.1.3 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.3 . 2.2.1.1. Bug Fixes Before this update, when using the new 1x.pico size with the Loki Operator, the PodDisruptionBudget created for the Ingester pod allowed Kubernetes to evict two of the three Ingester pods. With this update, the Operator now creates a PodDisruptionBudget that allows eviction of only a single Ingester pod. ( LOG-6693 ) Before this update, the Operator did not support templating of syslog facility and severity level , which was consistent with the rest of the API. Instead, the Operator relied upon the 5.x API, which is no longer supported. With this update, the Operator supports templating by adding the required validation to the API and rejecting resources that do not match the required format. ( LOG-6788 ) Before this update, empty OTEL tuning configuration caused a validation error. With this update, the validation rules allow empty OTEL tuning configurations. ( LOG-6532 ) 2.2.1.2. CVEs CVE-2020-11023 CVE-2024-9287 CVE-2024-12797 2.2.2. Logging 6.1.2 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.2 . 2.2.2.1. New Features and Enhancements This enhancement adds OTel semantic stream labels to the lokiStack output so that you can query logs by using both ViaQ and OTel stream labels. ( LOG-6579 ) 2.2.2.2. Bug Fixes Before this update, the collector alerting rules contained summary and message fields. With this update, the collector alerting rules contain summary and description fields. ( LOG-6126 ) Before this update, the collector metrics dashboard could get removed after an Operator upgrade due to a race condition during the transition from the old to the new pod deployment. With this update, labels are added to the dashboard ConfigMap to identify the upgraded deployment as the current owner so that it will not be removed. ( LOG-6280 ) Before this update, when you included infrastructure namespaces in application inputs, their log_type would be set to application . With this update, the log_type of infrastructure namespaces included in application inputs is set to infrastructure . ( LOG-6373 ) Before this update, the Cluster Logging Operator used a cached client to fetch the SecurityContextConstraint cluster resource, which could result in an error when the cache is invalid. With this update, the Operator now always retrieves data from the API server instead of using a cache. ( LOG-6418 ) Before this update, the logging must-gather did not collect resources such as UIPlugin , ClusterLogForwarder , LogFileMetricExporter , and LokiStack . 
With this update, the must-gather now collects all of these resources and places them in their respective namespace directory instead of the cluster-logging directory. ( LOG-6422 ) Before this update, the Vector startup script attempted to delete buffer lock files during startup. With this update, the Vector startup script no longer attempts to delete buffer lock files during startup. ( LOG-6506 ) Before this update, the API documentation incorrectly claimed that lokiStack outputs would default the target namespace, which could prevent the collector from writing to that output. With this update, this claim has been removed from the API documentation and the Cluster Logging Operator now validates that a target namespace is present. ( LOG-6573 ) Before this update, the Cluster Logging Operator could deploy the collector with output configurations that were not referenced by any inputs. With this update, a validation check for the ClusterLogForwarder resource prevents the Operator from deploying the collector. ( LOG-6585 ) 2.2.2.3. CVEs CVE-2019-12900 2.2.3. Logging 6.1.1 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.1 . 2.2.3.1. New Features and Enhancements With this update, the Loki Operator supports configuring the workload identity federation on the Google Cloud Platform (GCP) by using the Cluster Credential Operator (CCO) in OpenShift Container Platform 4.17 or later. ( LOG-6420 ) 2.2.3.2. Bug Fixes Before this update, the collector was discarding longer audit log messages with the following error message: Internal log [Found line that exceeds max_line_bytes; discarding.] . With this update, the discarding of longer audit messages is avoided by increasing the audit configuration thresholds: The maximum line size, max_line_bytes , is 3145728 bytes. The maximum number of bytes read during a read cycle, max_read_bytes , is 262144 bytes. ( LOG-6379 ) Before this update, an input receiver service was repeatedly created and deleted, causing issues with mounting the TLS secrets. With this update, the service is created once and only deleted if it is not defined in the ClusterLogForwarder custom resource. ( LOG-6383 ) Before this update, pipeline validation might have entered an infinite loop if a name was a substring of another name. With this update, stricter name equality checks prevent the infinite loop. ( LOG-6405 ) Before this update, the collector alerting rules included the summary and message fields. With this update, the collector alerting rules include the summary and description fields. ( LOG-6407 ) Before this update, setting up the custom audit inputs in the ClusterLogForwarder custom resource with configured LokiStack output caused errors due to the nil pointer dereference. With this update, the Operator performs the nil checks, preventing such errors. ( LOG-6449 ) Before this update, the ValidLokistackOTLPOutputs condition appeared in the status of the ClusterLogForwarder custom resource even when the output type is not LokiStack . With this update, the ValidLokistackOTLPOutputs condition is removed, and the validation messages for the existing output conditions are corrected. ( LOG-6469 ) Before this update, the collector did not correctly mount the /var/log/oauth-server/ path, which prevented the collection of the audit logs. With this update, the volume mount is added, and the audit logs are collected as expected. 
( LOG-6484 ) Before this update, the must-gather script of the Red Hat OpenShift Logging Operator might have failed to gather the LokiStack data. With this update, the must-gather script is fixed, and the LokiStack data is gathered reliably. ( LOG-6498 ) Before this update, the collector did not correctly mount the oauth-apiserver audit log file. As a result, such audit logs were not collected. With this update, the volume mount is correctly mounted, and the logs are collected as expected. ( LOG-6533 ) 2.2.3.3. CVEs CVE-2019-12900 CVE-2024-2511 CVE-2024-3596 CVE-2024-4603 CVE-2024-4741 CVE-2024-5535 CVE-2024-10963 CVE-2024-50602 2.2.4. Logging 6.1.0 Release Notes This release includes Logging for Red Hat OpenShift Bug Fix Release 6.1.0 . 2.2.4.1. New Features and Enhancements 2.2.4.1.1. Log Collection This enhancement adds the source iostream to the attributes sent from collected container logs. The value is set to either stdout or stderr based on how the collector received it. ( LOG-5292 ) With this update, the default memory limit for the collector increases from 1024 Mi to 2048 Mi. Users should adjust resource limits based on their cluster's specific needs and specifications. ( LOG-6072 ) With this update, users can now set the syslog output delivery mode of the ClusterLogForwarder CR to either AtLeastOnce or AtMostOnce. ( LOG-6355 ) 2.2.4.1.2. Log Storage With this update, the new 1x.pico LokiStack size supports clusters with fewer workloads and lower log volumes (up to 50GB/day). ( LOG-5939 ) 2.2.4.2. Technology Preview Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . With this update, OpenTelemetry logs can now be forwarded using the OTel (OpenTelemetry) data model to a Red Hat Managed LokiStack instance. To enable this feature, add the observability.openshift.io/tech-preview-otlp-output: "enabled" annotation to your ClusterLogForwarder configuration. For additional configuration information, see OTLP Forwarding . With this update, a dataModel field has been added to the lokiStack output specification. Set the dataModel to Otel to configure log forwarding using the OpenTelemetry data format. The default is set to Viaq . For information about data mapping see OTLP Specification . 2.2.4.3. Bug Fixes None. 2.2.4.4. CVEs CVE-2024-6119 CVE-2024-6232 2.3. Logging 6.1 The ClusterLogForwarder custom resource (CR) is the central configuration point for log collection and forwarding. 2.3.1. Inputs and outputs Inputs specify the sources of logs to be forwarded. Logging provides the following built-in input types that select logs from different parts of your cluster: application receiver infrastructure audit You can also define custom inputs based on namespaces or pod labels to fine-tune log selection. Outputs define the destinations where logs are sent. Each output type has its own set of configuration options, allowing you to customize the behavior and authentication settings. 2.3.2. 
Receiver input type The receiver input type enables the Logging system to accept logs from external sources. It supports two formats for receiving logs: http and syslog . The ReceiverSpec field defines the configuration for a receiver input. 2.3.3. Pipelines and filters Pipelines determine the flow of logs from inputs to outputs. A pipeline consists of one or more input refs, output refs, and optional filter refs. You can use filters to transform or drop log messages within a pipeline. The order of filters matters, as they are applied sequentially, and earlier filters can prevent log messages from reaching later stages. 2.3.4. Operator behavior The Cluster Logging Operator manages the deployment and configuration of the collector based on the managementState field of the ClusterLogForwarder resource: When set to Managed (default), the Operator actively manages the logging resources to match the configuration defined in the spec. When set to Unmanaged , the Operator does not take any action, allowing you to manually manage the logging components. 2.3.5. Validation Logging includes extensive validation rules and default values to ensure a smooth and error-free configuration experience. The ClusterLogForwarder resource enforces validation checks on required fields, dependencies between fields, and the format of input values. Default values are provided for certain fields, reducing the need for explicit configuration in common scenarios. 2.3.6. Quick start OpenShift Logging supports two data models: ViaQ (General Availability) OpenTelemetry (Technology Preview) You can select either of these data models based on your requirement by configuring the lokiStack.dataModel field in the ClusterLogForwarder . ViaQ is the default data model when forwarding logs to LokiStack. Note In future releases of OpenShift Logging, the default data model will change from ViaQ to OpenTelemetry. 2.3.6.1. Quick start with ViaQ To use the default ViaQ data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see Secrets and TLS Configuration. Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. 
Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit), but by default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack Note The dataModel field is optional and left unset ( dataModel: "" ) by default. This allows the Cluster Logging Operator (CLO) to automatically select a data model. Currently, the CLO defaults to the ViaQ model when the field is unset, but this will change in future releases. Specifying dataModel: ViaQ ensures the configuration remains compatible if the default changes. Verification Verify that logs are visible in the Log section of the Observe tab in the OpenShift web console. 2.3.6.2. Quick start with OpenTelemetry Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure OTLP ingestion and enable the OpenTelemetry data model, follow these steps: Prerequisites Cluster administrator permissions Procedure Install the Red Hat OpenShift Logging Operator, Loki Operator, and Cluster Observability Operator (COO) from OperatorHub. Create a LokiStack custom resource (CR) in the openshift-logging namespace: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging Note Ensure that the logging-loki-s3 secret is created beforehand. The contents of this secret vary depending on the object storage in use. For more information, see "Secrets and TLS Configuration". 
Create a service account for the collector: USD oc create sa collector -n openshift-logging Allow the collector's service account to write data to the LokiStack CR: USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector Note The ClusterRole resource is created automatically during the Cluster Logging Operator installation and does not need to be created manually. Allow the collector's service account to collect logs: USD oc project openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z collector USD oc adm policy add-cluster-role-to-user collect-audit-logs -z collector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector Note The example binds the collector to all three roles (application, infrastructure, and audit). By default, only application and infrastructure logs are collected. To collect audit logs, update your ClusterLogForwarder configuration to include them. Assign roles based on the specific log types required for your environment. Create a UIPlugin CR to enable the Log section in the Observe tab: apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki Create a ClusterLogForwarder CR to configure log forwarding: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp 1 Use the annotation to enable the Otel data model, which is a Technology Preview feature. 2 Define the output type as lokiStack . 3 Specifies the OpenTelemetry data model. Note You cannot use lokiStack.labelKeys when dataModel is Otel . To achieve similar functionality when dataModel is Otel , refer to "Configuring LokiStack for OTLP data ingestion". Verification Verify that OTLP is functioning correctly by going to Observe OpenShift Logging LokiStack Writes in the OpenShift web console, and checking Distributor - Structured Metadata . 2.4. Installing Logging OpenShift Container Platform Operators use custom resources (CRs) to manage applications and their components. You provide high-level configuration and settings through the CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the logic of the Operator. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs to generate CRs. To get started with logging, you must install the following Operators: Loki Operator to manage your log store. Red Hat OpenShift Logging Operator to manage log collection and forwarding. Cluster Observability Operator (COO) to manage visualization. You can use either the OpenShift Container Platform web console or the OpenShift Container Platform CLI to install or configure logging. Important You must configure the Red Hat OpenShift Logging Operator after the Loki Operator. 2.4.1. 
Installation by using the CLI The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the CLI. 2.4.1.1. Installing the Loki Operator by using the CLI Install Loki Operator on your OpenShift Container Platform cluster to manage the log store Loki by using the OpenShift Container Platform command-line interface (CLI). You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation. Procedure Create a Namespace object for Loki Operator: Example Namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: "true" 2 1 You must specify openshift-operators-redhat as the namespace. To enable monitoring for the operator, configure Cluster Monitoring Operator to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community operators, which are untrusted and could publish a metric with the same name as an OpenShift Container Platform metric, causing conflicts. 2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. Apply the Namespace object by running the following command: USD oc apply -f <filename>.yaml Create an OperatorGroup object. Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default 1 You must specify openshift-operators-redhat as the namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Loki Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace 1 You must specify openshift-operators-redhat as the namespace. 2 Specify stable-6.<y> as the channel. 3 If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. 4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a namespace object for deploying the LokiStack: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: "true" 2 1 The openshift-logging namespace is dedicated for all logging workloads. 2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace.
Apply the namespace object by running the following command: USD oc apply -f <filename>.yaml Create a secret with the credentials to access the object storage. For example, create a secret to access Amazon Web Services (AWS) s3. Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 Use the name logging-loki-s3 to match the name used in LokiStack. 2 For the contents of the secret see the Loki object storage section. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Apply the Secret object by running the following command: USD oc apply -f <filename>.yaml Create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8 1 Use the name logging-loki . 2 You must specify openshift-logging as the namespace. 3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Additionally, 1x.pico is supported starting with logging 6.1. 4 For new installations this date should be set to the equivalent of "yesterday", as this will be the date from when the schema takes effect. 5 Specify the name of your log store secret. 6 Specify the corresponding storage type. 7 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. 8 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. Apply the LokiStack CR object by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 2.4.1.2. Installing Red Hat OpenShift Logging Operator by using the CLI Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store by using the OpenShift CLI ( oc ). Prerequisites You have administrator permissions. You installed the OpenShift CLI ( oc ). You installed and configured Loki Operator. You have created the openshift-logging namespace. 
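You can optionally confirm these prerequisites before you continue. A minimal sketch, assuming the Loki Operator was installed in the openshift-operators-redhat namespace as described in the previous section; the exact ClusterServiceVersion name varies by release: USD oc get csv -n openshift-operators-redhat | grep loki-operator USD oc get namespace openshift-logging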
Procedure Create an OperatorGroup object: Example OperatorGroup object apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default 1 You must specify openshift-logging as the namespace. Apply the OperatorGroup object by running the following command: USD oc apply -f <filename>.yaml Create a Subscription object for Red Hat OpenShift Logging Operator: Example Subscription object apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace 1 You must specify openshift-logging as the namespace. 2 Specify stable-6.<y> as the channel. 3 If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. 4 Specify redhat-operators as the value. If your OpenShift Container Platform cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object that you created when you configured Operator Lifecycle Manager (OLM). Apply the Subscription object by running the following command: USD oc apply -f <filename>.yaml Create a service account to be used by the log collector: USD oc create sa logging-collector -n openshift-logging Assign the necessary permissions to the service account for the collector to be able to collect and forward logs. In this example, the collector is provided permissions to collect logs from both infrastructure and application logs. USD oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging Create a ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out 1 You must specify the openshift-logging namespace. 2 Specify the name of the service account created before. 3 Select the lokiStack output type to send logs to the LokiStack instance. 4 Point the ClusterLogForwarder to the LokiStack instance created earlier. 5 Select the log output types you want to send to the LokiStack instance. 
Apply the ClusterLogForwarder CR object by running the following command: USD oc apply -f <filename>.yaml Verification Verify the installation by running the following command: USD oc get pods -n openshift-logging Example output USD oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m 2.4.2. Installation by using the web console The following sections describe installing the Loki Operator and the Red Hat OpenShift Logging Operator by using the web console. 2.4.2.1. Installing Logging by using the web console Install Loki Operator on your OpenShift Container Platform cluster to manage the log store Loki from the OperatorHub by using the OpenShift Container Platform web console. You can deploy and configure the Loki log store by reconciling the resource LokiStack with the Loki Operator. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation). Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install . Important The Community Loki Operator is not supported by Red Hat. Select stable-x.y as the Update channel . The Loki Operator must be deployed to the global Operator group namespace openshift-operators-redhat , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Note An Operator might display a Failed status before the installation completes. If the Operator install completes with an InstallSucceeded message, refresh the page. While the Operator installs, create the namespace to which the log store will be deployed. Click + in the top right of the screen to access the Import YAML page. Add the YAML definition for the openshift-logging namespace: Example namespace object apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: "true" 2 1 The openshift-logging namespace is dedicated for all logging workloads. 
2 A string value that specifies the label, as shown, to ensure that cluster monitoring scrapes the openshift-logging namespace. Click Create . Create a secret with the credentials to access the object storage. Click + in the top right of the screen to access the Import YAML page. Add the YAML definition for the secret. For example, create a secret to access Amazon Web Services (AWS) s3: Example Secret object apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1 1 Note down the name used for the secret logging-loki-s3 to use it later when creating the LokiStack resource. 2 Set the namespace to openshift-logging as that will be the namespace used to deploy LokiStack . 3 For the contents of the secret see the Loki object storage section. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Click Create . Navigate to the Installed Operators page. Select the Loki Operator under the Provided APIs , find the LokiStack resource and click Create Instance . Select YAML view , and then use the following template to create a LokiStack CR: Example LokiStack CR apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: "<yyyy>-<mm>-<dd>" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7 1 Use the name logging-loki . 2 You must specify openshift-logging as the namespace. 3 Specify the deployment size. Supported size options for production instances of Loki are 1x.extra-small , 1x.small , or 1x.medium . Additionally, 1x.pico is supported starting with logging 6.1. 4 Specify the name of your log store secret. 5 Specify the corresponding storage type. 6 Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. You can list the available storage classes for your cluster by using the oc get storageclasses command. 7 The openshift-logging mode is the default tenancy mode where a tenant is created for log types, such as audit, infrastructure, and application. This enables access control for individual users and user groups to different log streams. Click Create . Verification In the LokiStack tab verify that you see your LokiStack instance. In the Status column, verify that you see the message Condition: Ready with a green checkmark. 2.4.2.2. Installing Red Hat OpenShift Logging Operator by using the web console Install Red Hat OpenShift Logging Operator on your OpenShift Container Platform cluster to collect and forward logs to a log store from the OperatorHub by using the OpenShift Container Platform web console. Prerequisites You have administrator permissions. You have access to the OpenShift Container Platform web console. You installed and configured Loki Operator. Procedure In the OpenShift Container Platform web console Administrator perspective, go to Operators OperatorHub . Type Red Hat OpenShift Logging Operator in the Filter by keyword field. Click Red Hat OpenShift Logging Operator in the list of available Operators, and then click Install .
Select stable-x.y as the Update channel . The latest version is already selected in the Version field. The Red Hat OpenShift Logging Operator must be deployed to the logging namespace openshift-logging , so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it will be created for you. Select Enable Operator-recommended cluster monitoring on this namespace. This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-logging namespace. For Update approval select Automatic , then click Install . If the approval strategy in the subscription is set to Automatic , the update process initiates as soon as a new operator version is available in the selected channel. If the approval strategy is set to Manual , you must manually approve pending updates. Note An Operator might display a Failed status before the installation completes. If the operator installation completes with an InstallSucceeded message, refresh the page. While the operator installs, create the service account that will be used by the log collector to collect the logs. Click the + in the top right of the screen to access the Import YAML page. Enter the YAML definition for the service account. Example ServiceAccount object apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2 1 Note down the name used for the service account logging-collector to use it later when creating the ClusterLogForwarder resource. 2 Set the namespace to openshift-logging because that is the namespace for deploying the ClusterLogForwarder resource. Click the Create button. Create the ClusterRoleBinding objects to grant the necessary permissions to the log collector for accessing the logs that you want to collect and to write the log store, for example infrastructure and application logs. Click the + in the top right of the screen to access the Import YAML page. Enter the YAML definition for the ClusterRoleBinding resources. Example ClusterRoleBinding resources apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging 1 The cluster role to allow the log collector to write logs to LokiStack. 2 The cluster role to allow the log collector to collect logs from applications. 3 The cluster role to allow the log collector to collect logs from infrastructure. Click the Create button. Go to the Operators Installed Operators page. Select the operator and click the All instances tab. After granting the necessary permissions to the service account, navigate to the Installed Operators page. 
Select the Red Hat OpenShift Logging Operator under the Provided APIs , find the ClusterLogForwarder resource and click Create Instance . Select YAML view , and then use the following template to create a ClusterLogForwarder CR: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out 1 You must specify openshift-logging as the namespace. 2 Specify the name of the service account created earlier. 3 Select the lokiStack output type to send logs to the LokiStack instance. 4 Point the ClusterLogForwarder to the LokiStack instance created earlier. 5 Select the log output types you want to send to the LokiStack instance. Click Create . Verification In the ClusterLogForwarder tab verify that you see your ClusterLogForwarder instance. In the Status column, verify that you see the messages: Condition: observability.openshift.io/Authorized observability.openshift.io/Valid, Ready Additional resources About OVN-Kubernetes network policy 2.5. Configuring log forwarding The ClusterLogForwarder (CLF) allows users to configure forwarding of logs to various destinations. It provides a flexible way to select log messages from different sources, send them through a pipeline that can transform or filter them, and forward them to one or more outputs. Key Functions of the ClusterLogForwarder Selects log messages using inputs Forwards logs to external destinations using outputs Filters, transforms, and drops log messages using filters Defines log forwarding pipelines connecting inputs, filters and outputs 2.5.1. Setting up log collection This release of Cluster Logging requires administrators to explicitly grant log collection permissions to the service account associated with ClusterLogForwarder . This was not required in releases for the legacy logging scenario consisting of a ClusterLogging and, optionally, a ClusterLogForwarder.logging.openshift.io resource. The Red Hat OpenShift Logging Operator provides collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles, which enable the collector to collect audit logs, application logs, and infrastructure logs respectively. Setup log collection by binding the required cluster roles to your service account. 2.5.1.1. Legacy service accounts To use the existing legacy service account logcollector , create the following ClusterRoleBinding : USD oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector USD oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector Additionally, create the following ClusterRoleBinding if collecting audit logs: USD oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector 2.5.1.2. Creating service accounts Prerequisites The Red Hat OpenShift Logging Operator is installed in the openshift-logging namespace. You have administrator permissions. Procedure Create a service account for the collector. 
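For example, a service account named collector in the openshift-logging namespace (both names are placeholders that you can adapt to your environment) can be created with the following command: USD oc create sa collector -n openshift-logging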
If you want to write logs to storage that requires a token for authentication, you must include a token in the service account. Bind the appropriate cluster roles to the service account: Example binding command USD oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name> 2.5.1.2.1. Cluster Role Binding for your Service Account The role_binding.yaml file binds the ClusterLogging operator's ClusterRole to a specific ServiceAccount, allowing it to manage Kubernetes resources cluster-wide. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8 1 roleRef: References the ClusterRole to which the binding applies. 2 apiGroup: Indicates the RBAC API group, specifying that the ClusterRole is part of Kubernetes' RBAC system. 3 kind: Specifies that the referenced role is a ClusterRole, which applies cluster-wide. 4 name: The name of the ClusterRole being bound to the ServiceAccount, here cluster-logging-operator. 5 subjects: Defines the entities (users or service accounts) that are being granted the permissions from the ClusterRole. 6 kind: Specifies that the subject is a ServiceAccount. 7 Name: The name of the ServiceAccount being granted the permissions. 8 namespace: Indicates the namespace where the ServiceAccount is located. 2.5.1.2.2. Writing application logs The write-application-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to write application logs to the Loki logging application. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions granted by this ClusterRole. 2 apiGroups: Refers to the API group loki.grafana.com, which relates to the Loki logging system. 3 loki.grafana.com: The API group for managing Loki-related resources. 4 resources: The resource type that the ClusterRole grants permission to interact with. 5 application: Refers to the application resources within the Loki logging system. 6 resourceNames: Specifies the names of resources that this role can manage. 7 logs: Refers to the log resources that can be created. 8 verbs: The actions allowed on the resources. 9 create: Grants permission to create new logs in the Loki system. 2.5.1.2.3. Writing audit logs The write-audit-logs-clusterrole.yaml file defines a ClusterRole that grants permissions to create audit logs in the Loki logging system. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Defines the permissions granted by this ClusterRole. 2 apiGroups: Specifies the API group loki.grafana.com. 3 loki.grafana.com: The API group responsible for Loki logging resources. 4 resources: Refers to the resource type this role manages, in this case, audit. 5 audit: Specifies that the role manages audit logs within Loki. 6 resourceNames: Defines the specific resources that the role can access. 7 logs: Refers to the logs that can be managed under this role. 8 verbs: The actions allowed on the resources. 
9 create: Grants permission to create new audit logs. 2.5.1.2.4. Writing infrastructure logs The write-infrastructure-logs-clusterrole.yaml file defines a ClusterRole that grants permission to create infrastructure logs in the Loki logging system. Sample YAML apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Specifies the API group for Loki-related resources. 3 loki.grafana.com: The API group managing the Loki logging system. 4 resources: Defines the resource type that this role can interact with. 5 infrastructure: Refers to infrastructure-related resources that this role manages. 6 resourceNames: Specifies the names of resources this role can manage. 7 logs: Refers to the log resources related to infrastructure. 8 verbs: The actions permitted by this role. 9 create: Grants permission to create infrastructure logs in the Loki system. 2.5.1.2.5. ClusterLogForwarder editor role The clusterlogforwarder-editor-role.yaml file defines a ClusterRole that allows users to manage ClusterLogForwarders in OpenShift. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13 1 rules: Specifies the permissions this ClusterRole grants. 2 apiGroups: Refers to the OpenShift-specific API group. 3 observability.openshift.io: The API group for managing observability resources, like logging. 4 resources: Specifies the resources this role can manage. 5 clusterlogforwarders: Refers to the log forwarding resources in OpenShift. 6 verbs: Specifies the actions allowed on the ClusterLogForwarders. 7 create: Grants permission to create new ClusterLogForwarders. 8 delete: Grants permission to delete existing ClusterLogForwarders. 9 get: Grants permission to retrieve information about specific ClusterLogForwarders. 10 list: Allows listing all ClusterLogForwarders. 11 patch: Grants permission to partially modify ClusterLogForwarders. 12 update: Grants permission to update existing ClusterLogForwarders. 13 watch: Grants permission to monitor changes to ClusterLogForwarders. 2.5.2. Modifying log level in collector To modify the log level in the collector, you can set the observability.openshift.io/log-level annotation to trace , debug , info , warn , error , and off . Example log level annotation apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug # ... 2.5.3. Managing the Operator The ClusterLogForwarder resource has a managementState field that controls whether the operator actively manages its resources or leaves them Unmanaged: Managed (default) The operator will drive the logging resources to match the desired state in the CLF spec. Unmanaged The operator will not take any action related to the logging components. This allows administrators to temporarily pause log forwarding by setting managementState to Unmanaged . 2.5.4. Structure of the ClusterLogForwarder The CLF has a spec section that contains the following key components: Inputs Select log messages to be forwarded.
Built-in input types application , infrastructure and audit forward logs from different parts of the cluster. You can also define custom inputs. Outputs Define destinations to forward logs to. Each output has a unique name and type-specific configuration. Pipelines Define the path logs take from inputs, through filters, to outputs. Pipelines have a unique name and consist of a list of input, output and filter names. Filters Transform or drop log messages in the pipeline. Users can define filters that match certain log fields and drop or modify the messages. Filters are applied in the order specified in the pipeline. 2.5.4.1. Inputs Inputs are configured in an array under spec.inputs . There are three built-in input types: application Selects logs from all application containers, excluding those in infrastructure namespaces. infrastructure Selects logs from nodes and from infrastructure components running in the following namespaces: default kube openshift Containing the kube- or openshift- prefix audit Selects logs from the OpenShift API server audit logs, Kubernetes API server audit logs, ovn audit logs, and node audit logs from auditd. Users can define custom inputs of type application that select logs from specific namespaces or using pod labels. 2.5.4.2. Outputs Outputs are configured in an array under spec.outputs . Each output must have a unique name and a type. Supported types are: azureMonitor Forwards logs to Azure Monitor. cloudwatch Forwards logs to AWS CloudWatch. googleCloudLogging Forwards logs to Google Cloud Logging. http Forwards logs to a generic HTTP endpoint. kafka Forwards logs to a Kafka broker. loki Forwards logs to a Loki logging backend. lokistack Forwards logs to the logging supported combination of Loki and web proxy with OpenShift Container Platform authentication integration. LokiStack's proxy uses OpenShift Container Platform authentication to enforce multi-tenancy otlp Forwards logs using the OpenTelemetry Protocol. splunk Forwards logs to Splunk. syslog Forwards logs to an external syslog server. Each output type has its own configuration fields. 2.5.5. Configuring OTLP output Cluster administrators can use the OpenTelemetry Protocol (OTLP) output to collect and forward logs to OTLP receivers. The OTLP output uses the specification defined by the OpenTelemetry Observability framework to send data over HTTP with JSON encoding. Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
Procedure Create or edit a ClusterLogForwarder custom resource (CR) to enable forwarding using OTLP by adding the following annotation: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: "enabled" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp 1 Use this annotation to enable the OpenTelemetry Protocol (OTLP) output, which is a Technology Preview feature. 2 This URL must be absolute and is a placeholder for the OTLP endpoint where logs are sent. Note The OTLP output uses the OpenTelemetry data model, which is different from the ViaQ data model that is used by other output types. It adheres to the OTLP using OpenTelemetry Semantic Conventions defined by the OpenTelemetry Observability framework. 2.5.5.1. Pipelines Pipelines are configured in an array under spec.pipelines . Each pipeline must have a unique name and consists of: inputRefs Names of inputs whose logs should be forwarded to this pipeline. outputRefs Names of outputs to send logs to. filterRefs (optional) Names of filters to apply. The order of filterRefs matters, as they are applied sequentially. Earlier filters can drop messages that will not be processed by later filters. 2.5.5.2. Filters Filters are configured in an array under spec.filters . They can match incoming log messages based on the value of structured fields and modify or drop them. Administrators can configure the following types of filters: 2.5.5.3. Enabling multi-line exception detection Enables multi-line error detection of container logs. Warning Enabling this feature could have performance implications and may require additional computing resources or alternate logging solutions. Log parsers often incorrectly identify separate lines of the same exception as separate exceptions. This leads to extra log entries and an incomplete or inaccurate view of the traced information. Example java exception java.lang.NullPointerException: Cannot invoke "String.toString()" because "<param1>" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10) To enable logging to detect multi-line exceptions and reassemble them into a single log entry, ensure that the ClusterLogForwarder Custom Resource (CR) contains a detectMultilineErrors field under the .spec.filters . Example ClusterLogForwarder CR apiVersion: "observability.openshift.io/v1" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name> 2.5.5.3.1. Details When log messages appear as a consecutive sequence forming an exception stack trace, they are combined into a single, unified log record. The first log message's content is replaced with the concatenated content of all the message fields in the sequence. The collector supports the following languages: Java JS Ruby Python Golang PHP Dart 2.5.5.4. 
Configuring content filters to drop unwanted log records When the drop filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector drops unwanted log records that match the specified configuration. Procedure Add a configuration for a filter to the filters spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to drop log records based on regular expressions: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels."foo-bar/baz" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: "my-pod" 6 pipelines: - name: <pipeline_name> 7 filterRefs: ["<filter_name>"] # ... 1 Specifies the type of filter. The drop filter drops log records that match the filter configuration. 2 Specifies configuration options for applying the drop filter. 3 Specifies the configuration for tests that are used to evaluate whether a log record is dropped. If all the conditions specified for a test are true, the test passes and the log record is dropped. When multiple tests are specified for the drop filter configuration, if any of the tests pass, the record is dropped. If there is an error evaluating a condition, for example, the field is missing from the log record being evaluated, that condition evaluates to false. 4 Specifies a dot-delimited field path, which is a path to a field in the log record. The path can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . You can include multiple field paths in a single test configuration, but they must all evaluate to true for the test to pass and the drop filter to be applied. 5 Specifies a regular expression. If log records match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 6 Specifies a regular expression. If log records do not match this regular expression, they are dropped. You can set either the matches or notMatches condition for a single field path, but not both. 7 Specifies the pipeline that the drop filter is applied to. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml Additional examples The following additional example shows how you can configure the drop filter to only keep higher priority log records: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: "(?i)critical|error" - field: .level matches: "info|warning" # ... In addition to including multiple field paths in a single test configuration, you can also include additional tests that are treated as OR checks. In the following example, records are dropped if either test configuration evaluates to true. However, for the second test configuration, both field specs must be true for it to be evaluated to true: apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... 
spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: "^open" - test: - field: .log_type matches: "application" - field: .kubernetes.pod_name notMatches: "my-pod" # ... 2.5.5.5. Overview of API audit filter OpenShift API servers generate audit events for each API call, detailing the request, response, and the identity of the requester, leading to large volumes of data. The API Audit filter uses rules to enable the exclusion of non-essential events and the reduction of event size, facilitating a more manageable audit trail. Rules are checked in order, and checking stops at the first match. The amount of data that is included in an event is determined by the value of the level field: None : The event is dropped. Metadata : Audit metadata is included, request and response bodies are removed. Request : Audit metadata and the request body are included, the response body is removed. RequestResponse : All data is included: metadata, request body and response body. The response body can be very large. For example, oc get pods -A generates a response body containing the YAML description of every pod in the cluster. The ClusterLogForwarder custom resource (CR) uses the same format as the standard Kubernetes audit policy , while providing the following additional functions: Wildcards Names of users, groups, namespaces, and resources can have a leading or trailing * asterisk character. For example, the namespace openshift-\* matches openshift-apiserver or openshift-authentication . Resource \*/status matches Pod/status or Deployment/status . Default Rules Events that do not match any rule in the policy are filtered as follows: Read-only system events such as get , list , and watch are dropped. Service account write events that occur within the same namespace as the service account are dropped. All other events are forwarded, subject to any configured rate limits. To disable these defaults, either end your rules list with a rule that has only a level field or add an empty rule. Omit Response Codes A list of integer status codes to omit. You can drop events based on the HTTP status code in the response by using the OmitResponseCodes field, which lists HTTP status codes for which no events are created. The default value is [404, 409, 422, 429] . If the value is an empty list, [] , then no status codes are omitted. The ClusterLogForwarder CR audit policy acts in addition to the OpenShift Container Platform audit policy. The ClusterLogForwarder CR audit filter changes what the log collector forwards and provides the ability to filter by verb, user, group, namespace, or resource. You can create multiple filters to send different summaries of the same audit stream to different places. For example, you can send a detailed stream to the local cluster log store and a less detailed stream to a remote site. Note You must have a cluster role collect-audit-logs to collect the audit logs. The following example provided is intended to illustrate the range of rules possible in an audit policy and is not a recommended configuration. 
Example audit policy apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - "RequestReceived" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: "" resources: ["pods"] # Log "pods/log", "pods/status" at Metadata level - level: Metadata resources: - group: "" resources: ["pods/log", "pods/status"] # Don't log requests to a configmap called "controller-leader" - level: None resources: - group: "" resources: ["configmaps"] resourceNames: ["controller-leader"] # Don't log watch requests by the "system:kube-proxy" on endpoints or services - level: None users: ["system:kube-proxy"] verbs: ["watch"] resources: - group: "" # core API group resources: ["endpoints", "services"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: ["system:authenticated"] nonResourceURLs: - "/api*" # Wildcard matching. - "/version" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: "" # core API group resources: ["configmaps"] # This rule only applies to resources in the "kube-system" namespace. # The empty string "" can be used to select non-namespaced resources. namespaces: ["kube-system"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: "" # core API group resources: ["secrets", "configmaps"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: "" # core API group - group: "extensions" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata 1 The log types that are collected. The value for this field can be audit for audit logs, application for application logs, infrastructure for infrastructure logs, or a named input that has been defined for your application. 2 The name of your audit policy. 2.5.5.6. Filtering application logs at input by including the label expressions or a matching label key and values You can include the application logs based on the label expressions or a matching label key and its values by using the input selector. Procedure Add a configuration for a filter to the input spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include logs based on label expressions or matched label key/values: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: ["prod", "qa"] 3 - key: zone operator: NotIn values: ["east", "west"] matchLabels: 4 app: one name: app1 type: application # ... 1 Specifies the label key to match. 2 Specifies the operator. Valid values include: In , NotIn , Exists , and DoesNotExist . 3 Specifies an array of string values. If the operator value is either Exists or DoesNotExist , the value array must be empty. 4 Specifies an exact key or value mapping. 
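If only exact label matches are needed, the matchExpressions block can be omitted and matchLabels used on its own. The following minimal sketch (the input name and label values are illustrative) collects application logs only from pods that carry both labels:

apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
# ...
spec:
  serviceAccount:
    name: <service_account_name>
  inputs:
  - name: mylogs
    type: application
    application:
      selector:
        matchLabels:
          app: one
          name: app1
# ...

Exact matches and label expressions can also be combined in a single selector, as shown in the preceding example.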
Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5.5.7. Configuring content filters to prune log records When the prune filter is configured, the log collector evaluates log streams according to the filters before forwarding. The collector prunes log records by removing low value fields such as pod annotations. Procedure Add a configuration for a filter to the prune spec in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to prune log records based on field paths: Important If both are specified, records are pruned based on the notIn array first, which takes precedence over the in array. After records have been pruned by using the notIn array, they are then pruned by using the in array. Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: # ... spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,."@timestamp"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: ["<filter_name>"] # ... 1 Specify the type of filter. The prune filter prunes log records by configured fields. 2 Specify configuration options for applying the prune filter. The in and notIn fields are specified as arrays of dot-delimited field paths, which are paths to fields in log records. These paths can contain alpha-numeric characters and underscores ( a-zA-Z0-9_ ), for example, .kubernetes.namespace_name . If segments contain characters outside of this range, the segment must be in quotes, for example, .kubernetes.labels."foo.bar-bar/baz" . 3 Optional: Any fields that are specified in this array are removed from the log record. 4 Optional: Any fields that are not specified in this array are removed from the log record. 5 Specify the pipeline that the prune filter is applied to. Note The prune filter exempts the .log_type , .log_source , and .message fields. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5.6. Filtering the audit and infrastructure log inputs by source You can define the list of audit and infrastructure sources to collect the logs by using the input selector. Procedure Add a configuration to define the audit and infrastructure sources in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to define audit and infrastructure sources: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn # ... 1 Specifies the list of infrastructure sources to collect. The valid sources include: node : Journal log from the node container : Logs from the workloads deployed in the namespaces 2 Specifies the list of audit sources to collect. The valid sources include: kubeAPI : Logs from the Kubernetes API servers openshiftAPI : Logs from the OpenShift API servers auditd : Logs from a node auditd service ovn : Logs from an open virtual network service Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.5.7.
Filtering application logs at input by including or excluding the namespace or container name You can include or exclude the application logs based on the namespace and container name by using the input selector. Procedure Add a configuration to include or exclude the namespace and container names in the ClusterLogForwarder CR. The following example shows how to configure the ClusterLogForwarder CR to include or exclude namespaces and container names: Example ClusterLogForwarder CR apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder # ... spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: "my-project" 1 container: "my-container" 2 excludes: - container: "other-container*" 3 namespace: "other-namespace" 4 type: application # ... 1 Specifies that the logs are only collected from these namespaces. 2 Specifies that the logs are only collected from these containers. 3 Specifies the pattern of container names to ignore when collecting the logs. 4 Specifies the namespaces to ignore when collecting the logs. Note The excludes field takes precedence over the includes field. Apply the ClusterLogForwarder CR by running the following command: USD oc apply -f <filename>.yaml 2.6. Storing logs with LokiStack You can configure a LokiStack CR to store application, audit, and infrastructure-related logs. Loki is a horizontally scalable, highly available, multi-tenant log aggregation system offered as a GA log store for logging for Red Hat OpenShift that can be visualized with the OpenShift Observability UI. The Loki configuration provided by OpenShift Logging is a short-term log store designed to enable users to perform fast troubleshooting with the collected logs. For that purpose, the logging for Red Hat OpenShift configuration of Loki has short-term storage, and is optimized for very recent queries. Important For long-term storage or queries over a long time period, users should look to log stores external to their cluster. Loki sizing is only tested and supported for short term storage, for a maximum of 30 days. 2.6.1. Loki deployment sizing Sizing for Loki follows the format of 1x.<size> where the value 1x is the number of instances and <size> specifies performance capabilities. The 1x.pico configuration defines a single Loki deployment with minimal resource and limit requirements, offering high availability (HA) support for all Loki components. This configuration is suited for deployments that do not require a single replication factor or auto-compaction. Disk requests are similar across size configurations, allowing customers to test different sizes to determine the best fit for their deployment needs. Important It is not possible to change the number 1x for the deployment size. Table 2.2. Loki sizing 1x.demo 1x.pico [6.1+ only] 1x.extra-small 1x.small 1x.medium Data transfer Demo use only 50GB/day 100GB/day 500GB/day 2TB/day Queries per second (QPS) Demo use only 1-25 QPS at 200ms 1-25 QPS at 200ms 25-50 QPS at 200ms 25-75 QPS at 200ms Replication factor None 2 2 2 2 Total CPU requests None 7 vCPUs 14 vCPUs 34 vCPUs 54 vCPUs Total CPU requests if using the ruler None 8 vCPUs 16 vCPUs 42 vCPUs 70 vCPUs Total memory requests None 17Gi 31Gi 67Gi 139Gi Total memory requests if using the ruler None 18Gi 35Gi 83Gi 171Gi Total disk requests 40Gi 590Gi 430Gi 430Gi 590Gi Total disk requests if using the ruler 80Gi 910Gi 750Gi 750Gi 910Gi 2.6.2. Prerequisites You have installed the Loki Operator by using the CLI or web console.
You have a serviceAccount in the same namespace in which you create the ClusterLogForwarder . The serviceAccount is assigned collect-audit-logs , collect-application-logs , and collect-infrastructure-logs cluster roles. 2.6.3. Core Setup and Configuration Role-based access controls, basic monitoring, and pod placement to deploy Loki. 2.6.4. Authorizing LokiStack rules RBAC permissions Administrators can allow users to create and manage their own alerting and recording rules by binding cluster roles to usernames. Cluster roles are defined as ClusterRole objects that contain necessary role-based access control (RBAC) permissions for users. The following cluster roles for alerting and recording rules are available for LokiStack: Rule name Description alertingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage alerting rules. This cluster role grants permissions to create, read, update, delete, list, and watch AlertingRule resources within the loki.grafana.com/v1 API group. alertingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to AlertingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. alertingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete AlertingRule resources. alertingrules.loki.grafana.com-v1-view Users with this role can read AlertingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. recordingrules.loki.grafana.com-v1-admin Users with this role have administrative-level access to manage recording rules. This cluster role grants permissions to create, read, update, delete, list, and watch RecordingRule resources within the loki.grafana.com/v1 API group. recordingrules.loki.grafana.com-v1-crdview Users with this role can view the definitions of Custom Resource Definitions (CRDs) related to RecordingRule resources within the loki.grafana.com/v1 API group, but do not have permissions for modifying or managing these resources. recordingrules.loki.grafana.com-v1-edit Users with this role have permission to create, update, and delete RecordingRule resources. recordingrules.loki.grafana.com-v1-view Users with this role can read RecordingRule resources within the loki.grafana.com/v1 API group. They can inspect configurations, labels, and annotations for existing alerting rules but cannot make any modifications to them. 2.6.4.1. Examples To apply cluster roles for a user, you must bind an existing cluster role to a specific username. Cluster roles can be cluster or namespace scoped, depending on which type of role binding you use. When a RoleBinding object is used, as when using the oc adm policy add-role-to-user command, the cluster role only applies to the specified namespace. When a ClusterRoleBinding object is used, as when using the oc adm policy add-cluster-role-to-user command, the cluster role applies to all namespaces in the cluster. 
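As an illustration of the difference, the namespace-scoped binding that the oc adm policy add-role-to-user command creates corresponds roughly to a RoleBinding manifest such as the following sketch; the binding name is arbitrary, and the namespace and username are placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: alertingrules-admin
  namespace: <namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: alertingrules.loki.grafana.com-v1-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: <username>

A ClusterRoleBinding uses the same roleRef and subjects structure but has no namespace in its metadata, which is why it grants the role across all namespaces.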
The following example command gives the specified user create, read, update and delete (CRUD) permissions for alerting rules in a specific namespace in the cluster: Example cluster role binding command for alerting rule CRUD permissions in a specific namespace USD oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username> The following command gives the specified user administrator permissions for alerting rules in all namespaces: Example cluster role binding command for administrator permissions USD oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username> 2.6.5. Creating a log-based alerting rule with Loki The AlertingRule CR contains a set of specifications and webhook validation definitions to declare groups of alerting rules for a single LokiStack instance. In addition, the webhook validation definition provides support for rule validation conditions: If an AlertingRule CR includes an invalid interval period, it is an invalid alerting rule If an AlertingRule CR includes an invalid for period, it is an invalid alerting rule. If an AlertingRule CR includes an invalid LogQL expr , it is an invalid alerting rule. If an AlertingRule CR includes two groups with the same name, it is an invalid alerting rule. If none of the above applies, an alerting rule is considered valid. Table 2.3. AlertingRule definitions Tenant type Valid namespaces for AlertingRule CRs application <your_application_namespace> audit openshift-logging infrastructure openshift-/* , kube-/\* , default Procedure Create an AlertingRule custom resource (CR): Example infrastructure AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "infrastructure" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"} |= "error" [1m])) by (job) / sum(rate({kubernetes_namespace_name="openshift-operators-redhat", kubernetes_pod_name=~"loki-operator-controller-manager.*"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 AlertingRule CRs for infrastructure tenants are only supported in the openshift-* , kube-\* , or default namespaces. 4 The value for kubernetes_namespace_name: must match the value for metadata.namespace . 5 The value of this mandatory field must be critical , warning , or info . 6 This field is mandatory. 7 This field is mandatory. 
Example application AlertingRule CR apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: "true" spec: tenantID: "application" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name="app-ns", kubernetes_pod_name=~"podName.*"} |= "error" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6 1 The namespace where this AlertingRule CR is created must have a label matching the LokiStack spec.rules.namespaceSelector definition. 2 The labels block must match the LokiStack spec.rules.selector definition. 3 Value for kubernetes_namespace_name: must match the value for metadata.namespace . 4 The value of this mandatory field must be critical , warning , or info . 5 The value of this mandatory field is a summary of the rule. 6 The value of this mandatory field is a detailed description of the rule. Apply the AlertingRule CR: USD oc apply -f <filename>.yaml 2.6.6. Configuring Loki to tolerate memberlist creation failure In an OpenShift Container Platform cluster, administrators generally use a non-private IP network range. As a result, the LokiStack memberlist configuration fails because, by default, it only uses private IP networks. As an administrator, you can select the pod network for the memberlist configuration. You can modify the LokiStack custom resource (CR) to use the podIP address in the hashRing spec. To configure the LokiStack CR, use the following command: USD oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{"spec": {"hashRing":{"memberlist":{"instanceAddrType":"podIP"},"type":"memberlist"}}}' Example LokiStack to include podIP apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... hashRing: type: memberlist memberlist: instanceAddrType: podIP # ... 2.6.7. Enabling stream-based retention with Loki You can configure retention policies based on log streams. Rules for these may be set globally, per-tenant, or both. If you configure both, tenant rules apply before global rules. Important If there is no retention period defined on the s3 bucket or in the LokiStack custom resource (CR), then the logs are not pruned and they stay in the s3 bucket forever, which might fill up the s3 storage. Note Schema v13 is recommended. Procedure Create a LokiStack CR: Enable stream-based retention globally as shown in the following example: Example global stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~"test.+"}' 3 - days: 1 priority: 1 selector: '{log_type="infrastructure"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy for all log streams. Note: This field does not impact the retention period for stored logs in object storage. 2 Retention is enabled in the cluster when this block is added to the CR. 
3 Contains the LogQL query used to define the log stream.spec: limits: Enable stream-based retention per-tenant basis as shown in the following example: Example per-tenant stream-based retention for AWS apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~"test.+"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~"openshift-cluster.+"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: "2020-10-11" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging 1 Sets retention policy by tenant. Valid tenant types are application , audit , and infrastructure . 2 Contains the LogQL query used to define the log stream. Apply the LokiStack CR: USD oc apply -f <filename>.yaml 2.6.8. Loki pod placement You can control which nodes the Loki pods run on, and prevent other workloads from using those nodes, by using tolerations or node selectors on the pods. You can apply tolerations to the log store pods with the LokiStack custom resource (CR) and apply taints to a node with the node specification. A taint on a node is a key:value pair that instructs the node to repel all pods that do not allow the taint. Using a specific key:value pair that is not on other pods ensures that only the log store pods can run on that node. Example LokiStack with node selectors apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: "" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: "" gateway: nodeSelector: node-role.kubernetes.io/infra: "" indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" ingester: nodeSelector: node-role.kubernetes.io/infra: "" querier: nodeSelector: node-role.kubernetes.io/infra: "" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" ruler: nodeSelector: node-role.kubernetes.io/infra: "" # ... 1 Specifies the component pod type that applies to the node selector. 2 Specifies the pods that are moved to nodes containing the defined label. Example LokiStack CR with node selectors and tolerations apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... 
template: compactor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: "" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved # ... To configure the nodeSelector and tolerations fields of the LokiStack (CR), you can use the oc explain command to view the description and fields for a particular resource: USD oc explain lokistack.spec.template Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec. ... For more detailed information, you can add a specific field: USD oc explain lokistack.spec.template.compactor Example output KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it. ... 2.6.8.1. Enhanced Reliability and Performance Configurations to ensure Loki's reliability and efficiency in production. 2.6.8.2. Enabling authentication to cloud-based log stores using short-lived tokens Workload identity federation enables authentication to cloud-based log stores using short-lived tokens. Procedure Use one of the following options to enable authentication: If you use the OpenShift Container Platform web console to install the Loki Operator, clusters that use short-lived tokens are automatically detected. You are prompted to create roles and supply the data required for the Loki Operator to create a CredentialsRequest object, which populates a secret. If you use the OpenShift CLI ( oc ) to install the Loki Operator, you must manually create a Subscription object using the appropriate template for your storage provider, as shown in the following examples. 
This authentication strategy is only supported for the storage providers indicated. Example Azure sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region> Example AWS sample subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: "stable-6.0" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN> 2.6.8.3. Configuring Loki to tolerate node failure The Loki Operator supports setting pod anti-affinity rules to request that pods of the same component are scheduled on different available nodes in the cluster. Affinity is a property of pods that controls the nodes on which they prefer to be scheduled. Anti-affinity is a property of pods that prevents a pod from being scheduled on a node. In OpenShift Container Platform, pod affinity and pod anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled on based on the key-value labels on other pods. The Operator sets default, preferred podAntiAffinity rules for all Loki components, which includes the compactor , distributor , gateway , indexGateway , ingester , querier , queryFrontend , and ruler components. You can override the preferred podAntiAffinity settings for Loki components by configuring required settings in the requiredDuringSchedulingIgnoredDuringExecution field: Example user settings for the ingester component apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: # ... template: ingester: podAntiAffinity: # ... requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname # ... 1 The stanza to define a required rule. 2 The key-value pair (label) that must be matched to apply the rule. 2.6.8.4. LokiStack behavior during cluster restarts When an OpenShift Container Platform cluster is restarted, LokiStack ingestion and the query path continue to operate within the available CPU and memory resources available for the node. This means that there is no downtime for the LokiStack during OpenShift Container Platform cluster updates. This behavior is achieved by using PodDisruptionBudget resources. The Loki Operator provisions PodDisruptionBudget resources for Loki, which determine the minimum number of pods that must be available per component to ensure normal operations under certain conditions. 2.6.8.5. Advanced Deployment and Scalability Specialized configurations for high availability, scalability, and error handling. 2.6.8.6. Zone aware data replication The Loki Operator offers support for zone-aware data replication through pod topology spread constraints. Enabling this feature enhances reliability and safeguards against log loss in the event of a single zone failure. When configuring the deployment size as 1x.extra-small , 1x.small , or 1x.medium , the replication.factor field is automatically set to 2. 
To ensure proper replication, you need to have at least as many availability zones as the replication factor specifies. While it is possible to have more availability zones than the replication factor, having fewer zones can lead to write failures. Each zone should host an equal number of instances for optimal operation. Example LokiStack CR with zone replication enabled apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4 1 Deprecated field, values entered are overwritten by replication.factor . 2 This value is automatically set when deployment size is selected at setup. 3 The maximum difference in number of pods between any two topology domains. The default is 1, and you cannot specify a value of 0. 4 Defines zones in the form of a topology key that corresponds to a node label. 2.6.8.7. Recovering Loki pods from failed zones In OpenShift Container Platform a zone failure happens when specific availability zone resources become inaccessible. Availability zones are isolated areas within a cloud provider's data center, aimed at enhancing redundancy and fault tolerance. If your OpenShift Container Platform cluster is not configured to handle this, a zone failure can lead to service or data loss. Loki pods are part of a StatefulSet , and they come with Persistent Volume Claims (PVCs) provisioned by a StorageClass object. Each Loki pod and its PVCs reside in the same zone. When a zone failure occurs in a cluster, the StatefulSet controller automatically attempts to recover the affected pods in the failed zone. Warning The following procedure will delete the PVCs in the failed zone, and all data contained therein. To avoid complete data loss the replication factor field of the LokiStack CR should always be set to a value greater than 1 to ensure that Loki is replicating. Prerequisites Verify your LokiStack CR has a replication factor greater than 1. Zone failure detected by the control plane, and nodes in the failed zone are marked by cloud provider integration. The StatefulSet controller automatically attempts to reschedule pods in a failed zone. Because the associated PVCs are also in the failed zone, automatic rescheduling to a different zone does not work. You must manually delete the PVCs in the failed zone to allow successful re-creation of the stateful Loki Pod and its provisioned PVC in the new zone. Procedure List the pods in Pending status by running the following command: USD oc get pods --field-selector status.phase==Pending -n openshift-logging Example oc get pods output NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m 1 These pods are in Pending status because their corresponding PVCs are in the failed zone. 
List the PVCs in Pending status by running the following command: USD oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == "Pending") | .metadata.name' -r Example oc get pvc output storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1 Delete the PVC(s) for a pod by running the following command: USD oc delete pvc <pvc_name> -n openshift-logging Delete the pod(s) by running the following command: USD oc delete pod <pod_name> -n openshift-logging Once these objects have been successfully deleted, they should automatically be rescheduled in an available zone. 2.6.8.7.1. Troubleshooting PVC in a terminating state The PVCs might hang in the terminating state without being deleted, if PVC metadata finalizers are set to kubernetes.io/pv-protection . Removing the finalizers should allow the PVCs to delete successfully. Remove the finalizer for each PVC by running the command below, then retry deletion. USD oc patch pvc <pvc_name> -p '{"metadata":{"finalizers":null}}' -n openshift-logging 2.6.8.8. Troubleshooting Loki rate limit errors If the Log Forwarder API forwards a large block of messages that exceeds the rate limit to Loki, Loki generates rate limit ( 429 ) errors. These errors can occur during normal operation. For example, when adding the logging to a cluster that already has some logs, rate limit errors might occur while the logging tries to ingest all of the existing log entries. In this case, if the rate of addition of new logs is less than the total rate limit, the historical data is eventually ingested, and the rate limit errors are resolved without requiring user intervention. In cases where the rate limit errors continue to occur, you can fix the issue by modifying the LokiStack custom resource (CR). Important The LokiStack CR is not available on Grafana-hosted Loki. This topic does not apply to Grafana-hosted Loki servers. Conditions The Log Forwarder API is configured to forward logs to Loki. Your system sends a block of messages that is larger than 2 MB to Loki. For example: "values":[["1630410392689800468","{\"kind\":\"Event\",\"apiVersion\":\ ....... ...... ...... ...... \"received_at\":\"2021-08-31T11:46:32.800278+00:00\",\"version\":\"1.7.4 1.6.0\"}},\"@timestamp\":\"2021-08-31T11:46:32.799692+00:00\",\"viaq_index_name\":\"audit-write\",\"viaq_msg_id\":\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\",\"log_type\":\"audit\"}"]]}]} After you enter oc logs -n openshift-logging -l component=collector , the collector logs in your cluster show a line containing one of the following error messages: 429 Too Many Requests Ingestion rate limit exceeded Example Vector error message 2023-08-25T16:08:49.301780Z WARN sink{component_kind="sink" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true The error is also visible on the receiving end. 
For example, in the LokiStack ingester pod: Example Loki ingester error message level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err="rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream Procedure Update the ingestionBurstSize and ingestionRate fields in the LokiStack CR: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2 # ... 1 The ingestionBurstSize field defines the maximum local rate-limited sample size per distributor replica in MB. This value is a hard limit. Set this value to at least the maximum logs size expected in a single push request. Single requests that are larger than the ingestionBurstSize value are not permitted. 2 The ingestionRate field is a soft limit on the maximum amount of ingested samples per second in MB. Rate limit errors occur if the rate of logs exceeds the limit, but the collector retries sending the logs. As long as the total average is lower than the limit, the system recovers and errors are resolved without user intervention. 2.7. OTLP data ingestion in Loki You can use an API endpoint by using the OpenTelemetry Protocol (OTLP) with Logging 6.1. As OTLP is a standardized format not specifically designed for Loki, OTLP requires an additional Loki configuration to map data format of OpenTelemetry to data model of Loki. OTLP lacks concepts such as stream labels or structured metadata . Instead, OTLP provides metadata about log entries as attributes , grouped into the following three categories: Resource Scope Log You can set metadata for multiple entries simultaneously or individually as needed. 2.7.1. Configuring LokiStack for OTLP data ingestion Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . To configure a LokiStack custom resource (CR) for OTLP ingestion, follow these steps: Prerequisites Ensure that your Loki setup supports structured metadata, introduced in schema version 13 to enable OTLP log ingestion. Procedure Set the schema version: When creating a new LokiStack CR, set version: v13 in the storage schema configuration. Note For existing configurations, add a new schema entry with version: v13 and an effectiveDate in the future. For more information on updating schema versions, see Upgrading Schemas (Grafana documentation). Configure the storage schema as follows: Example configure storage schema # ... spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25 Once the effectiveDate has passed, the v13 schema takes effect, enabling your LokiStack to store structured metadata. 2.7.2. Attribute mapping When you set the Loki Operator to the openshift-logging mode, Loki Operator automatically applies a default set of attribute mappings. 
These mappings align specific OTLP attributes with stream labels and structured metadata of Loki. For typical setups, these default mappings are sufficient. However, you might need to customize attribute mapping in the following cases: Using a custom collector: If your setup includes a custom collector that generates additional attributes, consider customizing the mapping to ensure these attributes are retained in Loki. Adjusting attribute detail levels: If the default attribute set is more detailed than necessary, you can reduce it to essential attributes only. This can avoid excessive data storage and streamline the logging process. Important Attributes that are not mapped to either stream labels or structured metadata are not stored in Loki. 2.7.2.1. Custom attribute mapping for OpenShift When using the Loki Operator in openshift-logging mode, attribute mapping follow OpenShift default values, but you can configure custom mappings to adjust default values. In the openshift-logging mode, you can configure custom attribute mappings globally for all tenants or for individual tenants as needed. When you define custom mappings, they are appended to the OpenShift default values. If you do not need default labels, you can disable them in the tenant configuration. Note A major difference between the Loki Operator and Loki lies in inheritance handling. Loki copies only default_resource_attributes_as_index_labels to tenants by default, while the Loki Operator applies the entire global configuration to each tenant in the openshift-logging mode. Within LokiStack , attribute mapping configuration is managed through the limits setting. See the following example LokiStack configuration: # ... spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2 1 Defines global OTLP attribute configuration. 2 OTLP attribute configuration for the application tenant within openshift-logging mode. Note Both global and per-tenant OTLP configurations can map attributes to stream labels or structured metadata. At least one stream label is required to save a log entry to Loki storage, so ensure this configuration meets that requirement. Stream labels derive only from resource-level attributes, which the LokiStack resource structure reflects: spec: limits: global: otlp: streamLabels: resourceAttributes: - name: "k8s.namespace.name" - name: "k8s.pod.name" - name: "k8s.container.name" Structured metadata, in contrast, can be generated from resource, scope or log-level attributes: # ... spec: limits: global: otlp: streamLabels: # ... structuredMetadata: resourceAttributes: - name: "process.command_line" - name: "k8s\\.pod\\.labels\\..+" regex: true scopeAttributes: - name: "service.name" logAttributes: - name: "http.route" Tip Use regular expressions by setting regex: true for attributes names when mapping similar attributes in Loki. Important Avoid using regular expressions for stream labels, as this can increase data volume. 2.7.2.2. Customizing OpenShift defaults In openshift-logging mode, certain attributes are required and cannot be removed from the configuration due to their role in OpenShift functions. Other attributes, labeled recommended , might be disabled if performance is impacted. When using the openshift-logging mode without custom attributes, you can achieve immediate compatibility with OpenShift tools. If additional attributes are needed as stream labels or structured metadata, use custom configuration. Custom configurations can merge with default configurations. 2.7.2.3. 
Removing recommended attributes To reduce default attributes in openshift-logging mode, disable recommended attributes: # ... spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1 1 Set disableRecommendedAttributes: true to remove recommended attributes, which limits default attributes to the required attributes . Note This option is beneficial if the default attributes causes performance or storage issues. This setting might negatively impact query performance, as it removes default stream labels. You should pair this option with a custom attribute configuration to retain attributes essential for queries. 2.7.3. Additional resources Loki labels Structured metadata OpenTelemetry attribute 2.8. OpenTelemetry data model This document outlines the protocol and semantic conventions for Red Hat OpenShift Logging's OpenTelemetry support with Logging 6.1. Important The OpenTelemetry Protocol (OTLP) output log forwarder is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 2.8.1. Forwarding and ingestion protocol Red Hat OpenShift Logging collects and forwards logs to OpenTelemetry endpoints using OTLP Specification . OTLP encodes, transports, and delivers telemetry data. You can also deploy Loki storage, which provides an OTLP endpont to ingest log streams. This document defines the semantic conventions for the logs collected from various OpenShift cluster sources. 2.8.2. Semantic conventions The log collector in this solution gathers the following log streams: Container logs Cluster node journal logs Cluster node auditd logs Kubernetes and OpenShift API server logs OpenShift Virtual Network (OVN) logs You can forward these streams according to the semantic conventions defined by OpenTelemetry semantic attributes. The semantic conventions in OpenTelemetry define a resource as an immutable representation of the entity producing telemetry, identified by attributes. For example, a process running in a container includes attributes such as container_name , cluster_id , pod_name , namespace , and possibly deployment or app_name . These attributes are grouped under the resource object, which helps reduce repetition and optimizes log transmission as telemetry data. In addition to resource attributes, logs might also contain scope attributes specific to instrumentation libraries and log attributes specific to each log entry. These attributes provide greater detail about each log entry and enhance filtering capabilities when querying logs in storage. The following sections define the attributes that are generally forwarded. 2.8.2.1. Log entry structure All log streams include the following log data fields: The Applicable Sources column indicates which log sources each field applies to: all : This field is present in all logs. container : This field is present in Kubernetes container logs, both application and infrastructure. audit : This field is present in Kubernetes, OpenShift API, and OVN logs. auditd : This field is present in node auditd logs. journal : This field is present in node journal logs. 
Name Applicable Sources Comment body all observedTimeUnixNano all timeUnixNano all severityText container, journal attributes all (Optional) Present when forwarding stream specific attributes 2.8.2.2. Attributes Log entries include a set of resource, scope, and log attributes based on their source, as described in the following table. The Location column specifies the type of attribute: resource : Indicates a resource attribute scope : Indicates a scope attribute log : Indicates a log attribute The Storage column indicates whether the attribute is stored in a LokiStack using the default openshift-logging mode and specifies where the attribute is stored: stream label : Enables efficient filtering and querying based on specific labels. Can be labeled as required if the Loki Operator enforces this attribute in the configuration. structured metadata : Allows for detailed filtering and storage of key-value pairs. Enables users to use direct labels for streamlined queries without requiring JSON parsing. With OTLP, users can filter queries directly by labels rather than using JSON parsing, improving the speed and efficiency of queries. Name Location Applicable Sources Storage (LokiStack) Comment log_source resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.source log_type resource all required stream label (DEPRECATED) Compatibility attribute, contains same information as openshift.log.type kubernetes.container_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.container.name kubernetes.host resource all stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.node.name kubernetes.namespace_name resource container required stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.namespace.name kubernetes.pod_name resource container stream label (DEPRECATED) Compatibility attribute, contains same information as k8s.pod.name openshift.cluster_id resource all (DEPRECATED) Compatibility attribute, contains same information as openshift.cluster.uid level log container, journal (DEPRECATED) Compatibility attribute, contains same information as severityText openshift.cluster.uid resource all required stream label openshift.log.source resource all required stream label openshift.log.type resource all required stream label openshift.labels.* resource all structured metadata k8s.node.name resource all stream label k8s.namespace.name resource container required stream label k8s.container.name resource container stream label k8s.pod.labels.* resource container structured metadata k8s.pod.name resource container stream label k8s.pod.uid resource container structured metadata k8s.cronjob.name resource container stream label Conditionally forwarded based on creator of pod k8s.daemonset.name resource container stream label Conditionally forwarded based on creator of pod k8s.deployment.name resource container stream label Conditionally forwarded based on creator of pod k8s.job.name resource container stream label Conditionally forwarded based on creator of pod k8s.replicaset.name resource container structured metadata Conditionally forwarded based on creator of pod k8s.statefulset.name resource container stream label Conditionally forwarded based on creator of pod log.iostream log container structured metadata k8s.audit.event.level log audit structured metadata k8s.audit.event.stage log audit structured metadata k8s.audit.event.user_agent 
log audit structured metadata k8s.audit.event.request.uri log audit structured metadata k8s.audit.event.response.code log audit structured metadata k8s.audit.event.annotation.* log audit structured metadata k8s.audit.event.object_ref.resource log audit structured metadata k8s.audit.event.object_ref.name log audit structured metadata k8s.audit.event.object_ref.namespace log audit structured metadata k8s.audit.event.object_ref.api_group log audit structured metadata k8s.audit.event.object_ref.api_version log audit structured metadata k8s.user.username log audit structured metadata k8s.user.groups log audit structured metadata process.executable.name resource journal structured metadata process.executable.path resource journal structured metadata process.command_line resource journal structured metadata process.pid resource journal structured metadata service.name resource journal stream label systemd.t.* log journal structured metadata systemd.u.* log journal structured metadata Note Attributes marked as Compatibility attribute support minimal backward compatibility with the ViaQ data model. These attributes are deprecated and function as a compatibility layer to ensure continued UI functionality. These attributes will remain supported until the Logging UI fully supports the OpenTelemetry counterparts in future releases. Loki changes the attribute names when persisting them to storage. The names will be lowercased, and all characters in the set: ( . , / , - ) will be replaced by underscores ( _ ). For example, k8s.namespace.name will become k8s_namespace_name . 2.8.3. Additional resources Semantic Conventions Logs Data Model General Logs Attributes 2.9. Visualization for logging Visualization for logging is provided by deploying the Logging UI Plugin of the Cluster Observability Operator , which requires Operator installation. Important Until the approaching General Availability (GA) release of the Cluster Observability Operator (COO), which is currently in Technology Preview (TP), Red Hat provides support to customers who are using Logging 6.0 or later with the COO for its Logging UI Plugin on OpenShift Container Platform 4.14 or later. This support exception is temporary as the COO includes several independent features, some of which are still TP features, but the Logging UI Plugin is ready for GA.
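Because of the attribute name mapping described above (for example, k8s.namespace.name becomes k8s_namespace_name ), LogQL queries against the LokiStack must use the underscore form of the attribute names. The following query is a minimal illustrative sketch rather than product documentation: the namespace value is a placeholder, stream labels appear inside the selector braces, and structured metadata attributes (such as log_iostream ) are filtered after the pipe. Exact attribute availability depends on your tenant and attribute configuration.
{ k8s_namespace_name="my-project", openshift_log_type="application" } | log_iostream="stderr"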
|
[
"Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.",
"oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')",
"tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging spec: serviceAccount: name: collector outputs: - name: default-lokistack type: lokiStack lokiStack: authentication: token: from: serviceAccount target: name: logging-loki namespace: openshift-logging tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: default-logstore inputRefs: - application - infrastructure outputRefs: - default-lokistack",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: managementState: Managed size: 1x.extra-small storage: schemas: - effectiveDate: '2024-10-01' version: v13 secret: name: logging-loki-s3 type: s3 storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc create sa collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z collector",
"oc project openshift-logging",
"oc adm policy add-cluster-role-to-user collect-application-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-audit-logs -z collector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z collector",
"apiVersion: observability.openshift.io/v1alpha1 kind: UIPlugin metadata: name: logging spec: type: Logging logging: lokiStack: name: logging-loki",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector namespace: openshift-logging annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 spec: serviceAccount: name: collector outputs: - name: loki-otlp type: lokiStack 2 lokiStack: target: name: logging-loki namespace: openshift-logging dataModel: Otel 3 authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: my-pipeline inputRefs: - application - infrastructure outputRefs: - loki-otlp",
"apiVersion: v1 kind: Namespace metadata: name: openshift-operators-redhat 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: loki-operator source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"oc apply -f <filename>.yaml",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging stringData: 2 access_key_id: <access_key_id> access_key_secret: <access_secret> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" 4 secret: name: logging-loki-s3 5 type: s3 6 storageClassName: <storage_class_name> 7 tenants: mode: openshift-logging 8",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: cluster-logging namespace: openshift-logging 1 spec: upgradeStrategy: Default",
"oc apply -f <filename>.yaml",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: cluster-logging namespace: openshift-logging 1 spec: channel: stable-6.<y> 2 installPlanApproval: Automatic 3 name: cluster-logging source: redhat-operators 4 sourceNamespace: openshift-marketplace",
"oc apply -f <filename>.yaml",
"oc create sa logging-collector -n openshift-logging",
"oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-application-logs -z logging-collector -n openshift-logging oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logging-collector -n openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc apply -f <filename>.yaml",
"oc get pods -n openshift-logging",
"oc get pods -n openshift-logging NAME READY STATUS RESTARTS AGE cluster-logging-operator-fb7f7cf69-8jsbq 1/1 Running 0 98m instance-222js 2/2 Running 0 18m instance-g9ddv 2/2 Running 0 18m instance-hfqq8 2/2 Running 0 18m instance-sphwg 2/2 Running 0 18m instance-vv7zn 2/2 Running 0 18m instance-wk5zz 2/2 Running 0 18m logging-loki-compactor-0 1/1 Running 0 42m logging-loki-distributor-7d7688bcb9-dvcj8 1/1 Running 0 42m logging-loki-gateway-5f6c75f879-bl7k9 2/2 Running 0 42m logging-loki-gateway-5f6c75f879-xhq98 2/2 Running 0 42m logging-loki-index-gateway-0 1/1 Running 0 42m logging-loki-ingester-0 1/1 Running 0 42m logging-loki-querier-6b7b56bccc-2v9q4 1/1 Running 0 42m logging-loki-query-frontend-84fb57c578-gq2f7 1/1 Running 0 42m",
"apiVersion: v1 kind: Namespace metadata: name: openshift-logging 1 labels: openshift.io/cluster-monitoring: \"true\" 2",
"apiVersion: v1 kind: Secret metadata: name: logging-loki-s3 1 namespace: openshift-logging 2 stringData: 3 access_key_id: <access_key_id> access_key_secret: <access_key> bucketnames: s3-bucket-name endpoint: https://s3.eu-central-1.amazonaws.com region: eu-central-1",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki 1 namespace: openshift-logging 2 spec: size: 1x.small 3 storage: schemas: - version: v13 effectiveDate: \"<yyyy>-<mm>-<dd>\" secret: name: logging-loki-s3 4 type: s3 5 storageClassName: <storage_class_name> 6 tenants: mode: openshift-logging 7",
"apiVersion: v1 kind: ServiceAccount metadata: name: logging-collector 1 namespace: openshift-logging 2",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:write-logs roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: logging-collector-logs-writer 1 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-application roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-application-logs 2 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: logging-collector:collect-infrastructure roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: collect-infrastructure-logs 3 subjects: - kind: ServiceAccount name: logging-collector namespace: openshift-logging",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: instance namespace: openshift-logging 1 spec: serviceAccount: name: logging-collector 2 outputs: - name: lokistack-out type: lokiStack 3 lokiStack: target: 4 name: logging-loki namespace: openshift-logging authentication: token: from: serviceAccount tls: ca: key: service-ca.crt configMapName: openshift-service-ca.crt pipelines: - name: infra-app-logs inputRefs: 5 - application - infrastructure outputRefs: - lokistack-out",
"oc adm policy add-cluster-role-to-user collect-application-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-infrastructure-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user collect-audit-logs system:serviceaccount:openshift-logging:logcollector",
"oc adm policy add-cluster-role-to-user <cluster_role_name> system:serviceaccount:<namespace_name>:<service_account_name>",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: manager-rolebinding roleRef: 1 apiGroup: rbac.authorization.k8s.io 2 kind: ClusterRole 3 name: cluster-logging-operator 4 subjects: 5 - kind: ServiceAccount 6 name: cluster-logging-operator 7 namespace: openshift-logging 8",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-application-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - application 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-audit-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - audit 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: cluster-logging-write-infrastructure-logs rules: 1 - apiGroups: 2 - loki.grafana.com 3 resources: 4 - infrastructure 5 resourceNames: 6 - logs 7 verbs: 8 - create 9",
"apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: clusterlogforwarder-editor-role rules: 1 - apiGroups: 2 - observability.openshift.io 3 resources: 4 - clusterlogforwarders 5 verbs: 6 - create 7 - delete 8 - get 9 - list 10 - patch 11 - update 12 - watch 13",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: collector annotations: observability.openshift.io/log-level: debug",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: annotations: observability.openshift.io/tech-preview-otlp-output: \"enabled\" 1 name: clf-otlp spec: serviceAccount: name: <service_account_name> outputs: - name: otlp type: otlp otlp: tuning: compression: gzip deliveryMode: AtLeastOnce maxRetryDuration: 20 maxWrite: 10M minRetryDuration: 5 url: <otlp_url> 2 pipelines: - inputRefs: - application - infrastructure - audit name: otlp-logs outputRefs: - otlp",
"java.lang.NullPointerException: Cannot invoke \"String.toString()\" because \"<param1>\" is null at testjava.Main.handle(Main.java:47) at testjava.Main.printMe(Main.java:19) at testjava.Main.main(Main.java:10)",
"apiVersion: \"observability.openshift.io/v1\" kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> filters: - name: <name> type: detectMultilineException pipelines: - inputRefs: - <input-name> name: <pipeline-name> filterRefs: - <filter-name> outputRefs: - <output-name>",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: drop 1 drop: 2 - test: 3 - field: .kubernetes.labels.\"foo-bar/baz\" 4 matches: .+ 5 - field: .kubernetes.pod_name notMatches: \"my-pod\" 6 pipelines: - name: <pipeline_name> 7 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .message notMatches: \"(?i)critical|error\" - field: .level matches: \"info|warning\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: important type: drop drop: - test: - field: .kubernetes.namespace_name matches: \"^open\" - test: - field: .log_type matches: \"application\" - field: .kubernetes.pod_name notMatches: \"my-pod\"",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: name: <log_forwarder_name> namespace: <log_forwarder_namespace> spec: serviceAccount: name: <service_account_name> pipelines: - name: my-pipeline inputRefs: audit 1 filterRefs: my-policy 2 filters: - name: my-policy type: kubeAPIAudit kubeAPIAudit: # Don't generate audit events for all requests in RequestReceived stage. omitStages: - \"RequestReceived\" rules: # Log pod changes at RequestResponse level - level: RequestResponse resources: - group: \"\" resources: [\"pods\"] # Log \"pods/log\", \"pods/status\" at Metadata level - level: Metadata resources: - group: \"\" resources: [\"pods/log\", \"pods/status\"] # Don't log requests to a configmap called \"controller-leader\" - level: None resources: - group: \"\" resources: [\"configmaps\"] resourceNames: [\"controller-leader\"] # Don't log watch requests by the \"system:kube-proxy\" on endpoints or services - level: None users: [\"system:kube-proxy\"] verbs: [\"watch\"] resources: - group: \"\" # core API group resources: [\"endpoints\", \"services\"] # Don't log authenticated requests to certain non-resource URL paths. - level: None userGroups: [\"system:authenticated\"] nonResourceURLs: - \"/api*\" # Wildcard matching. - \"/version\" # Log the request body of configmap changes in kube-system. - level: Request resources: - group: \"\" # core API group resources: [\"configmaps\"] # This rule only applies to resources in the \"kube-system\" namespace. # The empty string \"\" can be used to select non-namespaced resources. namespaces: [\"kube-system\"] # Log configmap and secret changes in all other namespaces at the Metadata level. - level: Metadata resources: - group: \"\" # core API group resources: [\"secrets\", \"configmaps\"] # Log all other resources in core and extensions at the Request level. - level: Request resources: - group: \"\" # core API group - group: \"extensions\" # Version of group should NOT be included. # A catch-all rule to log all other requests at the Metadata level. - level: Metadata",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: selector: matchExpressions: - key: env 1 operator: In 2 values: [\"prod\", \"qa\"] 3 - key: zone operator: NotIn values: [\"east\", \"west\"] matchLabels: 4 app: one name: app1 type: application",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder metadata: spec: serviceAccount: name: <service_account_name> filters: - name: <filter_name> type: prune 1 prune: 2 in: [.kubernetes.annotations, .kubernetes.namespace_id] 3 notIn: [.kubernetes,.log_type,.message,.\"@timestamp\"] 4 pipelines: - name: <pipeline_name> 5 filterRefs: [\"<filter_name>\"]",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs1 type: infrastructure infrastructure: sources: 1 - node - name: mylogs2 type: audit audit: sources: 2 - kubeAPI - openshiftAPI - ovn",
"oc apply -f <filename>.yaml",
"apiVersion: observability.openshift.io/v1 kind: ClusterLogForwarder spec: serviceAccount: name: <service_account_name> inputs: - name: mylogs application: includes: - namespace: \"my-project\" 1 container: \"my-container\" 2 excludes: - container: \"other-container*\" 3 namespace: \"other-namespace\" 4 type: application",
"oc apply -f <filename>.yaml",
"oc adm policy add-role-to-user alertingrules.loki.grafana.com-v1-admin -n <namespace> <username>",
"oc adm policy add-cluster-role-to-user alertingrules.loki.grafana.com-v1-admin <username>",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: loki-operator-alerts namespace: openshift-operators-redhat 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"infrastructure\" 3 groups: - name: LokiOperatorHighReconciliationError rules: - alert: HighPercentageError expr: | 4 sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"} |= \"error\" [1m])) by (job) / sum(rate({kubernetes_namespace_name=\"openshift-operators-redhat\", kubernetes_pod_name=~\"loki-operator-controller-manager.*\"}[1m])) by (job) > 0.01 for: 10s labels: severity: critical 5 annotations: summary: High Loki Operator Reconciliation Errors 6 description: High Loki Operator Reconciliation Errors 7",
"apiVersion: loki.grafana.com/v1 kind: AlertingRule metadata: name: app-user-workload namespace: app-ns 1 labels: 2 openshift.io/<label_name>: \"true\" spec: tenantID: \"application\" groups: - name: AppUserWorkloadHighError rules: - alert: expr: | 3 sum(rate({kubernetes_namespace_name=\"app-ns\", kubernetes_pod_name=~\"podName.*\"} |= \"error\" [1m])) by (job) for: 10s labels: severity: critical 4 annotations: summary: 5 description: 6",
"oc apply -f <filename>.yaml",
"oc patch LokiStack logging-loki -n openshift-logging --type=merge -p '{\"spec\": {\"hashRing\":{\"memberlist\":{\"instanceAddrType\":\"podIP\"},\"type\":\"memberlist\"}}}'",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: hashRing: type: memberlist memberlist: instanceAddrType: podIP",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: 1 retention: 2 days: 20 streams: - days: 4 priority: 1 selector: '{kubernetes_namespace_name=~\"test.+\"}' 3 - days: 1 priority: 1 selector: '{log_type=\"infrastructure\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: retention: days: 20 tenants: 1 application: retention: days: 1 streams: - days: 4 selector: '{kubernetes_namespace_name=~\"test.+\"}' 2 infrastructure: retention: days: 5 streams: - days: 1 selector: '{kubernetes_namespace_name=~\"openshift-cluster.+\"}' managementState: Managed replicationFactor: 1 size: 1x.small storage: schemas: - effectiveDate: \"2020-10-11\" version: v13 secret: name: logging-loki-s3 type: aws storageClassName: gp3-csi tenants: mode: openshift-logging",
"oc apply -f <filename>.yaml",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2 distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" querier: nodeSelector: node-role.kubernetes.io/infra: \"\" queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" ruler: nodeSelector: node-role.kubernetes.io/infra: \"\"",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: compactor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved distributor: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved indexGateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ingester: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved querier: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved queryFrontend: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved ruler: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved gateway: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: - effect: NoSchedule key: node-role.kubernetes.io/infra value: reserved - effect: NoExecute key: node-role.kubernetes.io/infra value: reserved",
"oc explain lokistack.spec.template",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: template <Object> DESCRIPTION: Template defines the resource/limits/tolerations/nodeselectors per component FIELDS: compactor <Object> Compactor defines the compaction component spec. distributor <Object> Distributor defines the distributor component spec.",
"oc explain lokistack.spec.template.compactor",
"KIND: LokiStack VERSION: loki.grafana.com/v1 RESOURCE: compactor <Object> DESCRIPTION: Compactor defines the compaction component spec. FIELDS: nodeSelector <map[string]string> NodeSelector defines the labels required by a node to schedule the component onto it.",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: CLIENTID value: <your_client_id> - name: TENANTID value: <your_tenant_id> - name: SUBSCRIPTIONID value: <your_subscription_id> - name: REGION value: <your_region>",
"apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: loki-operator namespace: openshift-operators-redhat spec: channel: \"stable-6.0\" installPlanApproval: Manual name: loki-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: ROLEARN value: <role_ARN>",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: template: ingester: podAntiAffinity: # requiredDuringSchedulingIgnoredDuringExecution: 1 - labelSelector: matchLabels: 2 app.kubernetes.io/component: ingester topologyKey: kubernetes.io/hostname",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: replicationFactor: 2 1 replication: factor: 2 2 zones: - maxSkew: 1 3 topologyKey: topology.kubernetes.io/zone 4",
"oc get pods --field-selector status.phase==Pending -n openshift-logging",
"NAME READY STATUS RESTARTS AGE 1 logging-loki-index-gateway-1 0/1 Pending 0 17m logging-loki-ingester-1 0/1 Pending 0 16m logging-loki-ruler-1 0/1 Pending 0 16m",
"oc get pvc -o=json -n openshift-logging | jq '.items[] | select(.status.phase == \"Pending\") | .metadata.name' -r",
"storage-logging-loki-index-gateway-1 storage-logging-loki-ingester-1 wal-logging-loki-ingester-1 storage-logging-loki-ruler-1 wal-logging-loki-ruler-1",
"oc delete pvc <pvc_name> -n openshift-logging",
"oc delete pod <pod_name> -n openshift-logging",
"oc patch pvc <pvc_name> -p '{\"metadata\":{\"finalizers\":null}}' -n openshift-logging",
"\"values\":[[\"1630410392689800468\",\"{\\\"kind\\\":\\\"Event\\\",\\\"apiVersion\\\": .... ... ... ... \\\"received_at\\\":\\\"2021-08-31T11:46:32.800278+00:00\\\",\\\"version\\\":\\\"1.7.4 1.6.0\\\"}},\\\"@timestamp\\\":\\\"2021-08-31T11:46:32.799692+00:00\\\",\\\"viaq_index_name\\\":\\\"audit-write\\\",\\\"viaq_msg_id\\\":\\\"MzFjYjJkZjItNjY0MC00YWU4LWIwMTEtNGNmM2E5ZmViMGU4\\\",\\\"log_type\\\":\\\"audit\\\"}\"]]}]}",
"429 Too Many Requests Ingestion rate limit exceeded",
"2023-08-25T16:08:49.301780Z WARN sink{component_kind=\"sink\" component_id=default_loki_infra component_type=loki component_name=default_loki_infra}: vector::sinks::util::retries: Retrying after error. error=Server responded with an error: 429 Too Many Requests internal_log_rate_limit=true",
"level=warn ts=2023-08-30T14:57:34.155592243Z caller=grpc_logging.go:43 duration=1.434942ms method=/logproto.Pusher/Push err=\"rpc error: code = Code(429) desc = entry with timestamp 2023-08-30 14:57:32.012778399 +0000 UTC ignored, reason: 'Per stream rate limit exceeded (limit: 3MB/sec) while attempting to ingest for stream",
"apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: logging-loki namespace: openshift-logging spec: limits: global: ingestion: ingestionBurstSize: 16 1 ingestionRate: 8 2",
"spec: storage: schemas: - version: v13 effectiveDate: 2024-10-25",
"spec: limits: global: otlp: {} 1 tenants: application: otlp: {} 2",
"spec: limits: global: otlp: streamLabels: resourceAttributes: - name: \"k8s.namespace.name\" - name: \"k8s.pod.name\" - name: \"k8s.container.name\"",
"spec: limits: global: otlp: streamLabels: structuredMetadata: resourceAttributes: - name: \"process.command_line\" - name: \"k8s\\\\.pod\\\\.labels\\\\..+\" regex: true scopeAttributes: - name: \"service.name\" logAttributes: - name: \"http.route\"",
"spec: tenants: mode: openshift-logging openshift: otlp: disableRecommendedAttributes: true 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/logging/logging-6-1
|
5.4.16. RAID Logical Volumes
|
5.4.16. RAID Logical Volumes As of the Red Hat Enterprise Linux 6.3 release, LVM supports RAID4/5/6 and a new implementation of mirroring. The latest implementation of mirroring differs from the implementation of mirroring (documented in Section 5.4.3, "Creating Mirrored Volumes" ) in the following ways: The segment type for the new implementation of mirroring is raid1 . For the earlier implementation, the segment type is mirror . The new implementation of mirroring leverages MD software RAID, just as for the RAID 4/5/6 implementations. The new implementation of mirroring maintains a fully redundant bitmap area for each mirror image, which increases its fault handling capabilities. This means that there is no --mirrorlog option or --corelog option for mirrors created with this segment type. The new implementation of mirroring can handle transient failures. Mirror images can be temporarily split from the array and merged back into the array later. The new implementation of mirroring supports snapshots (as do the higher-level RAID implementations). The new RAID implementations are not cluster-aware. You cannot create an LVM RAID logical volume in a clustered volume group. For information on how failures are handled by the RAID logical volumes, see Section 5.4.16.8, "Setting a RAID fault policy" . The remainder of this section describes the following administrative tasks you can perform on LVM RAID devices: Section 5.4.16.1, "Creating a RAID Logical Volume" Section 5.4.16.2, "Converting a Linear Device to a RAID Device" Section 5.4.16.3, "Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume" Section 5.4.16.4, "Converting a Mirrored LVM Device to a RAID1 Device" Section 5.4.16.5, "Changing the Number of Images in an Existing RAID1 Device" Section 5.4.16.6, "Splitting off a RAID Image as a Separate Logical Volume" Section 5.4.16.7, "Splitting and Merging a RAID Image" Section 5.4.16.8, "Setting a RAID fault policy" Section 5.4.16.9, "Replacing a RAID device" Section 5.4.16.10, "Scrubbing a RAID Logical Volume" Section 5.4.16.11, "Controlling I/O Operations on a RAID1 Logical Volume" 5.4.16.1. Creating a RAID Logical Volume To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Usually when you create a logical volume with the lvcreate command, the --type argument is implicit. For example, when you specify the -i stripes argument, the lvcreate command assumes the --type stripe option. When you specify the -m mirrors argument, the lvcreate command assumes the --type mirror option. When you create a RAID logical volume, however, you must explicitly specify the segment type you desire. The possible RAID segment types are described in Table 5.1, "RAID Segment Types" . Table 5.1. RAID Segment Types Segment type Description raid1 RAID1 mirroring raid4 RAID4 dedicated parity disk raid5 Same as raid5_ls raid5_la RAID5 left asymmetric. Rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric. Rotating parity N with data continuation raid5_ls RAID5 left symmetric. Rotating parity 0 with data restart raid5_rs RAID5 right symmetric. 
Rotating parity N with data restart raid6 Same as raid6_zr raid6_zr RAID6 zero restart Rotating parity zero (left-to-right) with data restart raid6_nr RAID6 N restart Rotating parity N (left-to-right) with data restart raid6_nc RAID6 N continue Rotating parity N (left-to-right) with data continuation raid10 (Red Hat Enterprise Linux 6.4 and later) Striped mirrors Striping of mirror sets For most users, specifying one of the five available primary types ( raid1 , raid4 , raid5 , raid6 , raid10 ) should be sufficient. For more information on the different algorithms used by RAID 5/6, see chapter four of the Common RAID Disk Data Format Specification at http://www.snia.org/sites/default/files/SNIA_DDF_Technical_Position_v2.0.pdf . When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array. For example, creating a 2-way RAID1 array results in two metadata subvolumes ( lv_rmeta_0 and lv_rmeta_1 ) and two data subvolumes ( lv_rimage_0 and lv_rimage_1 ). Similarly, creating a 3-way stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes ( lv_rmeta_0 , lv_rmeta_1 , lv_rmeta_2 , and lv_rmeta_3 ) and 4 data subvolumes ( lv_rimage_0 , lv_rimage_1 , lv_rimage_2 , and lv_rimage_3 ). The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is 1G in size. You can create RAID1 arrays with different numbers of copies according to the value you specify for the -m argument. Although the -m argument is the same argument used to specify the number of copies for the mirror implementation, in this case you override the default segment type mirror by explicitly setting the segment type as raid1 . Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the familiar -i argument , overriding the default segment type with the desired RAID type. You can also specify the stripe size with the -I argument. Note You can set the default mirror segment type to raid1 by changing mirror_segtype_default in the lvm.conf file. The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the volume group my_vg that is 1G in size. Note that you specify the number of stripes just as you do for an LVM striped volume; the correct number of parity drives is added automatically. The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the volume group my_vg that is 1G in size. After you have created a RAID logical volume with LVM, you can activate, change, remove, display, and use the volume just as you would any other LVM logical volume. When you create RAID10 logical volumes, the background I/O required to initialize the logical volumes with a sync operation can crowd out other I/O operations to LVM devices, such as updates to volume group metadata, particularly when you are creating many RAID logical volumes. This can cause the other LVM operations to slow down. As of Red Hat Enterprise Linux 6.5, you can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations.
The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. The following command creates a 2-way RAID10 array with 3 stripes that is 10G in size with a maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group my_vg . You can also specify minimum and maximum recovery rates for a RAID scrubbing operation. For information on RAID scrubbing, see Section 5.4.16.10, "Scrubbing a RAID Logical Volume" .
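The following command is an illustrative sketch that combines both throttling options described above; the rates, size, and names are example values only, not recommendations. It creates a 2-way RAID10 array and bounds the initialization I/O between 64 and 128 kiB/sec/device:
lvcreate --type raid10 -i 2 -m 1 -L 10G --minrecoveryrate 64 --maxrecoveryrate 128 -n my_lv my_vg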
|
[
"lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg",
"lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg",
"lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg",
"lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/RAID_volumes
|
Chapter 5. Changing the update approval strategy
|
Chapter 5. Changing the update approval strategy To ensure that the storage system gets updated automatically when a new update is available in the same channel, we recommend keeping the update approval strategy set to Automatic . Changing the update approval strategy to Manual requires manual approval for each upgrade. Procedure Navigate to Operators Installed Operators . Select openshift-storage from the Project drop-down list. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Click the OpenShift Data Foundation operator name. Go to the Subscription tab. Click the pencil icon to change the Update approval . Select the update approval strategy and click Save . Verification steps Verify that the Update approval shows the newly selected approval strategy below it.
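If you prefer the command line, the same setting can be changed by patching the operator's Subscription resource. The following is a hedged sketch only: the subscription name odf-operator is an assumption and might differ in your cluster, so list the subscriptions in the openshift-storage namespace first.
oc get subscriptions -n openshift-storage
oc patch subscription odf-operator -n openshift-storage --type merge -p '{"spec":{"installPlanApproval":"Manual"}}'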
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/updating_openshift_data_foundation/changing-the-update-approval-strategy_rhodf
|
Chapter 2. Planning for operational measurements
|
Chapter 2. Planning for operational measurements The resources that you monitor depend on your business requirements. You can use Ceilometer or collectd to monitor your resources. For more information on Ceilometer measurements, see Section 2.1, "Ceilometer measurements" . For more information on collectd measurements, see Section 2.2, "Collectd measurements" . 2.1. Ceilometer measurements For a full list of Ceilometer measurements, see https://docs.openstack.org/ceilometer/train/admin/telemetry-measurements.html 2.2. Collectd measurements The following measurements are useful collectd metrics: disk interface load memory tcpconns 2.3. Planning for data storage Gnocchi stores a collection of data points, where each data point is an aggregate. The storage format is compressed using different techniques. As a result, to calculate the size of a time-series database, you must estimate the size based on the worst-case scenario. Warning The use of Red Hat OpenStack Platform (RHOSP) Object Storage (swift) for time series database (Gnocchi) storage is only supported for small and non-production environments. Procedure Calculate the number of data points: number of points = timespan / granularity For example, if you want to retain a year of data with one-minute resolution, use the formula: number of data points = (365 days X 24 hours X 60 minutes) / 1 minute number of data points = 525600 Calculate the size of the time-series database: size in bytes = number of data points X 8 bytes If you apply this formula to the example, the result is 4.1 MB: size in bytes = 525600 points X 8 bytes = 4204800 bytes = 4.1 MB This value is an estimated storage requirement for a single aggregated time-series database. If your archive policy uses multiple aggregation methods (min, max, mean, sum, std, count), multiply this value by the number of aggregation methods you use. Additional resources Section 1.3.1, "Archive policies: Storing both short and long-term data in a time-series database" Section 2.4, "Planning and managing archive policies" 2.4. Planning and managing archive policies An archive policy defines how you aggregate the metrics and for how long you store the metrics in the time-series database. An archive policy is defined as the number of points over a timespan. If your archive policy defines a policy of 10 points with a granularity of 1 second, the time series archive keeps up to 10 seconds, each representing an aggregation over 1 second. This means that the time series retains, at a maximum, 10 seconds of data between the more recent point and the older point. The archive policy also defines the aggregate method to use. The default is set to the parameter default_aggregation_methods , where the default values are set to mean , min , max , sum , std , count . So, depending on the use case, the archive policy and the granularity can vary. To plan an archive policy, ensure that you are familiar with the following concepts: Metrics. For more information, see Section 2.4.1, "Metrics" . Measures. For more information, see Section 2.4.2, "Creating custom measures" . Aggregation. For more information, see Section 2.4.4, "Calculating the size of a time-series aggregate" . Metricd workers. For more information, see Section 2.4.5, "Metricd workers" . To create and manage an archive policy, complete the following tasks: Create an archive policy. For more information, see Section 2.4.6, "Creating an archive policy" . Manage an archive policy. For more information, see Section 2.4.7, "Managing archive policies" .
Create an archive policy rule. For more information, see Section 2.4.8, "Creating an archive policy rule" . 2.4.1. Metrics Gnocchi provides an object type called metric . A metric is anything that you can measure, for example, the CPU usage of a server, the temperature of a room, or the number of bytes sent by a network interface. A metric has the following properties: A UUID to identify it A name The archive policy used to store and aggregate the measures Additional resources For terminology definitions, see Gnocchi Metric-as-a-Service terminology . 2.4.1.1. Creating a metric Procedure Create a resource. Replace <resource_name> with the name of the resource: Create the metric. Replace <resource_name> with the name of the resource and <metric_name> with the name of the metric: When you create the metric, the archive policy attribute is fixed and unchangeable. You can change the definition of the archive policy through the archive_policy endpoint. 2.4.2. Creating custom measures A measure is an incoming datapoint tuple that the API sends to Gnocchi. It is composed of a timestamp and a value. You can create your own custom measures. Procedure Create a custom measure: 2.4.3. Default archive policies By default, Gnocchi has the following archive policies: low 5 minutes granularity over 30 days aggregation methods used: default_aggregation_methods maximum estimated size per metric: 406 KiB medium 1 minute granularity over 7 days 1 hour granularity over 365 days aggregation methods used: default_aggregation_methods maximum estimated size per metric: 887 KiB high 1 second granularity over 1 hour 1 minute granularity over 1 week 1 hour granularity over 1 year aggregation methods used: default_aggregation_methods maximum estimated size per metric: 1 057 KiB bool 1 second granularity over 1 year aggregation methods used: last maximum optimistic size per metric: 1 539 KiB maximum pessimistic size per metric: 277 172 KiB 2.4.4. Calculating the size of a time-series aggregate Gnocchi stores a collection of data points, where each point is an aggregate. The storage format is compressed using different techniques. As a result, you must estimate the size of a time series based on a worst-case scenario, as shown in the following example. Procedure Use this formula to calculate the number of points: number of points = timespan / granularity For example, if you want to keep a year of data with one-minute resolution: number of points = (365 days X 24 hours X 60 minutes) / 1 minute number of points = 525600 To calculate the point size in bytes, use this formula: size in bytes = number of points X 8 bytes size in bytes = 525600 points X 8 bytes = 4204800 bytes = 4.1 MB This value is an estimated storage requirement for a single aggregated time-series. If your archive policy uses multiple aggregation methods - min, max, mean, sum, std, count - multiply this value by the number of aggregation methods you use. 2.4.5. Metricd workers You can use the metricd daemon to process measures, create aggregates, store measures in aggregate storage and delete metrics. The metricd daemon is responsible for most CPU usage and I/O jobs in Gnocchi. The archive policy of each metric determines how fast the metricd daemon performs. Metricd checks the incoming storage for new measures periodically. To configure the delay between each check, you can use the [metricd]metric_processing_delay configuration option. 2.4.6. Creating an archive policy Procedure Create an archive policy.
Replace <archive-policy-name> with the name of the policy and <aggregation-method> with the method of aggregation. Note <definition> is the policy definition. Separate multiple attributes with a comma (,). Separate the name and value of the archive policy definition with a colon (:). 2.4.7. Managing archive policies To delete an archive policy: To view all archive policies: To view the details of an archive policy: 2.4.8. Creating an archive policy rule An archive policy rule defines a mapping between a metric and an archive policy. This gives users the ability to predefine rules so an archive policy is assigned to metrics based on a matched pattern. Procedure Create an archive policy rule. Replace <rule-name> with the name of the rule and <archive-policy-name> with the name of the archive policy: 2.5. Verifying the Red Hat OpenStack Platform deployment You can use the openstack metric command to verify a successful deployment. Procedure Verify the deployment:
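As a concrete, hedged example of the commands covered in this section, the following creates and then lists an archive policy that roughly matches the default low policy (5-minute granularity retained for 30 days); the policy name and the single aggregation method are illustrative choices only. Applying the formula from Section 2.4.4, such a policy stores about 30 days X 24 hours X 12 points per hour = 8640 points, or roughly 69 KB per aggregation method.
openstack metric archive policy create my-low-policy --definition granularity:5m,timespan:30d --aggregation-method mean
openstack metric archive policy list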
|
[
"openstack metric resource create <resource_name>",
"openstack metric metric create -r <resource_name> <metric_name>",
"openstack metric measures add -m <MEASURE1> -m <MEASURE2> .. -r <RESOURCE_NAME> <METRIC_NAME>",
"openstack metric archive policy create <archive-policy-name> --definition <definition> --aggregation-method <aggregation-method>",
"openstack metric archive policy delete <archive-policy-name>",
"openstack metric archive policy list",
"openstack metric archive-policy show <archive-policy-name>",
"openstack metric archive-policy-rule create <rule-name> / --archive-policy-name <archive-policy-name>",
"(overcloud) [stack@undercloud-0 ~]USD openstack metric status +-----------------------------------------------------+-------+ | Field | Value | +-----------------------------------------------------+-------+ | storage/number of metric having measures to process | 0 | | storage/total number of measures to process | 0 | +-----------------------------------------------------+-------+"
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/operational_measurements/planning-for-operational-measurements_assembly
|
Chapter 32. Customizing language settings
|
Chapter 32. Customizing language settings You can change the language on the Business Central Settings page. Business Central supports the following languages: English Spanish French Japanese The default language is English. Procedure In Business Central, select the Admin icon in the top-right corner of the screen and select Languages . The Language Selector window opens. Select the desired language from the Language list. Click Ok .
| null |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/managing-business-central-languages-proc
|
Chapter 8. Summary
|
Chapter 8. Summary This document has provided only a general introduction to security for Red Hat Ceph Storage. Contact the Red Hat Ceph Storage consulting team for additional help.
| null |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/data_security_and_hardening_guide/con-sec-summay-sec
|
Chapter 9. Subscriptions
|
Chapter 9. Subscriptions To install Red Hat OpenStack Services on OpenShift (RHOSO), you must register all systems in the RHOSO environment with Red Hat Subscription Manager, and subscribe to the required channels. For more information about Red Hat OpenStack Services on OpenShift subscriptions, see the Red Hat OpenStack Services on OpenShift FAQ .
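The registration itself is typically performed with the subscription-manager utility. The following is a minimal sketch only; the pool ID and repository ID are placeholders, because the exact pools and RHOSO repositories depend on your subscription and release:
subscription-manager register --username <user_name> --password <password>
subscription-manager attach --pool=<pool_id>
subscription-manager repos --enable=<rhoso_repository_id>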
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/planning_your_deployment/ref_subscriptions_planning
|
10.5.9.5. MinSpareThreads and MaxSpareThreads
|
10.5.9.5. MinSpareThreads and MaxSpareThreads These values are only used with the worker MPM. They adjust how the Apache HTTP Server dynamically adapts to the perceived load by maintaining an appropriate number of spare server threads based on the number of incoming requests. The server checks the number of server threads waiting for a request and kills some if there are more than MaxSpareThreads or creates some if the number of server threads is less than MinSpareThreads . The default MinSpareThreads value is 25 ; the default MaxSpareThreads value is 75 . These default settings should be appropriate for most situations. The value for MaxSpareThreads must be greater than or equal to the sum of MinSpareThreads and ThreadsPerChild ; otherwise, the Apache HTTP Server automatically corrects it.
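A minimal worker MPM stanza for httpd.conf is shown below as an illustration only. The values are the defaults described above rather than tuning recommendations, and they satisfy the rule that MaxSpareThreads is at least the sum of MinSpareThreads and ThreadsPerChild :
<IfModule worker.c>
ThreadsPerChild 25
MinSpareThreads 25
MaxSpareThreads 75
</IfModule>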
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s3-apache-minmaxsparethreads
|
Making open source more inclusive
|
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/replacing_devices/making-open-source-more-inclusive
|
probe::nfs.fop.aio_read
|
probe::nfs.fop.aio_read Name probe::nfs.fop.aio_read - NFS client aio_read file operation Synopsis nfs.fop.aio_read Values ino inode number cache_time when we started read-caching this inode file_name file name buf the address of buf in user space dev device identifier pos current position of file attrtimeo how long the cached information is assumed to be valid. We need to revalidate the cached attrs for this inode if jiffies - read_cache_jiffies > attrtimeo. count read bytes parent_name parent dir name cache_valid cache related bit mask flag
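The following one-line script is a hedged sketch of how this probe might be used; it assumes the systemtap package and matching kernel debuginfo are installed, and it simply prints a few of the values listed above for each NFS aio_read file operation:
stap -e 'probe nfs.fop.aio_read { printf("%s read %d bytes at pos %d from %s\n", execname(), count, pos, file_name) }'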
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-nfs-fop-aio-read
|
5.9.2.2. EXT3
|
5.9.2.2. EXT3 The ext3 file system builds upon ext2 by adding journaling capabilities to the already-proven ext2 codebase. As a journaling file system, ext3 always keeps the file system in a consistent state, eliminating the need for lengthy file system integrity checks. This is accomplished by writing all file system changes to an on-disk journal, which is then flushed on a regular basis. After an unexpected system event (such as a power outage or system crash), the only operation that needs to take place prior to making the file system available is to process the contents of the journal; in most cases this takes approximately one second. Because ext3's on-disk data format is based on ext2, it is possible to access an ext3 file system on any system capable of reading and writing an ext2 file system (without the benefit of journaling, however). This can be a sizable benefit in organizations where some systems are using ext3 and some are still using ext2.
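Because ext3 is ext2 plus a journal, an existing ext2 file system can be converted in place. The commands below are a hedged sketch with placeholder device and mount point names; the first adds a journal to an ext2 file system (making it ext3), and the second shows mounting the same file system as ext2 on a system without ext3 support, as described above:
tune2fs -j /dev/<device>
mount -t ext2 /dev/<device> /mnt/<mount_point>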
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-fs-ext3
|
Chapter 3. Getting started
|
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites You must complete the installation procedure for your environment. You must have an AMQP 1.0 message broker listening for connections on interface localhost and port 5672 . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named examples . For more information, see Creating a queue . 3.2. Running Hello World on Red Hat Enterprise Linux The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Change to the examples directory and run the helloworld.py example. USD cd /usr/share/proton/examples/python/ USD python helloworld.py Hello World! 3.3. Running Hello World on Microsoft Windows The Hello World example creates a connection to the broker, sends a message containing a greeting to the examples queue, and receives it back. On success, it prints the received message to the console. Download and run the Hello World example. > curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!
|
[
"cd /usr/share/proton/examples/python/ python helloworld.py Hello World!",
"> curl -o helloworld.py https://raw.githubusercontent.com/apache/qpid-proton/master/python/examples/helloworld.py > python helloworld.py Hello World!"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_python_client/getting_started
|
Chapter 29. Migrating from an LDAP Directory to IdM
|
Chapter 29. Migrating from an LDAP Directory to IdM When an infrastructure has previously deployed an LDAP server for authentication and identity lookups, it is possible to migrate the user data, including passwords, to a new Identity Management instance, without losing user or password data. Identity Management has migration tools to help move directory data and only requires minimal updates to clients. However, the migration process assumes a simple deployment scenario (one LDAP directory namespace to one IdM namespace). For more complex environments, such as ones with multiple namespaces or custom schema, contact Red Hat support services for assistance. 29.1. An Overview of LDAP to IdM Migration The actual migration part of moving from an LDAP server to Identity Management - the process of moving the data from one server to the other - is fairly straightforward. The process is simple: move data, move passwords, and move clients. The crucial part of migration is not data migration; it is deciding how clients are going to be configured to use Identity Management. For each client in the infrastructure, you need to decide what services (such as Kerberos and SSSD) are being used and what services can be used in the final, IdM deployment. A secondary, but significant, consideration is planning how to migrate passwords. Identity Management requires Kerberos hashes for every user account in addition to passwords. Some of the considerations and migration paths for passwords are covered in Section 29.1.2, "Planning Password Migration" . 29.1.1. Planning the Client Configuration Identity Management can support a number of different client configurations, with varying degrees of functionality, flexibility, and security. Decide which configuration is best for each individual client based on its operating system, functional area (such as development machines, production servers, or user laptops), and your IT maintenance priorities. Important The different client configurations are not mutually exclusive . Most environments will have a mix of ways that clients connect to the IdM domain. Administrators must decide which scenario is best for each individual client. 29.1.1.1. Initial Client Configuration (Pre-Migration) Before deciding where you want to go with the client configuration in Identity Management, first establish where you are before the migration. The initial state for almost all LDAP deployments that will be migrated is that there is an LDAP service providing identity and authentication services. Figure 29.1. Basic LDAP Directory and Client Configuration Linux and Unix clients use PAM_LDAP and NSS_LDAP libraries to connect directly to the LDAP services. These libraries allow clients to retrieve user information from the LDAP directory as if the data were stored in /etc/passwd or /etc/shadow . (In real life, the infrastructure may be more complex if a client uses LDAP for identity lookups and Kerberos for authentication or other configurations.) There are structural differences between an LDAP directory and an IdM server, particularly in schema support and the structure of the directory tree. (For more background on those differences, see Section 1.1, "IdM v. LDAP: A More Focused Type of Service" .)
While those differences may impact data (especially with the directory tree, which affects entry names), they have little impact on the client configuration, so they have little impact on migrating clients to Identity Management. 29.1.1.2. Recommended Configuration for Red Hat Enterprise Linux Clients Red Hat Enterprise Linux has a service called the System Security Services Daemon (SSSD). SSSD uses special PAM and NSS libraries ( pam_sss and nss_sss , respectively) which allow SSSD to be integrated very closely with Identity Management and leverage the full authentication and identity features in Identity Management. SSSD has a number of useful features, like caching identity information so that users can log in even if the connection is lost to the central server; these are described in the Red Hat Enterprise Linux Deployment Guide . Unlike generic LDAP directory services (using pam_ldap and nss_ldap ), SSSD establishes relationships between identity and authentication information by defining domains . A domain in SSSD defines four backend functions: authentication, identity lookups, access, and password changes. The SSSD domain is then configured to use a provider to supply the information for any one (or all) of those four functions. An identity provider is always required in the domain configuration. The other three providers are optional; if an authentication, access, or password provider is not defined, then the identity provider is used for that function. SSSD can use Identity Management for all of its backend functions. This is the ideal configuration because it provides the full range of Identity Management functionality, unlike generic LDAP identity providers or Kerberos authentication. For example, during daily operation, SSSD enforces host-based access control rules and security features in Identity Management. Note During the migration process from an LDAP directory to Identity Management, SSSD can seamlessly migrate user passwords without additional user interaction. Figure 29.2. Clients and SSSD with an IdM Backend The ipa-client-install script automatically configures SSSD to use IdM for all four of its backend services, so Red Hat Enterprise Linux clients are set up with the recommended configuration by default. Note This client configuration is only supported for Red Hat Enterprise Linux 6.1 and later and Red Hat Enterprise Linux 5.7 and later, which support the latest versions of SSSD and ipa-client . Older versions of Red Hat Enterprise Linux can be configured as described in Section 29.1.1.3, "Alternative Supported Configuration" . 29.1.1.3. Alternative Supported Configuration Unix and Linux systems such as Mac, Solaris, HP-UX, AIX, and Scientific Linux support all of the services that IdM manages but do not use SSSD. Likewise, older Red Hat Enterprise Linux versions (6.1 and 5.6) support SSSD but have an older version, which does not support IdM as an identity provider.
When it is not possible to use a modern version of SSSD on a system, then clients can be configured to connect to the IdM server as if it were an LDAP directory service for identity lookups (using nss_ldap ) and to IdM as if it were a regular Kerberos KDC (using pam_krb5 ). Figure 29.3. Clients and IdM with LDAP and Kerberos If a Red Hat Enterprise Linux client is using an older version of SSSD, SSSD can still be configured to use the IdM server as its identity provider and its Kerberos authentication domain; this is described in the SSSD configuration section of the Red Hat Enterprise Linux Deployment Guide . Any IdM domain client can be configured to use nss_ldap and pam_krb5 to connect to the IdM server. For some maintenance situations and IT structures, a scenario that fits the lowest common denominator may be required, using LDAP for both identity and authentication ( nss_ldap and pam_ldap ). However, it is generally best practice to use the most secure configuration possible for a client (meaning SSSD and Kerberos or LDAP and Kerberos). 29.1.2. Planning Password Migration Probably the most visible issue that can impact LDAP-to-Identity Management migration is migrating user passwords. Identity Management (by default) uses Kerberos for authentication and requires that each user has Kerberos hashes stored in the Identity Management Directory Server in addition to the standard user passwords. To generate these hashes, the user password needs to be available to the IdM server in cleartext. This is the case when the user is created in Identity Management. However, when the user is migrated from an LDAP directory, the associated user password is already hashed, so the corresponding Kerberos key cannot be generated. Important Users cannot authenticate to the IdM domain or access IdM resources until they have Kerberos hashes. If a user does not have a Kerberos hash [10] , that user cannot log into the IdM domain even if he has a user account. There are three options for migrating passwords: forcing a password change, using a web page, and using SSSD. Migrating users from an existing system provides a smoother transition but also requires parallel management of LDAP directory and IdM during the migration and transition process. If you do not preserve passwords, the migration can be performed more quickly but it requires more manual work by administrators and users. 29.1.2.1. Method 1: Using Temporary Passwords and Requiring a Change When passwords are changed in Identity Management, they will be created with the appropriate Kerberos hashes. So one alternative for administrators is to force users to change their passwords by resetting all user passwords when user accounts are migrated. (This can also be done simply by re-creating the LDAP directory accounts in IdM, which automatically creates accounts with the appropriate keys.) The new users are assigned a temporary password which they change at the first login. No passwords are migrated. 29.1.2.2. Method 2: Using the Migration Web Page When it is running in migration mode, Identity Management has a special web page in its web UI that will capture a cleartext password and create the appropriate Kerberos hash. Administrators could tell users to authenticate once to this web page, which would properly update their user accounts with their password and corresponding Kerberos hash, without requiring password changes. 29.1.2.3. 
Method 3: Using SSSD (Recommended) SSSD can work with IdM to mitigate the user impact on migrating by generating the required user keys. For deployments with a lot of users or where users shouldn't be burdened with password changes, this is the best scenario. A user tries to log into a machine with SSSD. SSSD attempts to perform Kerberos authentication against the IdM server. Even though the user exists in the system, the authentication will fail with the error key type is not supported because the Kerberos hashes do not yet exist. SSSD then performs a plaintext LDAP bind over a secure connection. IdM intercepts this bind request. If the user has a Kerberos principal but no Kerberos hashes, then the IdM identity provider generates the hashes and stores them in the user entry. If authentication is successful, SSSD disconnects from IdM and tries Kerberos authentication again. This time, the request succeeds because the hash exists in the entry. That entire process is entirely transparent to the user; as far as users know, they simply log into a client service and it works as normal. 29.1.2.4. Migrating Cleartext LDAP Passwords Although in most deployments LDAP passwords are stored encrypted, there may be some users or some environments that use cleartext passwords for user entries. When users are migrated from the LDAP server to the IdM server, their cleartext passwords are not migrated over. Identity Management does not allow cleartext passwords. Instead, a Kerberos principal is created for the user, the keytab is set to true, and the password is set as expired. This means that Identity Management requires the user to reset the password at the next login. Note If passwords are hashed, the password is successfully migrated through SSSD and the migration web page, as in Section 29.1.2.3, "Method 3: Using SSSD (Recommended)" . 29.1.2.5. Automatically Resetting Passwords That Do Not Meet Requirements If user passwords in the original directory do not meet the password policies defined in Identity Management, then the passwords must be reset after migration. Password resets are done automatically the first time the user attempts to kinit into the IdM domain. 29.1.3. Migration Considerations and Requirements As you plan the migration from an LDAP server to Identity Management, make sure that your LDAP environment is able to work with the Identity Management migration script. 29.1.3.1. LDAP Servers Supported for Migration The migration process from an LDAP server to Identity Management uses a special script, ipa migrate-ds , to perform the migration. This script has certain expectations about the structure of the LDAP directory and LDAP entries in order to work. Migration is supported only for LDAPv3-compliant directory services, which include several common directories: SunONE Directory Server Apache Directory Server OpenLDAP Migration from an LDAP server to Identity Management has been tested with Red Hat Directory Server. Note Migration using the migration script is not supported for Microsoft Active Directory because it is not an LDAPv3-compliant directory. For assistance with migrating from Active Directory, contact Red Hat Professional Services. 29.1.3.2.
Migration Environment Requirements There are many different possible configuration scenarios for both Red Hat Directory Server and Identity Management, and any of those scenarios may affect the migration process. For the example migration procedures in this chapter, these are the assumptions about the environment: A single LDAP directory domain is being migrated to one IdM realm. No consolidation is involved. User passwords are stored as a hash in the LDAP directory that the IdM Directory Server can support. The LDAP directory instance is both the identity store and the authentication method. Client machines are configured to use pam_ldap or nss_ldap to connect to the LDAP server. Entries use only standard LDAP schema. Custom attributes will not be migrated to Identity Management. 29.1.3.3. Migration Tools Identity Management uses a specific command, ipa migrate-ds , to drive the migration process so that LDAP directory data are properly formatted and imported cleanly into the IdM server. The Identity Management server must be configured to run in migration mode, and then the migration script can be used, as sketched in the example commands below. 29.1.3.4. Migration Sequence There are four major steps when migrating to Identity Management, but the order varies slightly depending on whether you want to migrate the server first or the clients first. With a client-based migration, SSSD is used to change the client configuration while an IdM server is configured: Deploy SSSD. Reconfigure clients to connect to the current LDAP server and then fail over to IdM. Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Take the LDAP server offline and allow clients to fail over to Identity Management transparently. With a server migration, the LDAP to Identity Management migration comes first: Install the IdM server. Migrate the user data using the IdM ipa migrate-ds script. This exports the data from the LDAP directory, formats it for the IdM schema, and then imports it into IdM. Optional. Deploy SSSD. Reconfigure clients to connect to IdM. It is not possible to simply replace the LDAP server. The IdM directory tree - and therefore user entry DNs - is different from the previous LDAP directory tree. While it is required that clients be reconfigured, clients do not need to be reconfigured immediately. Updated clients can point to the IdM server while other clients point to the old LDAP directory, allowing a reasonable testing and transition phase after the data are migrated. Note Do not run both an LDAP directory service and the IdM server for very long in parallel. This introduces the risk of user data being inconsistent between the two services. Both processes provide a general migration procedure, but it may not work in every environment. Set up a test LDAP environment and test the migration process before attempting to migrate the real LDAP environment. [10] It is possible to use LDAP authentication in Identity Management instead of Kerberos authentication, which means that Kerberos hashes are not required for users. However, this limits the capabilities of Identity Management and is not recommended.
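As a rough, hedged illustration of the server-first sequence, the following commands show how migration mode is typically enabled and how the migration script is invoked; the LDAP URL and bind DN are placeholder values, and the exact options available depend on your IdM version:

# Enable migration mode on the IdM server (run as an admin with a valid Kerberos ticket)
$ kinit admin
$ ipa config-mod --enable-migration=TRUE

# Pull entries from the old LDAP server; the server URL and bind DN are example values
$ ipa migrate-ds --bind-dn="cn=Directory Manager" ldap://ldap.example.com:389

# After users have regenerated their Kerberos keys, disable migration mode again
$ ipa config-mod --enable-migration=FALSE

Because migrated accounts have no Kerberos keys yet, keep migration mode enabled until users have authenticated once through SSSD or the migration web page, as described in Section 29.1.2, "Planning Password Migration".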
|
[
"https://ipaserver.example.com/ipa/migration",
"[jsmith@server ~]USD kinit Password for [email protected]: Password expired. You must change it now. Enter new password: Enter it again:"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/Migrating_from_a_Directory_Server_to_IPA
|
Chapter 4. Additional toolsets for development
|
Chapter 4. Additional toolsets for development 4.1. Using GCC Toolset 4.1.1. What is GCC Toolset Red Hat Enterprise Linux 9 continues support for GCC Toolset, an Application Stream containing more up-to-date versions of development and performance analysis tools. GCC Toolset is similar to Red Hat Developer Toolset for RHEL 7. GCC Toolset is available as an Application Stream in the form of a software collection in the AppStream repository. GCC Toolset is fully supported under Red Hat Enterprise Linux Subscription Level Agreements, is functionally complete, and is intended for production use. Applications and libraries provided by GCC Toolset do not replace the Red Hat Enterprise Linux system versions, do not override them, and do not automatically become default or preferred choices. Using a framework called software collections, an additional set of developer tools is installed into the /opt/ directory and is explicitly enabled by the user on demand using the scl utility. Unless noted otherwise for specific tools or features, GCC Toolset is available for all architectures supported by Red Hat Enterprise Linux. For information about the length of support, see Red Hat Enterprise Linux Application Streams Life Cycle . 4.1.2. Installing GCC Toolset Installing GCC Toolset on a system installs the main tools and all necessary dependencies. Note that some parts of the toolset are not installed by default and must be installed separately. Procedure To install GCC Toolset version N : 4.1.3. Installing individual packages from GCC Toolset To install only certain tools from GCC Toolset instead of the whole toolset, list the available packages and install the selected ones with the dnf package management tool. This procedure is useful also for packages that are not installed by default with the toolset. Procedure List the packages available in GCC Toolset version N : To install any of these packages: Replace package_name with a space-separated list of packages to install. For example, to install the gcc-toolset-13-annobin-annocheck and gcc-toolset-13-binutils-devel packages: 4.1.4. Uninstalling GCC Toolset To remove GCC Toolset from your system, uninstall it using the dnf package management tool. Procedure To uninstall GCC Toolset version N : 4.1.5. Running a tool from GCC Toolset To run a tool from GCC Toolset, use the scl utility. Procedure To run a tool from GCC Toolset version N : 4.1.6. Running a shell session with GCC Toolset GCC Toolset allows running a shell session where the GCC Toolset tool versions are used instead of system versions of these tools, without explicitly using the scl command. This is useful when you need to interactively start the tools many times, such as when setting up or testing a development setup. Procedure To run a shell session where tool versions from GCC Toolset version N override system versions of these tools: 4.1.7. Additional resources Red Hat Developer Toolset User Guide 4.2. GCC Toolset 12 Learn about information specific to GCC Toolset version 12 and the tools contained in this version. 4.2.1. Tools and versions provided by GCC Toolset 12 GCC Toolset 12 provides the following tools and versions: Table 4.1. Tool versions in GCC Toolset 12 Name Version Description GCC 12.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 11.2 A command-line debugger for programs written in C, C++, and Fortran. binutils 2.38 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. 
dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 11.08 A build security checking tool. 4.2.2. C++ compatibility in GCC Toolset 12 Important The compatibility information presented here applies only to the GCC from GCC Toolset 12. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 12. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 12. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 12. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 12. This is the default language standard setting for GCC Toolset 12, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 These language standards are available in GCC Toolset 12 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable C++20 support, add the command-line option -std=c++20 to your g++ command line. To enable C++23 support, add the command-line option -std=c++23 to your g++ command line. All of the language standards are available in both the standard-compliant variant and with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), the GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.2.3. Specifics of GCC in GCC Toolset 12 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol .
To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.2.4. Specifics of binutils in GCC Toolset 12 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.2.5. Specifics of annobin in GCC Toolset 12 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 12, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.3. GCC Toolset 13 Learn about information specific to GCC Toolset version 13 and the tools contained in this version. 4.3.1. Tools and versions provided by GCC Toolset 13 GCC Toolset 13 provides the following tools and versions: Table 4.2. Tool versions in GCC Toolset 13 Name Version Description GCC 13.2.1 A portable compiler suite with support for C, C++, and Fortran. GDB 12.1 A command-line debugger for programs written in C, C++, and Fortran. binutils 2.40 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 12.32 A build security checking tool. 4.3.2. C++ compatibility in GCC Toolset 13 Important The compatibility information presented here apply only to the GCC from GCC Toolset 13. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 13. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 13. 
Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 13. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 13. This is the default language standard setting for GCC Toolset 13, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 These language standards are available in GCC Toolset 13 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line. To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.3.3. Specifics of GCC in GCC Toolset 13 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.3.4. Specifics of binutils in GCC Toolset 13 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. 
If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.3.5. Specifics of annobin in GCC Toolset 13 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 13, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.4. GCC Toolset 14 Learn about information specific to GCC Toolset version 14 and the tools contained in this version. 4.4.1. Tools and versions provided by GCC Toolset 14 GCC Toolset 14 provides the following tools and versions: Table 4.3. Tool versions in GCC Toolset 14 Name Version Description GCC 14.2.1 A portable compiler suite with support for C, C++, and Fortran. binutils 2.41 A collection of binary tools and other utilities to inspect and manipulate object files and binaries. dwz 0.14 A tool to optimize DWARF debugging information contained in ELF shared libraries and ELF executables for size. annobin 12.70 A build security checking tool. Note In RHEL 9.5, the system GDB was rebased to version 14.2, and GDB is no longer included in GCC Toolset. 4.4.2. C++ compatibility in GCC Toolset 14 Important The compatibility information presented here apply only to the GCC from GCC Toolset 14. The GCC compiler in GCC Toolset can use the following C++ standards: C++14 This language standard is available in GCC Toolset 14. Using the C++14 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 6 or later. C++11 This language standard is available in GCC Toolset 14. Using the C++11 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 5 or later. C++98 This language standard is available in GCC Toolset 14. Binaries, shared libraries and objects built using this standard can be freely mixed regardless of being built with GCC from GCC Toolset, Red Hat Developer Toolset, and RHEL 5, 6, 7 and 8. C++17 This language standard is available in GCC Toolset 14. This is the default language standard setting for GCC Toolset 14, with GNU extensions, equivalent to explicitly using option -std=gnu++17 . 
Using the C++17 language version is supported when all C++ objects compiled with the respective flag have been built using GCC version 10 or later. C++20 and C++23 These language standards are available in GCC Toolset 14 only as an experimental, unstable, and unsupported capability. Additionally, compatibility of objects, binary files, and libraries built using this standard cannot be guaranteed. To enable the C++20 standard, add the command-line option -std=c++20 to your g++ command line. To enable the C++23 standard, add the command-line option -std=c++23 to your g++ command line. All of the language standards are available in both the standard compliant variant or with GNU extensions. When mixing objects built with GCC Toolset with those built with the RHEL toolchain (particularly .o or .a files), GCC Toolset toolchain should be used for any linkage. This ensures any newer library features provided only by GCC Toolset are resolved at link time. 4.4.3. Specifics of GCC in GCC Toolset 14 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of GCC . 4.4.4. Specifics of binutils in GCC Toolset 14 Static linking of libraries Certain more recent library features are statically linked into applications built with GCC Toolset to support execution on multiple versions of Red Hat Enterprise Linux. This creates an additional minor security risk because standard Red Hat Enterprise Linux errata do not change this code. If the need arises for developers to rebuild their applications due to this risk, Red Hat will communicate this using a security erratum. Important Because of this additional security risk, developers are strongly advised not to statically link their entire application for the same reasons. Specify libraries after object files when linking In GCC Toolset, libraries are linked using linker scripts which might specify some symbols through static archives. This is required to ensure compatibility with multiple versions of Red Hat Enterprise Linux. 
However, the linker scripts use the names of the respective shared object files. As a consequence, the linker uses different symbol handling rules than expected, and does not recognize symbols required by object files when the option adding the library is specified before options specifying the object files: Using a library from GCC Toolset in this manner results in the linker error message undefined reference to symbol . To prevent this problem, follow the standard linking practice, and specify the option adding the library after the options specifying the object files: Note that this recommendation also applies when using the base Red Hat Enterprise Linux version of binutils . 4.4.5. Specifics of annobin in GCC Toolset 14 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 14, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64 4.5. Using the GCC Toolset container image Only the two latest GCC Toolset container images are supported. Container images of earlier GCC Toolset versions are unsupported. The GCC Toolset 13 and GCC Toolset 14 components are available in the GCC Toolset 13 Toolchain and GCC Toolset 14 Toolchain container images, respectively. The GCC Toolset container image is based on the rhel9 base image and is available for all architectures supported by RHEL 9: AMD and Intel 64-bit architectures The 64-bit ARM architecture IBM Power Systems, Little Endian 64-bit IBM Z 4.5.1. GCC Toolset container image contents Tools versions provided in the GCC Toolset 14 container image match the GCC Toolset 14 components versions . The GCC Toolset 14 Toolchain contents The rhel9/gcc-toolset-14-toolchain container image consists of the following components: Component Package gcc gcc-toolset-14-gcc g++ gcc-toolset-14-gcc-c++ gfortran gcc-toolset-14-gcc-gfortran 4.5.2. Accessing and running the GCC Toolset container image The following section describes how to access and run the GCC Toolset container image. Prerequisites Podman is installed. Procedure Access the Red Hat Container Registry using your Customer Portal credentials: Pull the container image you require by running a relevant command as root: Replace toolset_version with the GCC Toolset version, for example 14 . Note You can also set up your system to work with containers as a non-root user. For details, see Setting up rootless containers . Optional: Check that pulling was successful by running a command that lists all container images on your local system: Run a container by launching a bash shell inside a container: The -i option creates an interactive session; without this option the shell opens and instantly exits. The -t option opens a terminal session; without this option you cannot type anything to the shell. Additional resources Building, running, and managing Linux containers on RHEL 9 Understanding root inside and outside a container (Red Hat Blog article) GCC Toolset container entries in the Red Hat Ecosystem Catalog 4.5.3. Example: Using the GCC Toolset 14 Toolchain container image This example shows how to pull and start using the GCC Toolset 14 Toolchain container image. Prerequisites Podman is installed. 
Procedure Access the Red Hat Container Registry using your Customer Portal credentials: Pull the container image as root: Launch the container image with an interactive shell as root: Run the GCC Toolset tools as expected. For example, to verify the gcc compiler version, run: To list all packages provided in the container, run: 4.6. Compiler toolsets RHEL 9 provides the following compiler toolsets as Application Streams: LLVM Toolset provides the LLVM compiler infrastructure framework, the Clang compiler for the C and C++ languages, the LLDB debugger, and related tools for code analysis. Rust Toolset provides the Rust programming language compiler rustc , the cargo build tool and dependency manager, the cargo-vendor plugin, and required libraries. Go Toolset provides the Go programming language tools and libraries. Go is alternatively known as golang . For more details and information about usage, see the compiler toolsets user guides on the Red Hat Developer Tools page. 4.7. The Annobin project The Annobin project is an implementation of the Watermark specification project. The Watermark specification project intends to add markers to Executable and Linkable Format (ELF) objects to determine their properties. The Annobin project consists of the annobin plugin and the annocheck program. The annobin plugin scans the GNU Compiler Collection (GCC) command line, the compilation state, and the compilation process, and generates the ELF notes. The ELF notes record how the binary was built and provide information for the annocheck program to perform security hardening checks. The security hardening checker is part of the annocheck program and is enabled by default. It checks the binary files to determine whether the program was built with necessary security hardening options and compiled correctly. annocheck is able to recursively scan directories, archives, and RPM packages for ELF object files. Note The files must be in ELF format. annocheck does not handle any other binary file types. The following section describes how to: Use the annobin plugin Use the annocheck program Remove redundant annobin notes 4.7.1. Using the annobin plugin The following section describes how to: Enable the annobin plugin Pass options to the annobin plugin 4.7.1.1. Enabling the annobin plugin The following section describes how to enable the annobin plugin via gcc and via clang . Procedure To enable the annobin plugin with gcc , use: If gcc does not find the annobin plugin, use: Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin . To find the directory containing the annobin plugin, use: To enable the annobin plugin with clang , use: Replace /path/to/directory/containing/annobin/ with the absolute path to the directory that contains annobin . 4.7.1.2. Passing options to the annobin plugin The following section describes how to pass options to the annobin plugin via gcc and via clang . Procedure To pass options to the annobin plugin with gcc , use: Replace option with the annobin command line arguments and replace file-name with the name of the file. Example To display additional details about what annobin is doing, use: Replace file-name with the name of the file. To pass options to the annobin plugin with clang , use: Replace option with the annobin command line arguments and replace /path/to/directory/containing/annobin/ with the absolute path to the directory containing annobin .
Example To display additional details about what annobin is doing, use: Replace file-name with the name of the file. 4.7.2. Using the annocheck program The following section describes how to use annocheck to examine: Files Directories RPM packages annocheck extra tools Note annocheck recursively scans directories, archives, and RPM packages for ELF object files. The files have to be in the ELF format. annocheck does not handle any other binary file types. 4.7.2.1. Using annocheck to examine files The following section describes how to examine ELF files using annocheck . Procedure To examine a file, use: Replace file-name with the name of a file. Note The files must be in ELF format. annocheck does not handle any other binary file types. annocheck processes static libraries that contain ELF object files. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.7.2.2. Using annocheck to examine directories The following section describes how to examine ELF files in a directory using annocheck . Procedure To scan a directory, use: Replace directory-name with the name of a directory. annocheck automatically examines the contents of the directory, its sub-directories, and any archives and RPM packages within the directory. Note annocheck only looks for ELF files. Other file types are ignored. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.7.2.3. Using annocheck to examine RPM packages The following section describes how to examine ELF files in an RPM package using annocheck . Procedure To scan an RPM package, use: Replace rpm-package-name with the name of an RPM package. annocheck recursively scans all the ELF files inside the RPM package. Note annocheck only looks for ELF files. Other file types are ignored. To scan an RPM package with provided debug info RPM, use: Replace rpm-package-name with the name of an RPM package, and debuginfo-rpm with the name of a debug info RPM associated with the binary RPM. Additional information For more information about annocheck and possible command line options, see the annocheck man page on your system. 4.7.2.4. Using annocheck extra tools annocheck includes multiple tools for examining binary files. You can enable these tools with command-line options. The following section describes how to enable the: built-by tool notes tool section-size tool You can enable multiple tools at the same time. Note The hardening checker is enabled by default. 4.7.2.4.1. Enabling the built-by tool You can use the annocheck built-by tool to find the name of the compiler that built the binary file. Procedure To enable the built-by tool, use: Additional information For more information about the built-by tool, see the --help command-line option. 4.7.2.4.2. Enabling the notes tool You can use the annocheck notes tool to display the notes stored inside a binary file created by the annobin plugin. Procedure To enable the notes tool, use: The notes are displayed in a sequence sorted by the address range. Additional information For more information about the notes tool, see the --help command-line option. 4.7.2.4.3. Enabling the section-size tool You can use the annocheck section-size tool to display the size of the named sections. Procedure To enable the section-size tool, use: Replace name with the name of the named section. The output is restricted to specific sections.
A cumulative result is produced at the end. Additional information For more information about the section-size tool, see the --help command-line option. 4.7.2.4.4. Hardening checker basics The hardening checker is enabled by default. You can disable the hardening checker with the --disable-hardened command-line option. 4.7.2.4.4.1. Hardening checker options The annocheck program checks the following options: Lazy binding is disabled using the -z now linker option. The program does not have a stack in an executable region of memory. The relocations for the GOT table are set to read only. No program segment has all three of the read, write and execute permission bits set. There are no relocations against executable code. The runpath information for locating shared libraries at runtime includes only directories rooted at /usr. The program was compiled with annobin notes enabled. The program was compiled with the -fstack-protector-strong option enabled. The program was compiled with -D_FORTIFY_SOURCE=2 . The program was compiled with -D_GLIBCXX_ASSERTIONS . The program was compiled with -fexceptions enabled. The program was compiled with -fstack-clash-protection enabled. The program was compiled at -O2 or higher. The program does not have any relocations held in a writeable section. Dynamic executables have a dynamic segment. Shared libraries were compiled with -fPIC or -fPIE . Dynamic executables were compiled with -fPIE and linked with -pie . If available, the -fcf-protection=full option was used. If available, the -mbranch-protection option was used. If available, the -mstackrealign option was used. 4.7.2.4.4.2. Disabling the hardening checker The following section describes how to disable the hardening checker. Procedure To scan the notes in a file without the hardening checker, use: Replace file-name with the name of a file. 4.7.3. Removing redundant annobin notes Using annobin increases the size of binaries. To reduce the size of the binaries compiled with annobin, you can remove redundant annobin notes. To remove the redundant annobin notes, use the objcopy program, which is a part of the binutils package. Procedure To remove the redundant annobin notes, use: Replace file-name with the name of the file. 4.7.4. Specifics of annobin in GCC Toolset 12 Under some circumstances, due to a synchronization issue between annobin and gcc in GCC Toolset 12, your compilation can fail with an error message that looks similar to the following: To work around the problem, create a symbolic link in the plugin directory from the annobin.so file to the gcc-annobin.so file: Replace architecture with the architecture you use in your system: aarch64 i686 ppc64le s390x x86_64
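As a hedged end-to-end sketch that ties the preceding sections together, the following commands install a toolset, compile with the annobin plugin and common hardening flags, check the result with annocheck, and merge redundant notes; the toolset version, source file name, and flags are illustrative only:

# Install GCC Toolset (version 13 is an example; any supported version works the same way)
$ dnf install gcc-toolset-13

# Compile with the annobin plugin and typical hardening options enabled
$ scl enable gcc-toolset-13 'gcc -fplugin=annobin -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -o hello hello.c'

# Run the hardening checker (enabled by default) against the resulting binary
$ annocheck hello

# Optionally reduce the binary size by merging redundant annobin notes
$ objcopy --merge-notes hello

The hardening flags shown are only a subset of the options the checker looks for; a production build would normally inherit the distribution's default build flags rather than a hand-written command line.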
|
[
"dnf install gcc-toolset- N",
"dnf list available gcc-toolset- N -\\*",
"dnf install package_name",
"dnf install gcc-toolset-13-annobin-annocheck gcc-toolset-13-binutils-devel",
"dnf remove gcc-toolset- N \\*",
"scl enable gcc-toolset- N tool",
"scl enable gcc-toolset- N bash",
"scl enable gcc-toolset-12 'gcc -lsomelib objfile.o'",
"scl enable gcc-toolset-12 'gcc objfile.o -lsomelib'",
"scl enable gcc-toolset-12 'ld -lsomelib objfile.o'",
"scl enable gcc-toolset-12 'ld objfile.o -lsomelib'",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so",
"scl enable gcc-toolset-13 'gcc -lsomelib objfile.o'",
"scl enable gcc-toolset-13 'gcc objfile.o -lsomelib'",
"scl enable gcc-toolset-13 'ld -lsomelib objfile.o'",
"scl enable gcc-toolset-13 'ld objfile.o -lsomelib'",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-13/root/usr/lib/gcc/ architecture -linux-gnu/13/plugin ln -s annobin.so gcc-annobin.so",
"scl enable gcc-toolset-14 'gcc -lsomelib objfile.o'",
"scl enable gcc-toolset-14 'gcc objfile.o -lsomelib'",
"scl enable gcc-toolset-14 'ld -lsomelib objfile.o'",
"scl enable gcc-toolset-14 'ld objfile.o -lsomelib'",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-14/root/usr/lib/gcc/ architecture -linux-gnu/14/plugin ln -s annobin.so gcc-annobin.so",
"podman login registry.redhat.io Username: username Password: ********",
"podman pull registry.redhat.io/rhel8/gcc-toolset- <toolset_version> -toolchain",
"podman images",
"podman run -it image_name /bin/bash",
"podman login registry.redhat.io Username: username Password: ********",
"podman pull registry.redhat.io/rhel9/gcc-toolset-14-toolchain",
"podman run -it registry.redhat.io/rhel9/gcc-toolset-14-toolchain /bin/bash",
"bash-4.4USD gcc -v gcc version 14.2.1 20240801 (Red Hat 14.2.1-1) (GCC)",
"bash-4.4USD rpm -qa",
"gcc -fplugin=annobin",
"gcc -iplugindir= /path/to/directory/containing/annobin/",
"gcc --print-file-name=plugin",
"clang -fplugin= /path/to/directory/containing/annobin/",
"gcc -fplugin=annobin -fplugin-arg-annobin- option file-name",
"gcc -fplugin=annobin -fplugin-arg-annobin-verbose file-name",
"clang -fplugin= /path/to/directory/containing/annobin/ -Xclang -plugin-arg-annobin -Xclang option file-name",
"clang -fplugin=/usr/lib64/clang/10/lib/annobin.so -Xclang -plugin-arg-annobin -Xclang verbose file-name",
"annocheck file-name",
"annocheck directory-name",
"annocheck rpm-package-name",
"annocheck rpm-package-name --debug-rpm debuginfo-rpm",
"annocheck --enable-built-by",
"annocheck --enable-notes",
"annocheck --section-size= name",
"annocheck --enable-notes --disable-hardened file-name",
"objcopy --merge-notes file-name",
"cc1: fatal error: inaccessible plugin file opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin/gcc-annobin.so expanded from short plugin name gcc-annobin: No such file or directory",
"cd /opt/rh/gcc-toolset-12/root/usr/lib/gcc/ architecture -linux-gnu/12/plugin ln -s annobin.so gcc-annobin.so"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/developing_c_and_cpp_applications_in_rhel_9/assembly_additional-toolsets-for-development-rhel-9_developing-applications
|
Chapter 1. Configuring and managing basic network access
|
Chapter 1. Configuring and managing basic network access NetworkManager creates a connection profile for each Ethernet adapter that is installed in a host. By default, this profile uses DHCP for both IPv4 and IPv6 connections. Modify this automatically-created profile or add a new one in the following cases: The network requires custom settings, such as a static IP address configuration. You require multiple profiles because the host roams among different networks. Red Hat Enterprise Linux provides administrators different options to configure Ethernet connections. For example: Use nmcli to configure connections on the command line. Use nmtui to configure connections in a text-based user interface. Use the GNOME Settings menu or nm-connection-editor application to configure connections in a graphical interface. Use nmstatectl to configure connections through the Nmstate API. Use RHEL system roles to automate the configuration of connections on one or multiple hosts. 1.1. Configuring the network and host name in the graphical installation mode Follow the steps in this procedure to configure your network and host name. Procedure From the Installation Summary window, click Network and Host Name . From the list in the left-hand pane, select an interface. The details are displayed in the right-hand pane. Toggle the ON/OFF switch to enable or disable the selected interface. You cannot add or remove interfaces manually. Click + to add a virtual network interface, which can be either: Team, Bond, Bridge, or VLAN. Click - to remove a virtual interface. Click Configure to change settings such as IP addresses, DNS servers, or routing configuration for an existing interface (both virtual and physical). Type a host name for your system in the Host Name field. The host name can either be a fully qualified domain name (FQDN) in the format hostname.domainname , or a short host name without the domain. Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this system, specify only the short host name. Host names can only contain alphanumeric characters and - or . . Host name should be equal to or less than 64 characters. Host names cannot start or end with - and . . To be compliant with DNS, each part of a FQDN should be equal to or less than 63 characters and the FQDN total length, including dots, should not exceed 255 characters. The value localhost means that no specific static host name for the target system is configured, and the actual host name of the installed system is configured during the processing of the network configuration, for example, by NetworkManager using DHCP or DNS. When using static IP and host name configuration, it depends on the planned system use case whether to use a short name or FQDN. Red Hat Identity Management configures FQDN during provisioning but some 3rd party software products may require a short name. In either case, to ensure availability of both forms in all situations, add an entry for the host in /etc/hosts in the format IP FQDN short-alias . Click Apply to apply the host name to the installer environment. Alternatively, in the Network and Hostname window, you can choose the Wireless option. Click Select network in the right-hand pane to select your wifi connection, enter the password if required, and click Done . 
Additional resources Automatically installing RHEL For more information about network device naming standards, see Configuring and managing networking . 1.2. Configuring an Ethernet connection by using nmcli If you connect a host to the network over Ethernet, you can manage the connection's settings on the command line by using the nmcli utility. Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. Procedure List the NetworkManager connection profiles: By default, NetworkManager creates a profile for each NIC in the host. If you plan to connect this NIC only to a specific network, adapt the automatically-created profile. If you plan to connect this NIC to networks with different settings, create individual profiles for each network. If you want to create an additional connection profile, enter: Skip this step to modify an existing profile. Optional: Rename the connection profile: On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. Display the current settings of the connection profile: Configure the IPv4 settings: To use DHCP, enter: Skip this step if ipv4.method is already set to auto (default). To set a static IPv4 address, network mask, default gateway, DNS servers, and search domain, enter: Configure the IPv6 settings: To use stateless address autoconfiguration (SLAAC), enter: Skip this step if ipv6.method is already set to auto (default). To set a static IPv6 address, network mask, default gateway, DNS servers, and search domain, enter: To customize other settings in the profile, use the following command: Enclose values with spaces or semicolons in quotes. Activate the profile: Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional resources nm-settings(5) man page on your system 1.3. Configuring an Ethernet connection by using nmtui If you connect a host to the network over Ethernet, you can manage the connection's settings in a text-based user interface by using the nmtui application. Use nmtui to create new profiles and to update existing ones on a host without a graphical interface. Note In nmtui : Navigate by using the cursor keys. Press a button by selecting it and hitting Enter . Select and clear checkboxes by using Space . To return to the screen, use ESC . Prerequisites A physical or virtual Ethernet Network Interface Controller (NIC) exists in the server's configuration. 
Procedure If you do not know the network device name you want to use in the connection, display the available devices: Start nmtui : Select Edit a connection , and press Enter . Choose whether to add a new connection profile or to modify an existing one: To create a new profile: Press Add . Select Ethernet from the list of network types, and press Enter . To modify an existing profile, select the profile from the list, and press Enter . Optional: Update the name of the connection profile. On hosts with multiple profiles, a meaningful name makes it easier to identify the purpose of a profile. If you create a new connection profile, enter the network device name into the Device field. Depending on your environment, configure the IP address settings in the IPv4 configuration and IPv6 configuration areas accordingly. For this, press the button to these areas, and select: Disabled , if this connection does not require an IP address. Automatic , if a DHCP server dynamically assigns an IP address to this NIC. Manual , if the network requires static IP address settings. In this case, you must fill further fields: Press Show to the protocol you want to configure to display additional fields. Press Add to Addresses , and enter the IP address and the subnet mask in Classless Inter-Domain Routing (CIDR) format. If you do not specify a subnet mask, NetworkManager sets a /32 subnet mask for IPv4 addresses and /64 for IPv6 addresses. Enter the address of the default gateway. Press Add to DNS servers , and enter the DNS server address. Press Add to Search domains , and enter the DNS search domain. Figure 1.1. Example of an Ethernet connection with static IP address settings Press OK to create and automatically activate the new connection. Press Back to return to the main menu. Select Quit , and press Enter to close the nmtui application. Verification Display the IP settings of the NIC: Display the IPv4 default gateway: Display the IPv6 default gateway: Display the DNS settings: If multiple connection profiles are active at the same time, the order of nameserver entries depend on the DNS priority values in these profiles and the connection types. Use the ping utility to verify that this host can send packets to other hosts: Troubleshooting Verify that the network cable is plugged-in to the host and a switch. Check whether the link failure exists only on this host or also on other hosts connected to the same switch. Verify that the network cable and the network interface are working as expected. Perform hardware diagnosis steps and replace defective cables and network interface cards. If the configuration on the disk does not match the configuration on the device, starting or restarting NetworkManager creates an in-memory connection that reflects the configuration of the device. For further details and how to avoid this problem, see the Red Hat Knowledgebase solution NetworkManager duplicates a connection after restart of NetworkManager service . Additional resources Configuring NetworkManager to avoid using a specific profile to provide a default gateway Configuring the order of DNS servers 1.4. Configuring an Ethernet connection with a dynamic IP address by using the network RHEL system role with an interface name To connect a Red Hat Enterprise Linux host to an Ethernet network, create a NetworkManager connection profile for the network device. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. 
You can use the network RHEL system role to configure an Ethernet connection that retrieves its IP addresses, gateways, and DNS settings from a DHCP server and IPv6 stateless address autoconfiguration (SLAAC). With this role you can assign the connection profile to the specified interface name. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. A physical or virtual Ethernet device exists in the servers' configuration. A DHCP server and SLAAC are available in the network. The managed nodes use the NetworkManager service to configure the network. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up The settings specified in the example playbook include the following: dhcp4: yes Enables automatic IPv4 address assignment from DHCP, PPP, or similar services. auto6: yes Enables IPv6 auto-configuration. By default, NetworkManager uses Router Advertisements. If the router announces the managed flag, NetworkManager requests an IPv6 address and prefix from a DHCPv6 server. For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Query the Ansible facts of the managed node and verify that the interface received IP addresses and DNS settings: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory 1.5. Additional resources Configuring and managing networking
|
[
"nmcli connection show NAME UUID TYPE DEVICE Wired connection 1 a5eb6490-cc20-3668-81f8-0314a27f3f75 ethernet enp1s0",
"nmcli connection add con-name <connection-name> ifname <device-name> type ethernet",
"nmcli connection modify \"Wired connection 1\" connection.id \"Internal-LAN\"",
"nmcli connection show Internal-LAN connection.interface-name: enp1s0 connection.autoconnect: yes ipv4.method: auto ipv6.method: auto",
"nmcli connection modify Internal-LAN ipv4.method auto",
"nmcli connection modify Internal-LAN ipv4.method manual ipv4.addresses 192.0.2.1/24 ipv4.gateway 192.0.2.254 ipv4.dns 192.0.2.200 ipv4.dns-search example.com",
"nmcli connection modify Internal-LAN ipv6.method auto",
"nmcli connection modify Internal-LAN ipv6.method manual ipv6.addresses 2001:db8:1::fffe/64 ipv6.gateway 2001:db8:1::fffe ipv6.dns 2001:db8:1::ffbb ipv6.dns-search example.com",
"nmcli connection modify <connection-name> <setting> <value>",
"nmcli connection up Internal-LAN",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"nmcli device status DEVICE TYPE STATE CONNECTION enp1s0 ethernet unavailable --",
"nmtui",
"ip address show enp1s0 2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000 link/ether 52:54:00:17:b8:b6 brd ff:ff:ff:ff:ff:ff inet 192.0.2.1/24 brd 192.0.2.255 scope global noprefixroute enp1s0 valid_lft forever preferred_lft forever inet6 2001:db8:1::fffe/64 scope global noprefixroute valid_lft forever preferred_lft forever",
"ip route show default default via 192.0.2.254 dev enp1s0 proto static metric 102",
"ip -6 route show default default via 2001:db8:1::ffee dev enp1s0 proto static metric 102 pref medium",
"cat /etc/resolv.conf search example.com nameserver 192.0.2.200 nameserver 2001:db8:1::ffbb",
"ping <host-name-or-IP-address>",
"--- - name: Configure the network hosts: managed-node-01.example.com tasks: - name: Ethernet connection profile with dynamic IP address settings ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 interface_name: enp1s0 type: ethernet autoconnect: yes ip: dhcp4: yes auto6: yes state: up",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"ansible managed-node-01.example.com -m ansible.builtin.setup \"ansible_default_ipv4\": { \"address\": \"192.0.2.1\", \"alias\": \"enp1s0\", \"broadcast\": \"192.0.2.255\", \"gateway\": \"192.0.2.254\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"netmask\": \"255.255.255.0\", \"network\": \"192.0.2.0\", \"prefix\": \"24\", \"type\": \"ether\" }, \"ansible_default_ipv6\": { \"address\": \"2001:db8:1::1\", \"gateway\": \"2001:db8:1::fffe\", \"interface\": \"enp1s0\", \"macaddress\": \"52:54:00:17:b8:b6\", \"mtu\": 1500, \"prefix\": \"64\", \"scope\": \"global\", \"type\": \"ether\" }, \"ansible_dns\": { \"nameservers\": [ \"192.0.2.1\", \"2001:db8:1::ffbb\" ], \"search\": [ \"example.com\" ] },"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/assembly_configuring-and-managing-network-access_configuring-basic-system-settings
|
Chapter 10. Error handling
|
Chapter 10. Error handling Errors in AMQ Python can be handled in two different ways: Catching exceptions Overriding event-handling functions to intercept AMQP protocol or connection errors 10.1. Catching exceptions All of the exceptions that AMQ Python throws inherit from the ProtonException class, which in turn inherits from the Python Exception class. The following example illustrates how to catch any exception thrown from AMQ Python: Example: API-specific exception handling try: # Something that might throw an exception except ProtonException as e: # Handle Proton-specific problems here except Exception as e: # Handle more general problems here If you do not require API-specific exception handling, you only need to catch Exception, since ProtonException inherits from it. 10.2. Handling connection and protocol errors You can handle protocol-level errors by overriding the following messaging_handler methods: on_transport_error(event) on_connection_error(event) on_session_error(event) on_link_error(event) These event-handling functions are called whenever there is an error condition with the specific object that is in the event. After the error handler is called, the appropriate close handler is also called. Note Because the close handlers are called in the event of any error, only the error itself needs to be handled within the error handler. Resource cleanup can be managed by close handlers. If there is no error handling that is specific to a particular object, it is typical to use the general on_error handler and not have a more specific handler. Note When reconnect is enabled and the remote server closes a connection with the amqp:connection:forced condition, the client does not treat it as an error and thus does not fire the on_connection_error handler. The client instead begins the reconnection process.
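The following sketch is not taken from the product documentation; it shows how two of these handler methods might be overridden in a messaging_handler subclass. It assumes the standard proton.handlers.MessagingHandler and proton.reactor.Container APIs, and the broker URL and class name are placeholders:
from proton.handlers import MessagingHandler
from proton.reactor import Container

class ErrorAwareHandler(MessagingHandler):
    def __init__(self, url):
        super(ErrorAwareHandler, self).__init__()
        self.url = url

    def on_start(self, event):
        # Open a connection; failures are reported through the handlers below
        event.container.connect(self.url)

    def on_transport_error(self, event):
        # Transport-level failure, for example a socket or TLS error
        print("Transport error: %s" % event.transport.condition)

    def on_connection_error(self, event):
        # The remote peer closed the connection with an error condition
        print("Connection error: %s" % event.connection.remote_condition)

Container(ErrorAwareHandler("amqp://example.com:5672")).run()
After either error handler runs, the corresponding close handler is still called, so resource cleanup can remain in the close handlers as described above.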
|
[
"try: # Something that might throw an exception except ProtonException as e: # Handle Proton-specific problems here except Exception as e: # Handle more general problems here }"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_python_client/error_handling
|
Chapter 1. User Management
|
Chapter 1. User Management 1.1. User Management As a cloud administrator, you can add, modify, and delete users in the dashboard. Users can be members of one or more projects. You can manage projects and users independently from each other. 1.1.1. Create a User Use this procedure to create users in the dashboard. You can assign a primary project and role to the user. Note that users created in the dashboard are Keystone users by default. To integrate Active Directory users, you can configure the LDAP provider included in the Red Hat OpenStack Platform Identity service. As an admin user in the dashboard, select Identity > Users . Click Create User . Enter a user name, email, and preliminary password for the user. Select a project from the Primary Project list. Select a role for the user from the Role list (the default role is _member_ ). Click Create User . 1.1.2. Edit a User Use this procedure to update the user's details, including the primary project. As an admin user in the dashboard, select Identity > Users . In the User's Actions column, click Edit . In the Update User window, you can update the User Name , Email , and Primary Project . Click Update User . 1.1.3. Enable or Disable a User Use this procedure to enable or disable a user. You can disable or enable only one user at a time. A disabled user cannot log in to the dashboard, and does not have access to any OpenStack services. Also, a disabled user's primary project cannot be set as active. A disabled user can be enabled again, unlike deleting a user where the action cannot be reversed. A disabled user must be re-enabled for any user-project action in the dashboard. As an admin user in the dashboard, select Identity > Users . In the Actions column, click the arrow, and select Enable User or Disable User . In the Enabled column, the value then updates to either True or False . 1.1.4. Delete a User As an admin user, use this procedure to delete a user using the dashboard. This action cannot be reversed, unlike disabling a user. Deleted users get delisted from a project's members' list for projects it belongs to. All roles associated with the user-project pair are also lost. As an admin user in the dashboard, select Identity > Users . Select the users you want to delete. Click Delete Users . The Confirm Delete Users window is displayed. Click Delete Users to confirm the action.
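The same operations can also be performed with the OpenStack command-line client. The following commands are only an illustrative sketch, not part of this chapter: the project name, user name, role, and password are placeholders, and the commands assume the python-openstackclient package and admin credentials are available.
openstack user create --project demo --password secret jdoe
openstack role add --project demo --user jdoe _member_
openstack user set --disable jdoe
openstack user delete jdoe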
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/users_and_identity_management_guide/users_roles
|
B.71. poppler
|
B.71. poppler B.71.1. RHSA-2010:0859 - Important: poppler security update Updated poppler packages that fix three security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Poppler is a Portable Document Format (PDF) rendering library, used by applications such as Evince. CVE-2010-3702 , CVE-2010-3703 Two uninitialized pointer use flaws were discovered in poppler. An attacker could create a malicious PDF file that, when opened, would cause applications that use poppler (such as Evince) to crash or, potentially, execute arbitrary code. CVE-2010-3704 An array index error was found in the way poppler parsed PostScript Type 1 fonts embedded in PDF documents. An attacker could create a malicious PDF file that, when opened, would cause applications that use poppler (such as Evince) to crash or, potentially, execute arbitrary code. Users are advised to upgrade to these updated packages, which contain backported patches to correct these issues.
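On an affected Red Hat Enterprise Linux 6 system that is registered to receive updates, the fix can typically be applied with yum; which poppler subpackages are pulled in depends on what is installed. For example:
yum update 'poppler*'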
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/poppler
|
Providing feedback on Red Hat build of OpenJDK documentation
|
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.7/proc-providing-feedback-on-redhat-documentation
|
D.2. Enforcing Resource Timeouts
|
D.2. Enforcing Resource Timeouts There is no timeout for starting, stopping, or failing over resources. Some resources take an indeterminately long amount of time to start or stop. Unfortunately, a failure to stop (including a timeout) renders the service inoperable (failed state). You can, if desired, turn on timeout enforcement on each resource in a service individually by adding __enforce_timeouts="1" to the resource reference in the cluster.conf file. The following example shows a cluster service that has been configured with the __enforce_timeouts attribute set for the netfs resource. With this attribute set, if it takes more than 30 seconds to unmount the NFS file system during a recovery process, the operation times out, causing the service to enter the failed state.
|
[
"</screen> <rm> <failoverdomains/> <resources> <netfs export=\"/nfstest\" force_unmount=\"1\" fstype=\"nfs\" host=\"10.65.48.65\" mountpoint=\"/data/nfstest\" name=\"nfstest_data\" options=\"rw,sync,soft\"/> </resources> <service autostart=\"1\" exclusive=\"0\" name=\"nfs_client_test\" recovery=\"relocate\"> <netfs ref=\"nfstest_data\" __enforce_timeouts=\"1\"/> </service> </rm>"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/resource-timeout-ca
|
function::task_utime
|
function::task_utime Name function::task_utime - User time of the current task Synopsis Arguments None Description Returns the user time of the current task in cputime. Does not include any time used by other tasks in this process, nor does it include any time of the children of this task.
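As an illustrative sketch only (not part of the reference entry), the function can be called from a trivial probe; the probe choice and output format are arbitrary, and the printed value is in raw cputime units as described above:
stap -e 'probe begin { printf("task_utime: %d\n", task_utime()) exit() }'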
|
[
"function task_utime:long()"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-task-utime
|
6.2. Resource Properties
|
6.2. Resource Properties The properties that you define for a resource tell the cluster which script to use for the resource, where to find that script, and what standards it conforms to. Table 6.1, "Resource Properties" describes these properties. Table 6.1. Resource Properties Field Description resource_id Your name for the resource standard The standard the script conforms to. Allowed values: ocf , service , upstart , systemd , lsb , stonith type The name of the Resource Agent you wish to use, for example IPaddr or Filesystem provider The OCF spec allows multiple vendors to supply the same resource agent. Most of the agents shipped by Red Hat use heartbeat as the provider. Table 6.2, "Commands to Display Resource Properties" summarizes the commands that display the available resource properties. Table 6.2. Commands to Display Resource Properties pcs Display Command Output pcs resource list Displays a list of all available resources. pcs resource standards Displays a list of available resource agent standards. pcs resource providers Displays a list of available resource agent providers. pcs resource list string Displays a list of available resources filtered by the specified string. You can use this command to display resources filtered by the name of a standard, a provider, or a type.
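As an illustrative example that ties these properties together (the resource name and IP address are placeholders), the following command creates a resource whose resource_id is VirtualIP, using the IPaddr2 agent with the ocf standard and the heartbeat provider:
pcs resource create VirtualIP ocf:heartbeat:IPaddr2 ip=192.0.2.120 cidr_netmask=24 op monitor interval=30s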
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-resourceprops-haar
|
8.3.3. Nikto
|
8.3.3. Nikto Nikto is an excellent common gateway interface (CGI) script scanner. Nikto not only checks for CGI vulnerabilities but does so in an evasive manner, so as to elude intrusion detection systems. It comes with thorough documentation which should be carefully reviewed prior to running the program. If you have Web servers serving up CGI scripts, Nikto can be an excellent resource for checking the security of these servers. Note Nikto is not included with Red Hat Enterprise Linux and is not supported. It has been included in this document as a reference to users who may be interested in using this popular application. More information about Nikto can be found at the following URL: http://www.cirt.net/code/nikto.shtml
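As a purely illustrative sketch (the target host is a placeholder, and the exact option syntax is an assumption that may vary between Nikto releases), a scan is typically started by pointing the script at a web server:
perl nikto.pl -h www.example.com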
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-vuln-tools-cgi
|
Chapter 2. Installing a cluster on IBM Power
|
Chapter 2. Installing a cluster on IBM Power In OpenShift Container Platform version 4.14, you can install a cluster on IBM Power(R) infrastructure that you provision. Important Additional considerations exist for non-bare metal platforms. Review the information in the guidelines for deploying OpenShift Container Platform on non-tested platforms before you install an OpenShift Container Platform cluster. 2.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . Before you begin the installation process, you must clean the installation directory. This ensures that the required installation files are created and updated during the installation process. You provisioned persistent storage using OpenShift Data Foundation or other supported storage protocols for your cluster. To deploy a private image registry, you must set up persistent storage with ReadWriteMany access. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 2.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 2.3. Requirements for a cluster with user-provisioned infrastructure For a cluster that contains user-provisioned infrastructure, you must deploy all of the required machines. This section describes the requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. 2.3.1. Required machines for cluster installation The smallest OpenShift Container Platform clusters require the following hosts: Table 2.1. Minimum required hosts Hosts Description One temporary bootstrap machine The cluster requires the bootstrap machine to deploy the OpenShift Container Platform cluster on the three control plane machines. You can remove the bootstrap machine after you install the cluster. Three control plane machines The control plane machines run the Kubernetes and OpenShift Container Platform services that form the control plane. At least two compute machines, which are also known as worker machines. The workloads requested by OpenShift Container Platform users run on the compute machines. Important To maintain high availability of your cluster, use separate physical hosts for these cluster machines. The bootstrap, control plane, and compute machines must use Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system. 
Note that RHCOS is based on Red Hat Enterprise Linux (RHEL) 9.2 and inherits all of its hardware certifications and requirements. See Red Hat Enterprise Linux technology capabilities and limits . 2.3.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.2. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 2.3.3. Minimum IBM Power requirements You can install OpenShift Container Platform version 4.14 on the following IBM(R) hardware: IBM Power(R)9 or IBM Power(R)10 processor-based systems Note Support for RHCOS functionality for all IBM Power(R)8 models, IBM Power(R) AC922, IBM Power(R) IC922, and IBM Power(R) LC922 is deprecated in OpenShift Container Platform 4.14. Red Hat recommends that you use later hardware models. Hardware requirements Six logical partitions (LPARs) across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or Power10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 100 GB / 16 GB for OpenShift Container Platform control plane machines 100 GB / 8 GB for OpenShift Container Platform compute machines 100 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.4. 
Recommended IBM Power system requirements Hardware requirements Six LPARs across multiple PowerVM servers Operating system requirements One instance of an IBM Power(R)9 or IBM Power(R)10 processor-based system On your IBM Power(R) instance, set up: Three LPARs for OpenShift Container Platform control plane machines Two LPARs for OpenShift Container Platform compute machines One LPAR for the temporary OpenShift Container Platform bootstrap machine Disk storage for the IBM Power guest virtual machines Local storage, or storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools) Network for the PowerVM guest virtual machines Dedicated physical adapter, or SR-IOV virtual function Available by the Virtual I/O Server using Shared Ethernet Adapter Virtualized by the Virtual I/O Server using IBM(R) vNIC Storage / main memory 120 GB / 32 GB for OpenShift Container Platform control plane machines 120 GB / 32 GB for OpenShift Container Platform compute machines 120 GB / 16 GB for the temporary OpenShift Container Platform bootstrap machine 2.3.5. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 2.3.6. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. 
Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. 2.3.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 2.3.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 2.3. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 2.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 2.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. Additional resources Configuring chrony time service 2.3.7. 
User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. 
Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 2.3.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 2.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 2.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 
7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 2.3.8. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 2.7. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. 
Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 2.8. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 2.3.8.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 2.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 
2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 2.4. Preparing the user-provisioned infrastructure Before you install OpenShift Container Platform on user-provisioned infrastructure, you must prepare the underlying infrastructure. This section provides details about the high-level steps required to set up your cluster infrastructure in preparation for an OpenShift Container Platform installation. This includes configuring IP networking and network connectivity for your cluster nodes, enabling the required ports through your firewall, and setting up the required DNS and load balancing infrastructure. After preparation, your cluster infrastructure must meet the requirements outlined in the Requirements for a cluster with user-provisioned infrastructure section. Prerequisites You have reviewed the OpenShift Container Platform 4.x Tested Integrations page. You have reviewed the infrastructure requirements detailed in the Requirements for a cluster with user-provisioned infrastructure section. Procedure If you are using DHCP to provide the IP networking configuration to your cluster nodes, configure your DHCP service. Add persistent IP addresses for the nodes to your DHCP server configuration. In your configuration, match the MAC address of the relevant network interface to the intended IP address for each node. When you use DHCP to configure IP addressing for the cluster machines, the machines also obtain the DNS server information through DHCP. Define the persistent DNS server address that is used by the cluster nodes through your DHCP server configuration. Note If you are not using a DHCP service, you must provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. Define the hostnames of your cluster nodes in your DHCP server configuration. See the Setting the cluster node hostnames through DHCP section for details about hostname considerations. Note If you are not using a DHCP service, the cluster nodes obtain their hostname through a reverse DNS lookup. Ensure that your network infrastructure provides the required network connectivity between the cluster components. See the Networking requirements for user-provisioned infrastructure section for details about the requirements. 
Configure your firewall to enable the ports required for the OpenShift Container Platform cluster components to communicate. See Networking requirements for user-provisioned infrastructure section for details about the ports that are required. Important By default, port 1936 is accessible for an OpenShift Container Platform cluster, because each control plane node needs access to this port. Avoid using the Ingress load balancer to expose this port, because doing so might result in the exposure of sensitive information, such as statistics and metrics, related to Ingress Controllers. Setup the required DNS infrastructure for your cluster. Configure DNS name resolution for the Kubernetes API, the application wildcard, the bootstrap machine, the control plane machines, and the compute machines. Configure reverse DNS resolution for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. See the User-provisioned DNS requirements section for more information about the OpenShift Container Platform DNS requirements. Validate your DNS configuration. From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses in the responses correspond to the correct components. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names in the responses correspond to the correct components. See the Validating DNS resolution for user-provisioned infrastructure section for detailed DNS validation steps. Provision the required API and application ingress load balancing infrastructure. See the Load balancing requirements for user-provisioned infrastructure section for more information about the requirements. Note Some load balancing solutions require the DNS name resolution for the cluster nodes to be in place before the load balancing is initialized. 2.5. Validating DNS resolution for user-provisioned infrastructure You can validate your DNS configuration before installing OpenShift Container Platform on user-provisioned infrastructure. Important The validation steps detailed in this section must succeed before you install your cluster. Prerequisites You have configured the required DNS records for your user-provisioned infrastructure. Procedure From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components. Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1 1 Replace <nameserver_ip> with the IP address of the nameserver, <cluster_name> with your cluster name, and <base_domain> with your base domain name. Example output api.ocp4.example.com. 604800 IN A 192.168.1.5 Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer: USD dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain> Example output api-int.ocp4.example.com. 604800 IN A 192.168.1.5 Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. 
All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer: USD dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain> Example output random.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Note In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console: USD dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain> Example output console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5 Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node: USD dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain> Example output bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96 Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node. From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components. Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.5 Example output 5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2 1 Provides the record name for the Kubernetes internal API. 2 Provides the record name for the Kubernetes API. Note A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer. Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node: USD dig +noall +answer @<nameserver_ip> -x 192.168.1.96 Example output 96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com. Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node. 2.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. 
The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 2.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation.
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 2.8. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 2.9. Manually creating the installation configuration file Prerequisites You have an SSH public key on your local machine to provide to the installation program. 
The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) 2.9.1. Sample install-config.yaml file for IBM Power You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- hyperthreading: Enabled 3
  name: worker
  replicas: 0 4
  architecture: ppc64le
controlPlane: 5
  hyperthreading: Enabled 6
  name: master
  replicas: 3 7
  architecture: ppc64le
metadata:
  name: test 8
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14 9
    hostPrefix: 23 10
  networkType: OVNKubernetes 11
  serviceNetwork: 12
  - 172.30.0.0/16
platform:
  none: {} 13
fips: false 14
pullSecret: '{"auths": ...}' 15
sshKey: 'ssh-ed25519 AAAA...' 16
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster.
Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for IBM Power(R) infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 15 The pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). 
Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 2.9.2. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 2.9.3. Configuring a three-node cluster Optionally, you can deploy zero compute machines in a bare metal cluster that consists of three control plane machines only. This provides smaller, more resource efficient clusters for cluster administrators and developers to use for testing, development, and production. In three-node OpenShift Container Platform environments, the three control plane machines are schedulable, which means that your application workloads are scheduled to run on them. Prerequisites You have an existing install-config.yaml file. Procedure Ensure that the number of compute replicas is set to 0 in your install-config.yaml file, as shown in the following compute stanza: compute: - name: worker platform: {} replicas: 0 Note You must set the value of the replicas parameter for the compute machines to 0 when you install OpenShift Container Platform on user-provisioned infrastructure, regardless of the number of compute machines you are deploying. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. This does not apply to user-provisioned installations, where the compute machines are deployed manually. For three-node cluster installations, follow these steps: If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. See the Load balancing requirements for user-provisioned infrastructure section for more information. When you create the Kubernetes manifest files in the following procedure, ensure that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml file is set to true . This enables your application workloads to run on the control plane nodes. Do not deploy any compute nodes when you create the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 2.10. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 2.10.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 2.9. 
Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 2.10. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. You can change this value by migrating from OpenShift SDN to OVN-Kubernetes. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 2.11. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. 
The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 2.12. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 2.13. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. 
For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 2.14. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 2.15. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 2.16. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. 
ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 2.17. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 2.18. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 2.19. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 2.11. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Note The installation program that generates the manifest and Ignition files is architecture specific and can be obtained from the client image mirror . The Linux version of the installation program (without an architecture postfix) runs on ppc64le only. 
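For example, on an installation host with internet access, the ppc64le Linux installation program can be pulled from the client mirror and unpacked; the following is a sketch only, and the exact mirror directory and archive name can vary by release:
USD curl -O https://mirror.openshift.com/pub/openshift-v4/ppc64le/clients/ocp/stable-4.14/openshift-install-linux.tar.gz
USD tar -xvf openshift-install-linux.tar.gz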
This installer program is also available as a Mac OS version. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Warning If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable. Important When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: 2.12. Installing RHCOS and starting the OpenShift Container Platform bootstrap process To install OpenShift Container Platform on IBM Power(R) infrastructure that you provision, you must install Red Hat Enterprise Linux CoreOS (RHCOS) on the machines. When you install RHCOS, you must provide the Ignition config file that was generated by the OpenShift Container Platform installation program for the type of machine you are installing. If you have configured suitable networking, DNS, and load balancing infrastructure, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS machines have rebooted. Follow either the steps to use an ISO image or network PXE booting to install RHCOS on the machines. 2.12.1. Installing RHCOS by using an ISO image You can use an ISO image to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Obtain the SHA512 digest for each of your Ignition config files. For example, you can use the following on a system running Linux to get the SHA512 digest for your bootstrap.ign Ignition config file: USD sha512sum <installation_directory>/bootstrap.ign The digests are provided to the coreos-installer in a later step to validate the authenticity of the Ignition config files on the cluster nodes. 
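Because a digest is needed for whichever node type you install, it can be convenient to compute all three in one pass; a minimal sketch, assuming the default file names in <installation_directory> :
USD sha512sum <installation_directory>/bootstrap.ign <installation_directory>/master.ign <installation_directory>/worker.ign
Record each digest alongside the matching Ignition config URL so that it is at hand when you run coreos-installer .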
Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. Although it is possible to obtain the RHCOS images that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS images are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep '\.iso[^.]' Example output "location": "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso", "location": "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso", "location": "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso", "location": "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso", Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Use only ISO images for this procedure. RHCOS qcow2 images are not supported for this installation type. ISO file names resemble the following example: rhcos-<version>-live.<architecture>.iso Use the ISO to start the RHCOS installation. Use one of the following installation options: Burn the ISO image to a disk and boot it directly. Use ISO redirection by using a lights-out management (LOM) interface. Boot the RHCOS ISO image without specifying any options or interrupting the live boot sequence. Wait for the installer to boot into a shell prompt in the RHCOS live environment. Note It is possible to interrupt the RHCOS installation boot process to add kernel arguments. However, for this ISO procedure you should use the coreos-installer command as outlined in the following steps, instead of adding kernel arguments. Run the coreos-installer command and specify the options that meet your installation requirements. At a minimum, you must specify the URL that points to the Ignition config file for the node type, and the device that you are installing to: USD sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2 1 1 You must run the coreos-installer command by using sudo , because the core user does not have the required root privileges to perform the installation. 
2 The --ignition-hash option is required when the Ignition config file is obtained through an HTTP URL to validate the authenticity of the Ignition config file on the cluster node. <digest> is the Ignition config file SHA512 digest obtained in a preceding step. Note If you want to provide your Ignition config files through an HTTPS server that uses TLS, you can add the internal certificate authority (CA) to the system trust store before running coreos-installer . The following example initializes a bootstrap node installation to the /dev/sda device. The Ignition config file for the bootstrap node is obtained from an HTTP web server with the IP address 192.168.1.2: USD sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, you must reboot the system. During the system reboot, it applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example command Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the other machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install OpenShift Container Platform. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install_config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.1.1. Advanced RHCOS installation reference This section illustrates the networking configuration and other advanced options that allow you to modify the Red Hat Enterprise Linux CoreOS (RHCOS) manual installation process. The following tables describe the kernel arguments and command-line options you can use with the RHCOS live installer and the coreos-installer command. 2.12.1.1.1. Networking and bonding options for ISO installations If you install RHCOS from an ISO image, you can add kernel arguments manually when you boot the image to configure networking for a node. If no networking arguments are specified, DHCP is activated in the initramfs when RHCOS detects that networking is required to fetch the Ignition config file. 
Important When adding networking arguments manually, you must also add the rd.neednet=1 kernel argument to bring the network up in the initramfs. The following information provides examples for configuring networking and bonding on your RHCOS nodes for ISO installations. The examples describe how to use the ip= , nameserver= , and bond= kernel arguments. Note Ordering is important when adding the kernel arguments: ip= , nameserver= , and then bond= . The networking options are passed to the dracut tool during system boot. For more information about the networking options supported by dracut , see the dracut.cmdline manual page . The following examples are the networking options for ISO installation. Configuring DHCP or static IP addresses To configure an IP address, either use DHCP ( ip=dhcp ) or set an individual static IP address ( ip=<host_ip> ). If setting a static IP, you must then identify the DNS server IP address ( nameserver=<dns_ip> ) on each node. The following example sets: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The hostname to core0.example.com The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41 Note When you use DHCP to configure IP addressing for the RHCOS machines, the machines also obtain the DNS server information through DHCP. For DHCP-based deployments, you can define the DNS server address that is used by the RHCOS nodes through your DHCP server configuration. Configuring an IP address without a static hostname You can configure an IP address without assigning a static hostname. If a static hostname is not set by the user, it will be picked up and automatically set by a reverse DNS lookup. To configure an IP address without a static hostname refer to the following example: The node's IP address to 10.10.10.2 The gateway address to 10.10.10.254 The netmask to 255.255.255.0 The DNS server address to 4.4.4.41 The auto-configuration value to none . No auto-configuration is required when IP networking is configured statically. ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41 Specifying multiple network interfaces You can specify multiple network interfaces by setting multiple ip= entries. ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring default gateway and route Optional: You can configure routes to additional networks by setting an rd.route= value. Note When you configure one or multiple networks, one default gateway is required. If the additional network gateway is different from the primary network gateway, the default gateway must be the primary network gateway. Run the following command to configure the default gateway: ip=::10.10.10.254:::: Enter the following command to configure the route for the additional network: rd.route=20.20.20.0/24:20.20.20.254:enp2s0 Disabling DHCP on a single interface You can disable DHCP on a single interface, such as when there are two or more network interfaces and only one interface is being used. 
In the example, the enp1s0 interface has a static networking configuration and DHCP is disabled for enp2s0 , which is not used: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none Combining DHCP and static IP configurations You can combine DHCP and static IP configurations on systems with multiple network interfaces, for example: ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none Configuring VLANs on individual interfaces Optional: You can configure VLANs on individual interfaces by using the vlan= parameter. To configure a VLAN on a network interface and use a static IP address, run the following command: ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0 To configure a VLAN on a network interface and to use DHCP, run the following command: ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0 Providing multiple DNS servers You can provide multiple DNS servers by adding a nameserver= entry for each server, for example: nameserver=1.1.1.1 nameserver=8.8.8.8 Bonding multiple network interfaces to a single interface Optional: You can bond multiple network interfaces to a single interface by using the bond= option. Refer to the following examples: The syntax for configuring a bonded interface is: bond=<name>[:<network_interfaces>][:options] <name> is the bonding device name ( bond0 ), <network_interfaces> represents a comma-separated list of physical (ethernet) interfaces ( em1,em2 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none Bonding multiple SR-IOV network interfaces to a dual port NIC interface Important Support for Day 1 operations associated with enabling NIC partitioning for SR-IOV devices is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Optional: You can bond multiple SR-IOV network interfaces to a dual port NIC interface by using the bond= option. On each node, you must perform the following tasks: Create the SR-IOV virtual functions (VFs) following the guidance in Managing SR-IOV devices . Follow the procedure in the "Attaching SR-IOV networking devices to virtual machines" section. Create the bond, attach the desired VFs to the bond and set the bond link state up following the guidance in Configuring network bonding . Follow any of the described procedures to create the bond. 
The following examples illustrate the syntax you must use: The syntax for configuring a bonded interface is bond=<name>[:<network_interfaces>][:options] . <name> is the bonding device name ( bond0 ), <network_interfaces> represents the virtual functions (VFs) by their known name in the kernel and shown in the output of the ip link command( eno1f0 , eno2f0 ), and options is a comma-separated list of bonding options. Enter modinfo bonding to see available options. When you create a bonded interface using bond= , you must specify how the IP address is assigned and other information for the bonded interface. To configure the bonded interface to use DHCP, set the bond's IP address to dhcp . For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp To configure the bonded interface to use a static IP address, enter the specific IP address you want and related information. For example: bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none 2.12.2. Installing RHCOS by using PXE booting You can use PXE booting to install RHCOS on the machines. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have configured suitable PXE infrastructure. You have an HTTP server that can be accessed from your computer, and from the machines that you create. You have reviewed the Advanced RHCOS installation configuration section for different ways to configure features, such as networking and disk partitioning. Procedure Upload the bootstrap, control plane, and compute node Ignition config files that the installation program created to your HTTP server. Note the URLs of these files. Important You can add or change configuration settings in your Ignition configs before saving them to your HTTP server. If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. From the installation host, validate that the Ignition config files are available on the URLs. The following example gets the Ignition config file for the bootstrap node: USD curl -k http://<HTTP_server>/bootstrap.ign 1 Example output % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{"ignition":{"version":"3.2.0"},"passwd":{"users":[{"name":"core","sshAuthorizedKeys":["ssh-rsa... Replace bootstrap.ign with master.ign or worker.ign in the command to validate that the Ignition config files for the control plane and compute nodes are also available. 
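If you prefer a quick availability check over printing the full response body, you can request only the HTTP status code for each file; a sketch, assuming curl is available on the installation host and <HTTP_server> is a placeholder for your web server address:
USD curl -k -s -o /dev/null -w '%{http_code} bootstrap.ign\n' http://<HTTP_server>/bootstrap.ign
USD curl -k -s -o /dev/null -w '%{http_code} master.ign\n' http://<HTTP_server>/master.ign
USD curl -k -s -o /dev/null -w '%{http_code} worker.ign\n' http://<HTTP_server>/worker.ign
Each command should print 200 when the corresponding Ignition config file is reachable.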
Although it is possible to obtain the RHCOS kernel , initramfs and rootfs files that are required for your preferred method of installing operating system instances from the RHCOS image mirror page, the recommended way to obtain the correct version of your RHCOS files are from the output of openshift-install command: USD openshift-install coreos print-stream-json | grep -Eo '"https.*(kernel-|initramfs.|rootfs.)\w+(\.img)?"' Example output "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img" "<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img" Important The RHCOS artifacts might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Only use the appropriate kernel , initramfs , and rootfs artifacts described below for this procedure. RHCOS QCOW2 images are not supported for this installation type. The file names contain the OpenShift Container Platform version number. They resemble the following examples: kernel : rhcos-<version>-live-kernel-<architecture> initramfs : rhcos-<version>-live-initramfs.<architecture>.img rootfs : rhcos-<version>-live-rootfs.<architecture>.img Upload the rootfs , kernel , and initramfs files to your HTTP server. Important If you plan to add more compute machines to your cluster after you finish installation, do not delete these files. Configure the network boot infrastructure so that the machines boot from their local disks after RHCOS is installed on them. Configure PXE installation for the RHCOS images and begin the installation. Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible: 1 1 Specify the location of the live kernel file that you uploaded to your HTTP server. The URL must be HTTP, TFTP, or FTP; HTTPS and NFS are not supported. 2 If you use multiple NICs, specify a single interface in the ip option. For example, to use DHCP on a NIC that is named eno1 , set ip=eno1:dhcp . 3 Specify the locations of the RHCOS files that you uploaded to your HTTP server. The initrd parameter value is the location of the initramfs file, the coreos.live.rootfs_url parameter value is the location of the rootfs file, and the coreos.inst.ignition_url parameter value is the location of the bootstrap Ignition config file. 
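A PXE menu entry that matches these callouts might look like the following sketch; the file names, URLs, architecture, and install device ( /dev/sda ) are placeholders that you must adapt to your environment:
DEFAULT pxeboot
TIMEOUT 20
PROMPT 0
LABEL pxeboot
    KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture>
    APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign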
You can also add more kernel arguments to the APPEND line to configure networking or other boot options. Note This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more console= arguments to the APPEND line. For example, add console=tty0 console=ttyS0 to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see How does one set up a serial terminal and/or console in Red Hat Enterprise Linux? and "Enabling the serial console for PXE and ISO installation" in the "Advanced RHCOS installation configuration" section. Monitor the progress of the RHCOS installation on the console of the machine. Important Be sure that the installation is successful on each node before commencing with the OpenShift Container Platform installation. Observing the installation process can also help to determine the cause of RHCOS installation issues that might arise. After RHCOS installs, the system reboots. During reboot, the system applies the Ignition config file that you specified. Check the console output to verify that Ignition ran. Example output Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied Continue to create the machines for your cluster. Important You must create the bootstrap and control plane machines at this time. If the control plane machines are not made schedulable, also create at least two compute machines before you install the cluster. If the required network, DNS, and load balancer infrastructure are in place, the OpenShift Container Platform bootstrap process begins automatically after the RHCOS nodes have rebooted. Note RHCOS nodes do not include a default password for the core user. You can access the nodes by running ssh core@<node>.<cluster_name>.<base_domain> as a user with access to the SSH private key that is paired to the public key that you specified in your install-config.yaml file. OpenShift Container Platform 4 cluster nodes running RHCOS are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery. 2.12.3. Enabling multipathing with kernel arguments on RHCOS In OpenShift Container Platform version 4.14, during installation, you can enable multipathing for provisioned nodes. RHCOS supports multipathing on the primary disk. Multipathing provides added benefits of stronger resilience to hardware failure to achieve higher host availability. During the initial cluster creation, you might want to add kernel arguments to all master or worker nodes. To add kernel arguments to master or worker nodes, you can create a MachineConfig object and inject that object into the set of manifest files used by Ignition during cluster setup. Procedure Change to the directory that contains the installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> Decide if you want to add kernel arguments to worker or control plane nodes. Create a machine config file.
For example, create a 99-master-kargs-mpath.yaml that instructs the cluster to add the master label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "master" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' To enable multipathing on worker nodes: Create a machine config file. For example, create a 99-worker-kargs-mpath.yaml that instructs the cluster to add the worker label and identify the multipath kernel argument: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: "worker" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root' You can now continue on to create the cluster. Important Additional postinstallation steps are required to fully enable multipathing. For more information, see "Enabling multipathing with kernel arguments on RHCOS" in Postinstallation machine configuration tasks . In case of MPIO failure, use the bootlist command to update the boot device list with alternate logical device names. The command displays a boot list and it designates the possible boot devices for when the system is booted in normal mode. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command: USD bootlist -m normal -o sda To update the boot list for normal mode and add alternate device names, enter the following command: USD bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde If the original boot disk path is down, the node reboots from the alternate device registered in the normal boot device list. 2.13. Waiting for the bootstrap process to complete The OpenShift Container Platform bootstrap process begins after the cluster nodes first boot into the persistent RHCOS environment that has been installed to disk. The configuration information provided through the Ignition config files is used to initialize the bootstrap process and install OpenShift Container Platform on the machines. You must wait for the bootstrap process to complete. Prerequisites You have created the Ignition config files for your cluster. You have configured suitable network, DNS and load balancing infrastructure. You have obtained the installation program and generated the Ignition config files for your cluster. You installed RHCOS on your cluster machines and provided the Ignition config files that the OpenShift Container Platform installation program generated. Your machines have direct internet access or have an HTTP or HTTPS proxy available. Procedure Monitor the bootstrap process: USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete \ 1 --log-level=info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . Example output INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443... INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources The command succeeds when the Kubernetes API server signals that it has been bootstrapped on the control plane machines. 
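For example, to follow the bootstrap phase with more verbose logging than the default info level, you could run the same command with the debug option described above:
USD ./openshift-install --dir <installation_directory> wait-for bootstrap-complete --log-level=debug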
After the bootstrap process is complete, remove the bootstrap machine from the load balancer. Important You must remove the bootstrap machine from the load balancer at this point. You can also remove or reformat the bootstrap machine itself. 2.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 2.15. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). 
If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 2.16. Initial Operator configuration After the control plane initializes, you must immediately configure some Operators so that they all become available. Prerequisites Your control plane has initialized. 
Procedure Watch the cluster components come online: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Configure the Operators that are not available. 2.16.1. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 2.16.1.1. Configuring registry storage for IBM Power As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have a cluster on IBM Power(R). You have provisioned persistent storage for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have 100Gi capacity. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. 
Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: Leave the claim field blank to allow the automatic creation of an image-registry-storage PVC. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m Ensure that your registry is set to managed to enable building and pushing of images. Run: USD oc edit configs.imageregistry/cluster Then, change the line managementState: Removed to managementState: Managed 2.16.1.2. Configuring storage for the image registry in non-production clusters You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry. Procedure To set the image registry storage to an empty directory: USD oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}' Warning Configure this option for only non-production clusters. If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error: Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found Wait a few minutes and run the command again. 2.17. Completing installation on user-provisioned infrastructure After you complete the Operator configuration, you can finish installing the cluster on infrastructure that you provide. Prerequisites Your control plane has initialized. You have completed the initial Operator configuration. Procedure Confirm that all the cluster components are online with the following command: USD watch -n5 oc get clusteroperators Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m Alternatively, the following command notifies you when all of the clusters are available. It also retrieves and displays credentials: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in.
Example output INFO Waiting up to 30m0s for the cluster to initialize... The command succeeds when the Cluster Version Operator finishes deploying the OpenShift Container Platform cluster from the Kubernetes API server. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Confirm that the Kubernetes API server is communicating with the pods. To view a list of all pods, use the following command: USD oc get pods --all-namespaces Example output NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m ... View the logs for a pod that is listed in the output of the previous command by using the following command: USD oc logs <pod_name> -n <namespace> 1 1 Specify the pod name and namespace, as shown in the output of the previous command. If the pod logs display, the Kubernetes API server can communicate with the cluster machines. Additional steps are required to enable multipathing. Do not enable multipathing during installation. See "Enabling multipathing with kernel arguments on RHCOS" in the Postinstallation machine configuration tasks documentation for more information. 2.18. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 2.19. Next steps Enabling multipathing with kernel arguments on RHCOS . Customize your cluster . If necessary, you can opt out of remote health reporting .
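As an additional, optional check after the installation completes, you can inspect the Cluster Version Operator directly; this command is an illustrative convenience rather than a documented step of this procedure:
USD oc get clusterversion
When the deployment has finished, the version entry reports Available as True and is no longer Progressing.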
|
[
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain> 1",
"api.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>",
"api-int.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>",
"random.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>",
"console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5",
"dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>",
"bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.5",
"5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com. 2",
"dig +noall +answer @<nameserver_ip> -x 192.168.1.96",
"96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 architecture: ppc64le controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 architecture: ppc64le metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"compute: - name: worker platform: {} replicas: 0",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory> 1",
"./openshift-install create ignition-configs --dir <installation_directory> 1",
". ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"sha512sum <installation_directory>/bootstrap.ign",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep '\\.iso[^.]'",
"\"location\": \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live.aarch64.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live.ppc64le.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live.s390x.iso\", \"location\": \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live.x86_64.iso\",",
"sudo coreos-installer install --ignition-url=http://<HTTP_server>/<node_type>.ign <device> --ignition-hash=sha512-<digest> 1 2",
"sudo coreos-installer install --ignition-url=http://192.168.1.2:80/installation_directory/bootstrap.ign /dev/sda --ignition-hash=sha512-a5a2d43879223273c9b60af66b44202a1d1248fc01cf156c46d4a79f552b6bad47bc8cc78ddf0116e80c59d2ea9e32ba53bc807afbca581aa059311def2c3e3b",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0::enp1s0:none nameserver=4.4.4.41",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=10.10.10.3::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=::10.10.10.254::::",
"rd.route=20.20.20.0/24:20.20.20.254:enp2s0",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp1s0:none ip=::::core0.example.com:enp2s0:none",
"ip=enp1s0:dhcp ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0:none",
"ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:enp2s0.100:none vlan=enp2s0.100:enp2s0",
"ip=enp2s0.100:dhcp vlan=enp2s0.100:enp2s0",
"nameserver=1.1.1.1 nameserver=8.8.8.8",
"bond=bond0:em1,em2:mode=active-backup ip=bond0:dhcp",
"bond=bond0:em1,em2:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=bond0:dhcp",
"bond=bond0:eno1f0,eno2f0:mode=active-backup ip=10.10.10.2::10.10.10.254:255.255.255.0:core0.example.com:bond0:none",
"curl -k http://<HTTP_server>/bootstrap.ign 1",
"% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{\"ignition\":{\"version\":\"3.2.0\"},\"passwd\":{\"users\":[{\"name\":\"core\",\"sshAuthorizedKeys\":[\"ssh-rsa",
"openshift-install coreos print-stream-json | grep -Eo '\"https.*(kernel-|initramfs.|rootfs.)\\w+(\\.img)?\"'",
"\"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-kernel-aarch64\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-initramfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-aarch64/<release>/aarch64/rhcos-<release>-live-rootfs.aarch64.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/49.84.202110081256-0/ppc64le/rhcos-<release>-live-kernel-ppc64le\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-initramfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-ppc64le/<release>/ppc64le/rhcos-<release>-live-rootfs.ppc64le.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-kernel-s390x\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-initramfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14-s390x/<release>/s390x/rhcos-<release>-live-rootfs.s390x.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-kernel-x86_64\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-initramfs.x86_64.img\" \"<url>/art/storage/releases/rhcos-4.14/<release>/x86_64/rhcos-<release>-live-rootfs.x86_64.img\"",
"DEFAULT pxeboot TIMEOUT 20 PROMPT 0 LABEL pxeboot KERNEL http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> 1 APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.live.rootfs_url=http://<HTTP_server>/rhcos-<version>-live-rootfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign 2 3",
"Ignition: ran on 2022/03/14 14:48:33 UTC (this boot) Ignition: user-provided config was applied",
"./openshift-install create manifests --dir <installation_directory>",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"master\" name: 99-master-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: \"worker\" name: 99-worker-kargs-mpath spec: kernelArguments: - 'rd.multipath=default' - 'root=/dev/disk/by-label/dm-mpath-root'",
"bootlist -m normal -o sda",
"bootlist -m normal -o /dev/sdc /dev/sdd /dev/sde sdc sdd sde",
"./openshift-install --dir <installation_directory> wait-for bootstrap-complete \\ 1 --log-level=info 2",
"INFO Waiting up to 30m0s for the Kubernetes API at https://api.test.example.com:6443 INFO API v1.27.3 up INFO Waiting up to 30m0s for bootstrapping to complete INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.27.3 master-1 Ready master 63m v1.27.3 master-2 Ready master 64m v1.27.3",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve",
"oc get csr",
"NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending",
"oc adm certificate approve <csr_name> 1",
"oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve",
"oc get nodes",
"NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.27.3 master-1 Ready master 73m v1.27.3 master-2 Ready master 74m v1.27.3 worker-0 Ready worker 11m v1.27.3 worker-1 Ready worker 11m v1.27.3",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"oc get pod -n openshift-image-registry -l docker-registry=default",
"No resources found in openshift-image-registry namespace",
"oc edit configs.imageregistry.operator.openshift.io",
"storage: pvc: claim:",
"oc get clusteroperator image-registry",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.14 True False False 6h50m",
"oc edit configs.imageregistry/cluster",
"managementState: Removed",
"managementState: Managed",
"oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'",
"Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found",
"watch -n5 oc get clusteroperators",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.14.0 True False False 19m baremetal 4.14.0 True False False 37m cloud-credential 4.14.0 True False False 40m cluster-autoscaler 4.14.0 True False False 37m config-operator 4.14.0 True False False 38m console 4.14.0 True False False 26m csi-snapshot-controller 4.14.0 True False False 37m dns 4.14.0 True False False 37m etcd 4.14.0 True False False 36m image-registry 4.14.0 True False False 31m ingress 4.14.0 True False False 30m insights 4.14.0 True False False 31m kube-apiserver 4.14.0 True False False 26m kube-controller-manager 4.14.0 True False False 36m kube-scheduler 4.14.0 True False False 36m kube-storage-version-migrator 4.14.0 True False False 37m machine-api 4.14.0 True False False 29m machine-approver 4.14.0 True False False 37m machine-config 4.14.0 True False False 36m marketplace 4.14.0 True False False 37m monitoring 4.14.0 True False False 29m network 4.14.0 True False False 38m node-tuning 4.14.0 True False False 37m openshift-apiserver 4.14.0 True False False 32m openshift-controller-manager 4.14.0 True False False 30m openshift-samples 4.14.0 True False False 32m operator-lifecycle-manager 4.14.0 True False False 37m operator-lifecycle-manager-catalog 4.14.0 True False False 37m operator-lifecycle-manager-packageserver 4.14.0 True False False 32m service-ca 4.14.0 True False False 38m storage 4.14.0 True False False 37m",
"./openshift-install --dir <installation_directory> wait-for install-complete 1",
"INFO Waiting up to 30m0s for the cluster to initialize",
"oc get pods --all-namespaces",
"NAMESPACE NAME READY STATUS RESTARTS AGE openshift-apiserver-operator openshift-apiserver-operator-85cb746d55-zqhs8 1/1 Running 1 9m openshift-apiserver apiserver-67b9g 1/1 Running 0 3m openshift-apiserver apiserver-ljcmx 1/1 Running 0 1m openshift-apiserver apiserver-z25h4 1/1 Running 0 2m openshift-authentication-operator authentication-operator-69d5d8bf84-vh2n8 1/1 Running 0 5m",
"oc logs <pod_name> -n <namespace> 1"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_ibm_power/installing-ibm-power
|
Chapter 9. FeatureGate [config.openshift.io/v1]
|
Chapter 9. FeatureGate [config.openshift.io/v1] Description Feature holds cluster-wide information about feature gates. The canonical name is cluster Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required spec 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object spec holds user settable values for configuration status object status holds observed values from the cluster. They may not be overridden. 9.1.1. .spec Description spec holds user settable values for configuration Type object Property Type Description customNoUpgrade `` customNoUpgrade allows the enabling or disabling of any feature. Turning this feature set on IS NOT SUPPORTED, CANNOT BE UNDONE, and PREVENTS UPGRADES. Because of its nature, this setting cannot be validated. If you have any typos or accidentally apply invalid combinations your cluster may fail in an unrecoverable way. featureSet must equal "CustomNoUpgrade" must be set to use this field. featureSet string featureSet changes the list of features in the cluster. The default is empty. Be very careful adjusting this setting. Turning on or off features may cause irreversible changes in your cluster which cannot be undone. 9.1.2. .status Description status holds observed values from the cluster. They may not be overridden. Type object Property Type Description conditions array conditions represent the observations of the current state. Known .status.conditions.type are: "DeterminationDegraded" conditions[] object Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } featureGates array featureGates contains a list of enabled and disabled featureGates that are keyed by payloadVersion. Operators other than the CVO and cluster-config-operator, must read the .status.featureGates, locate the version they are managing, find the enabled/disabled featuregates and make the operand and operator match. The enabled/disabled values for a particular version may change during the life of the cluster as various .spec.featureSet values are selected. 
Operators may choose to restart their processes to pick up these changes, but remembering past enable/disable lists is beyond the scope of this API and is the responsibility of individual operators. Only featureGates with .version in the ClusterVersion.status will be present in this list. featureGates[] object 9.1.3. .status.conditions Description conditions represent the observations of the current state. Known .status.conditions.type are: "DeterminationDegraded" Type array 9.1.4. .status.conditions[] Description Condition contains details for one aspect of the current state of this API Resource. --- This struct is intended for direct use as an array at the field path .status.conditions. For example, type FooStatus struct{ // Represents the observations of a foo's current state. // Known .status.conditions.type are: "Available", "Progressing", and "Degraded" // +patchMergeKey=type // +patchStrategy=merge // +listType=map // +listMapKey=type Conditions []metav1.Condition json:"conditions,omitempty" patchStrategy:"merge" patchMergeKey:"type" protobuf:"bytes,1,rep,name=conditions" // other fields } Type object Required lastTransitionTime message reason status type Property Type Description lastTransitionTime string lastTransitionTime is the last time the condition transitioned from one status to another. This should be when the underlying condition changed. If that is not known, then using the time when the API field changed is acceptable. message string message is a human readable message indicating details about the transition. This may be an empty string. observedGeneration integer observedGeneration represents the .metadata.generation that the condition was set based upon. For instance, if .metadata.generation is currently 12, but the .status.conditions[x].observedGeneration is 9, the condition is out of date with respect to the current state of the instance. reason string reason contains a programmatic identifier indicating the reason for the condition's last transition. Producers of specific condition types may define expected values and meanings for this field, and whether the values are considered a guaranteed API. The value should be a CamelCase string. This field may not be empty. status string status of the condition, one of True, False, Unknown. type string type of condition in CamelCase or in foo.example.com/CamelCase. --- Many .condition.type values are consistent across resources like Available, but because arbitrary conditions can be useful (see .node.status.conditions), the ability to deconflict is important. The regex it matches is (dns1123SubdomainFmt/)?(qualifiedNameFmt) 9.1.5. .status.featureGates Description featureGates contains a list of enabled and disabled featureGates that are keyed by payloadVersion. Operators other than the CVO and cluster-config-operator, must read the .status.featureGates, locate the version they are managing, find the enabled/disabled featuregates and make the operand and operator match. The enabled/disabled values for a particular version may change during the life of the cluster as various .spec.featureSet values are selected. Operators may choose to restart their processes to pick up these changes, but remembering past enable/disable lists is beyond the scope of this API and is the responsibility of individual operators. Only featureGates with .version in the ClusterVersion.status will be present in this list. Type array 9.1.6. 
.status.featureGates[] Description Type object Required version Property Type Description disabled array disabled is a list of all feature gates that are disabled in the cluster for the named version. disabled[] object enabled array enabled is a list of all feature gates that are enabled in the cluster for the named version. enabled[] object version string version matches the version provided by the ClusterVersion and in the ClusterOperator.Status.Versions field. 9.1.7. .status.featureGates[].disabled Description disabled is a list of all feature gates that are disabled in the cluster for the named version. Type array 9.1.8. .status.featureGates[].disabled[] Description Type object Required name Property Type Description name string name is the name of the FeatureGate. 9.1.9. .status.featureGates[].enabled Description enabled is a list of all feature gates that are enabled in the cluster for the named version. Type array 9.1.10. .status.featureGates[].enabled[] Description Type object Required name Property Type Description name string name is the name of the FeatureGate. 9.2. API endpoints The following API endpoints are available: /apis/config.openshift.io/v1/featuregates DELETE : delete collection of FeatureGate GET : list objects of kind FeatureGate POST : create a FeatureGate /apis/config.openshift.io/v1/featuregates/{name} DELETE : delete a FeatureGate GET : read the specified FeatureGate PATCH : partially update the specified FeatureGate PUT : replace the specified FeatureGate /apis/config.openshift.io/v1/featuregates/{name}/status GET : read status of the specified FeatureGate PATCH : partially update status of the specified FeatureGate PUT : replace status of the specified FeatureGate 9.2.1. /apis/config.openshift.io/v1/featuregates HTTP method DELETE Description delete collection of FeatureGate Table 9.1. HTTP responses HTTP code Response body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind FeatureGate Table 9.2. HTTP responses HTTP code Response body 200 - OK FeatureGateList schema 401 - Unauthorized Empty HTTP method POST Description create a FeatureGate Table 9.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.4. Body parameters Parameter Type Description body FeatureGate schema Table 9.5.
HTTP responses HTTP code Response body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 202 - Accepted FeatureGate schema 401 - Unauthorized Empty 9.2.2. /apis/config.openshift.io/v1/featuregates/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the FeatureGate HTTP method DELETE Description delete a FeatureGate Table 9.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 9.8. HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified FeatureGate Table 9.9. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified FeatureGate Table 9.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.11. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified FeatureGate Table 9.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.13. Body parameters Parameter Type Description body FeatureGate schema Table 9.14. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 401 - Unauthorized Empty 9.2.3. /apis/config.openshift.io/v1/featuregates/{name}/status Table 9.15. Global path parameters Parameter Type Description name string name of the FeatureGate HTTP method GET Description read status of the specified FeatureGate Table 9.16. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified FeatureGate Table 9.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.18. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified FeatureGate Table 9.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields.
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 9.20. Body parameters Parameter Type Description body FeatureGate schema Table 9.21. HTTP responses HTTP code Response body 200 - OK FeatureGate schema 201 - Created FeatureGate schema 401 - Unauthorized Empty
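In practice, these endpoints are usually exercised through the oc client rather than through raw HTTP requests. The commands below are a minimal sketch, assuming the default cluster-scoped FeatureGate object named cluster and a logged-in user with permission to read and modify it:
oc get featuregate cluster -o yaml
oc patch featuregate cluster --type merge -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'
The first command corresponds to the GET operation on /apis/config.openshift.io/v1/featuregates/{name}; the second issues a PATCH against the same resource. Note that enabling a feature set such as TechPreviewNoUpgrade cannot be undone on a cluster, so treat the patch command as illustrative rather than as a recommendation.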
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/config_apis/featuregate-config-openshift-io-v1
|
Chapter 1. Installation Overview
|
Chapter 1. Installation Overview The self-hosted engine installation uses Ansible and the RHV-M Appliance (a pre-configured Manager virtual machine image) to automate the following tasks: Configuring the first self-hosted engine node Installing a Red Hat Enterprise Linux virtual machine on that node Installing and configuring the Red Hat Virtualization Manager on that virtual machine Configuring the self-hosted engine storage domain Note The RHV-M Appliance is only used during installation. It is not used to upgrade the Manager. Installing a self-hosted engine environment involves the following steps: Prepare storage to use for the self-hosted engine storage domain and for standard storage domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Install a deployment host to run the installation on. This host will become the first self-hosted engine node. You can use either host type: Red Hat Virtualization Host Red Hat Enterprise Linux Install and configure the Red Hat Virtualization Manager: Install the self-hosted engine using the hosted-engine --deploy command on the deployment host. Register the Manager with the Content Delivery Network and enable the Red Hat Virtualization Manager repositories. Connect to the Administration Portal to add hosts and storage domains. Add more self-hosted engine nodes and standard hosts to the Manager. Self-hosted engine nodes can run the Manager virtual machine and other virtual machines. Standard hosts can run all other virtual machines, but not the Manager virtual machine. Use either host type, or both: Red Hat Virtualization Host Red Hat Enterprise Linux Add hosts to the Manager as self-hosted engine nodes. Add hosts to the Manager as standard hosts. Add more storage domains to the Manager. The self-hosted engine storage domain is not recommended for use by anything other than the Manager virtual machine. If you want to host any databases or services on a server separate from the Manager, you can migrate them after the installation is complete. Important Keep the environment up to date. See https://access.redhat.com/articles/2974891 for more information. Since bug fixes for known issues are frequently released, Red Hat recommends using scheduled tasks to update the hosts and the Manager.
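As a point of orientation, the command-line portion of this flow on the deployment host is short. The following is a minimal sketch, assuming a Red Hat Enterprise Linux deployment host that is already registered and has the required repositories enabled; the package names shown are the usual ones, but verify them against the installation chapters that follow:
yum install ovirt-hosted-engine-setup rhvm-appliance
hosted-engine --deploy
The hosted-engine --deploy script then prompts interactively for the storage, network, and Manager virtual machine details covered later in this guide.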
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/install_overview_she_cli_deploy
|
Chapter 71. Kubernetes Persistent Volume
|
Chapter 71. Kubernetes Persistent Volume Since Camel 2.17 Only producer is supported The Kubernetes Persistent Volume component is one of the Kubernetes Components which provides a producer to execute Kubernetes Persistent Volume operations. 71.1. Dependencies When using kubernetes-persistent-volumes with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to enable auto-configuration support: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 71.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 71.2.1. Configuring Component Options The component level is the highest level, which holds general and common configurations that are inherited by the endpoints. For example, a component may have security settings, credentials for authentication, URLs for network connections, and so forth. Some components only have a few options, and others may have many. Because components typically have preconfigured defaults that are commonly used, you often need to configure only a few options on a component, or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 71.2.2. Configuring Endpoint Options Most of your configuration is done on endpoints, because endpoints often have many options that let you configure exactly what you need the endpoint to do. The options are also categorized into whether the endpoint is used as a consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type-safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allow you to avoid hardcoding URLs, port numbers, sensitive information, and other settings. In other words, placeholders allow you to externalize the configuration from your code, and give you more flexibility and reuse. The following two sections list all the options, first for the component and then for the endpoint. 71.3. Component Options The Kubernetes Persistent Volume component supports 3 options, which are listed below. Name Description Default Type kubernetesClient (producer) Autowired To use an existing kubernetes client. KubernetesClient lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 71.4. 
Endpoint Options The Kubernetes Persistent Volume endpoint is configured using URI syntax: kubernetes-persistent-volumes:masterUrl with the following path and query parameters: 71.4.1. Path Parameters (1 parameter) Name Description Default Type masterUrl (producer) Required Kubernetes Master URL. String 71.4.2. Query Parameters (21 parameters) Name Description Default Type apiVersion (producer) The Kubernetes API Version to use. String dnsDomain (producer) The dns domain, used for ServiceCall EIP. String kubernetesClient (producer) Default KubernetesClient to use if provided. KubernetesClient namespace (producer) The namespace. String operation (producer) Producer operation to do on Kubernetes. String portName (producer) The port name, used for ServiceCall EIP. String portProtocol (producer) The port protocol, used for ServiceCall EIP. tcp String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 71.5. Message Headers The Kubernetes Persistent Volume component supports 3 message headers, which are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesPersistentVolumesLabels (producer) Constant: KUBERNETES_PERSISTENT_VOLUMES_LABELS The persistent volume labels. Map CamelKubernetesPersistentVolumeName (producer) Constant: KUBERNETES_PERSISTENT_VOLUME_NAME The persistent volume name. String 71.6. Supported producer operations listPersistentVolumes listPersistentVolumesByLabels getPersistentVolume 71.7. Kubernetes Persistent Volumes Producer Examples listPersistentVolumes: this operation lists the persistent volumes (pv) on a Kubernetes cluster. from("direct:list"). toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes"). to("mock:result"); This operation returns a List of pv from your cluster. 
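The component also supports the getPersistentVolume operation listed above, although this chapter does not show an example for it. The following route is a minimal sketch of that operation; the persistent volume name my-pv is only a placeholder for illustration, and the route simply sets the CamelKubernetesPersistentVolumeName header documented in the Message Headers section:
from("direct:get").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { /* name of the persistent volume to read; "my-pv" is a placeholder */ exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUME_NAME, "my-pv"); } }). toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=getPersistentVolume"). to("mock:result");
This operation returns the matching PersistentVolume from your cluster in the message body.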
listPersistentVolumesByLabels: this operation lists the pv by labels on a kubernetes cluster from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_LABELS, labels); } }); toF("kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels"). to("mock:result"); This operation returns a List of pv from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 71.8. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. 
false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. 
This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
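As a usage note, the Spring Boot options above map directly to entries in application.properties. The following is a minimal sketch for this component only; the property keys are taken from the table above and the values shown simply make the documented defaults explicit:
camel.component.kubernetes-persistent-volumes.enabled=true
camel.component.kubernetes-persistent-volumes.autowired-enabled=true
camel.component.kubernetes-persistent-volumes.lazy-start-producer=false
If you already have an io.fabric8.kubernetes.client.KubernetesClient bean in the registry, the camel.component.kubernetes-persistent-volumes.kubernetes-client option described in the table binds it to the component; with autowiring enabled, a single matching bean is picked up automatically.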
|
[
"<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>",
"kubernetes-persistent-volumes:masterUrl",
"from(\"direct:list\"). toF(\"kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumes\"). to(\"mock:result\");",
"from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_PERSISTENT_VOLUMES_LABELS, labels); } }); toF(\"kubernetes-persistent-volumes:///?kubernetesClient=#kubernetesClient&operation=listPersistentVolumesByLabels\"). to(\"mock:result\");"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-persistent-volume-component-starter
|
Chapter 27. Load balancing with MetalLB
|
Chapter 27. Load balancing with MetalLB 27.1. Configuring MetalLB address pools As a cluster administrator, you can add, modify, and delete address pools. The MetalLB Operator uses the address pool custom resources to set the IP addresses that MetalLB can assign to services. The examples assume that the namespace is metallb-system . For more information about how to install the MetalLB Operator, see About MetalLB and the MetalLB Operator . 27.1.1. About the IPAddressPool custom resource The fields for the IPAddressPool custom resource are described in the following tables. Table 27.1. MetalLB IPAddressPool custom resource Field Type Description metadata.name string Specifies the name for the address pool. When you add a service, you can specify this pool name in the metallb.universe.tf/address-pool annotation to select an IP address from a specific pool. The names doc-example , silver , and gold are used throughout the documentation. metadata.namespace string Specifies the namespace for the address pool. Specify the same namespace that the MetalLB Operator uses. metadata.label string Optional: Specifies the key-value pair assigned to the IPAddressPool . This can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement and L2Advertisement CRD to associate the IPAddressPool with the advertisement. spec.addresses string Specifies a list of IP addresses for the MetalLB Operator to assign to services. You can specify multiple ranges in a single pool; they will all share the same settings. Specify each range in CIDR notation or as starting and ending IP addresses separated with a hyphen. spec.autoAssign boolean Optional: Specifies whether MetalLB automatically assigns IP addresses from this pool. Specify false if you want to explicitly request an IP address from this pool with the metallb.universe.tf/address-pool annotation. The default value is true . spec.avoidBuggyIPs boolean Optional: When enabled, this ensures that IP addresses ending in .0 and .255 are not allocated from the pool. The default value is false . Some older consumer network equipment mistakenly blocks IP addresses ending in .0 and .255. You can assign IP addresses from an IPAddressPool to services and namespaces by configuring the spec.serviceAllocation specification. Table 27.2. MetalLB IPAddressPool custom resource spec.serviceAllocation subfields Field Type Description priority int Optional: Defines the priority between IP address pools when more than one IP address pool matches a service or namespace. A lower number indicates a higher priority. namespaces array (string) Optional: Specifies a list of namespaces that you can assign to IP addresses in an IP address pool. namespaceSelectors array (LabelSelector) Optional: Specifies namespace labels that you can assign to IP addresses from an IP address pool by using label selectors in a list format. serviceSelectors array (LabelSelector) Optional: Specifies service labels that you can assign to IP addresses from an address pool by using label selectors in a list format. 27.1.2. Configuring an address pool As a cluster administrator, you can add address pools to your cluster to control the IP addresses that MetalLB can assign to load-balancer services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Verification View the address pool: USD oc describe -n metallb-system IPAddressPool doc-example Example output Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: ... Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none> Confirm that the address pool name, such as doc-example , and the IP address ranges appear in the output. 27.1.3. Configure MetalLB address pool for VLAN As a cluster administrator, you can add address pools to your cluster to control the IP addresses on a created VLAN that MetalLB can assign to load-balancer services Prerequisites Install the OpenShift CLI ( oc ). Configure a separate VLAN. Log in as a user with cluster-admin privileges. Procedure Create a file, such as ipaddresspool-vlan.yaml , that is similar to the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2 1 This label assigned to the IPAddressPool can be referenced by the ipAddressPoolSelectors in the BGPAdvertisement CRD to associate the IPAddressPool with the advertisement. 2 This IP range must match the subnet assigned to the VLAN on your network. To support layer 2 (L2) mode, the IP address range must be within the same subnet as the cluster nodes. Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool-vlan.yaml To ensure this configuration applies to the VLAN you need to set the spec gatewayConfig.ipForwarding to Global . Run the following command to edit the network configuration custom resource (CR): USD oc edit network.config.openshift/cluster Update the spec.defaultNetwork.ovnKubernetesConfig section to include the gatewayConfig.ipForwarding set to Global . It should look something like this: Example ... spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global ... 27.1.4. Example address pool configurations 27.1.4.1. Example: IPv4 and CIDR ranges You can specify a range of IP addresses in CIDR notation. You can combine CIDR notation with the notation that uses a hyphen to separate lower and upper bounds. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5 27.1.4.2. Example: Reserve IP addresses You can set the autoAssign field to false to prevent MetalLB from automatically assigning the IP addresses from the pool. When you add a service, you can request a specific IP address from the pool or you can specify the pool name in an annotation to request any IP address from the pool. 
apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false 27.1.4.3. Example: IPv4 and IPv6 addresses You can add address pools that use IPv4 and IPv6. You can specify multiple ranges in the addresses list, just like several IPv4 examples. Whether the service is assigned a single IPv4 address, a single IPv6 address, or both is determined by how you add the service. The spec.ipFamilies and spec.ipFamilyPolicy fields control how IP addresses are assigned to the service. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100 27.1.4.4. Example: Assign IP address pools to services or namespaces You can assign IP addresses from an IPAddressPool to services and namespaces that you specify. If you assign a service or namespace to more than one IP address pool, MetalLB uses an available IP address from the higher-priority IP address pool. If no IP addresses are available from the assigned IP address pools with a high priority, MetalLB uses available IP addresses from an IP address pool with lower priority or no priority. Note You can use the matchLabels label selector, the matchExpressions label selector, or both, for the namespaceSelectors and serviceSelectors specifications. This example demonstrates one label selector for each specification. apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1 1 Assign a priority to the address pool. A lower number indicates a higher priority. 2 Assign one or more namespaces to the IP address pool in a list format. 3 Assign one or more namespace labels to the IP address pool by using label selectors in a list format. 4 Assign one or more service labels to the IP address pool by using label selectors in a list format. 27.1.5. steps Configuring MetalLB with an L2 advertisement and label Configuring MetalLB BGP peers Configuring services to use MetalLB 27.2. About advertising for the IP address pools You can configure MetalLB so that the IP address is advertised with layer 2 protocols, the BGP protocol, or both. With layer 2, MetalLB provides a fault-tolerant external IP address. With BGP, MetalLB provides fault-tolerance for the external IP address and load balancing. MetalLB supports advertising using L2 and BGP for the same set of IP addresses. MetalLB provides the flexibility to assign address pools to specific BGP peers effectively to a subset of nodes on the network. This allows for more complex configurations, for example facilitating the isolation of nodes or the segmentation of the network. 27.2.1. About the BGPAdvertisement custom resource The fields for the BGPAdvertisements object are defined in the following table: Table 27.3. BGPAdvertisements configuration Field Type Description metadata.name string Specifies the name for the BGP advertisement. metadata.namespace string Specifies the namespace for the BGP advertisement. Specify the same namespace that the MetalLB Operator uses. spec.aggregationLength integer Optional: Specifies the number of bits to include in a 32-bit CIDR mask. 
To aggregate the routes that the speaker advertises to BGP peers, the mask is applied to the routes for several service IP addresses and the speaker advertises the aggregated route. For example, with an aggregation length of 24 , the speaker can aggregate several 10.0.1.x/32 service IP addresses and advertise a single 10.0.1.0/24 route. spec.aggregationLengthV6 integer Optional: Specifies the number of bits to include in a 128-bit CIDR mask. For example, with an aggregation length of 124 , the speaker can aggregate several fc00:f853:0ccd:e799::x/128 service IP addresses and advertise a single fc00:f853:0ccd:e799::0/124 route. spec.communities string Optional: Specifies one or more BGP communities. Each community is specified as two 16-bit values separated by the colon character. Well-known communities must be specified as 16-bit values: NO_EXPORT : 65535:65281 NO_ADVERTISE : 65535:65282 NO_EXPORT_SUBCONFED : 65535:65283 Note You can also use community objects that are created along with the strings. spec.localPref integer Optional: Specifies the local preference for this advertisement. This BGP attribute applies to BGP sessions within the Autonomous System. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors allows to limit the nodes to announce as hops for the load balancer IP. When empty, all the nodes are announced as hops. spec.peers string Optional: Use a list to specify the metadata.name values for each BGPPeer resource that receives advertisements for the MetalLB service IP address. The MetalLB service IP address is assigned from the IP address pool. By default, the MetalLB service IP address is advertised to all configured BGPPeer resources. Use this field to limit the advertisement to specific BGPpeer resources. 27.2.2. Configuring MetalLB with a BGP advertisement and a basic use case Configure MetalLB as follows so that the peer BGP routers receive one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. Because the localPref and communities fields are not specified, the routes are advertised with localPref set to zero and no BGP communities. 27.2.2.1. Example: Advertise a basic address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. 
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic Apply the configuration: USD oc apply -f bgpadvertisement.yaml 27.2.3. Configuring MetalLB with a BGP advertisement and an advanced use case Configure MetalLB as follows so that MetalLB assigns IP addresses to load-balancer services in the ranges between 203.0.113.200 and 203.0.113.203 and between fc00:f853:ccd:e799::0 and fc00:f853:ccd:e799::f . To explain the two BGP advertisements, consider an instance when MetalLB assigns the IP address of 203.0.113.200 to a service. With that IP address as an example, the speaker advertises two routes to BGP peers: 203.0.113.200/32 , with localPref set to 100 and the community set to the numeric value of the NO_ADVERTISE community. This specification indicates to the peer routers that they can use this route but they should not propagate information about this route to BGP peers. 203.0.113.200/30 , aggregates the load-balancer IP addresses assigned by MetalLB into a single route. MetalLB advertises the aggregated route to BGP peers with the community attribute set to 8000:800 . BGP peers propagate the 203.0.113.200/30 route to other BGP peers. When traffic is routed to a node with a speaker, the 203.0.113.200/32 route is used to forward the traffic into the cluster and to a pod that is associated with the service. As you add more services and MetalLB assigns more load-balancer IP addresses from the pool, peer routers receive one local route, 203.0.113.20x/32 , for each service, as well as the 203.0.113.200/30 aggregate route. Each service that you add generates the /30 route, but MetalLB deduplicates the routes to one BGP advertisement before communicating with peer routers. 27.2.3.1. Example: Advertise an advanced address pool configuration with BGP Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a BGP advertisement. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 27.2.4. 
Advertising an IP address pool from a subset of nodes To advertise an IP address from an IP address pool from a specific set of nodes only, use the .spec.nodeSelector specification in the BGPAdvertisement custom resource. This specification associates a pool of IP addresses with a set of nodes in the cluster. This is useful when you have nodes on different subnets in a cluster and you want to advertise IP addresses from an address pool on a specific subnet, for example a public-facing subnet only. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool by using a custom resource: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Control which nodes in the cluster advertise the IP address from pool1 by defining the .spec.nodeSelector value in the BGPAdvertisement custom resource: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB In this example, the IP address from pool1 is advertised from NodeA and NodeB only. 27.2.5. About the L2Advertisement custom resource The fields for the l2Advertisements object are defined in the following table: Table 27.4. L2 advertisements configuration Field Type Description metadata.name string Specifies the name for the L2 advertisement. metadata.namespace string Specifies the namespace for the L2 advertisement. Specify the same namespace that the MetalLB Operator uses. spec.ipAddressPools string Optional: The list of IPAddressPools to advertise with this advertisement, selected by name. spec.ipAddressPoolSelectors string Optional: A selector for the IPAddressPools that gets advertised with this advertisement. This is for associating the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. If no IPAddressPool is selected by this or by the list, the advertisement is applied to all the IPAddressPools . spec.nodeSelectors string Optional: NodeSelectors limits the nodes to announce as next hops for the load balancer IP. When empty, all the nodes are announced as next hops. Important Limiting the nodes to announce as next hops is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . spec.interfaces string Optional: The list of interfaces that are used to announce the load balancer IP. 27.2.6. Configuring MetalLB with an L2 advertisement Configure MetalLB as follows so that the IPAddressPool is advertised with the L2 protocol. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool.
Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement. Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 Apply the configuration: USD oc apply -f l2advertisement.yaml 27.2.7. Configuring MetalLB with a L2 advertisement and label The ipAddressPoolSelectors field in the BGPAdvertisement and L2Advertisement custom resource definitions is used to associate the IPAddressPool to the advertisement based on the label assigned to the IPAddressPool instead of the name itself. This example shows how to configure MetalLB so that the IPAddressPool is advertised with the L2 protocol by configuring the ipAddressPoolSelectors field. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP using ipAddressPoolSelectors . Create a file, such as l2advertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east Apply the configuration: USD oc apply -f l2advertisement.yaml 27.2.8. Configuring MetalLB with an L2 advertisement for selected interfaces By default, the IP addresses from the IP address pool that is assigned to the service are advertised from all the network interfaces. The interfaces field in the L2Advertisement custom resource definition is used to restrict those network interfaces that advertise the IP address pool. This example shows how to configure MetalLB so that the IP address pool is advertised only from the network interfaces listed in the interfaces field of all nodes. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false Apply the configuration for the IP address pool like the following example: USD oc apply -f ipaddresspool.yaml Create a L2 advertisement advertising the IP with interfaces selector.
Create a YAML file, such as l2advertisement.yaml , and enter the configuration details like the following example: apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB Apply the configuration for the advertisement like the following example: USD oc apply -f l2advertisement.yaml Important The interface selector does not affect how MetalLB chooses the node to announce a given IP by using L2. The chosen node does not announce the service if the node does not have the selected interface. 27.2.9. Configuring MetalLB with secondary networks From OpenShift Container Platform 4.14, the default network behavior is to not allow forwarding of IP packets between network interfaces. Therefore, when MetalLB is configured on a secondary interface, you need to add a machine configuration to enable IP forwarding for only the required interfaces. Note OpenShift Container Platform clusters upgraded from 4.13 are not affected because a global parameter is set during upgrade to enable global IP forwarding. To enable IP forwarding for the secondary interface, you have two options: Enable IP forwarding for all interfaces. Enable IP forwarding for a specific interface. Note Enabling IP forwarding for a specific interface provides more granular control, while enabling it for all interfaces applies a global setting. Procedure Enable forwarding for a specific secondary interface, such as bridge-net , by creating and applying a MachineConfig CR. Create the MachineConfig CR to enable IP forwarding for the specified secondary interface named bridge-net . Save the following YAML in the enable-ip-forward.yaml file: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,`echo -e "net.ipv4.conf.bridge-net.forwarding = 1\nnet.ipv6.conf.bridge-net.forwarding = 1\nnet.ipv4.conf.bridge-net.rp_filter = 0\nnet.ipv6.conf.bridge-net.rp_filter = 0" | base64 -w0` verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: "" 1 Node role where you want to enable IP forwarding, for example, worker Apply the configuration by running the following command: USD oc apply -f enable-ip-forward.yaml Alternatively, you can enable IP forwarding globally by running the following command: USD oc patch network.operator cluster -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"ipForwarding": "Global"}}}}}' 27.2.10. Additional resources Configuring a community alias . 27.3. Configuring MetalLB BGP peers As a cluster administrator, you can add, modify, and delete Border Gateway Protocol (BGP) peers. The MetalLB Operator uses the BGP peer custom resources to identify which peers the MetalLB speaker pods contact to start BGP sessions. The peers receive the route advertisements for the load-balancer IP addresses that MetalLB assigns to services. 27.3.1. About the BGP peer custom resource The fields for the BGP peer custom resource are described in the following table. Table 27.5. MetalLB BGP peer custom resource Field Type Description metadata.name string Specifies the name for the BGP peer custom resource. metadata.namespace string Specifies the namespace for the BGP peer custom resource.
spec.myASN integer Specifies the Autonomous System number for the local end of the BGP session. Specify the same value in all BGP peer custom resources that you add. The range is 0 to 4294967295 . spec.peerASN integer Specifies the Autonomous System number for the remote end of the BGP session. The range is 0 to 4294967295 . spec.peerAddress string Specifies the IP address of the peer to contact for establishing the BGP session. spec.sourceAddress string Optional: Specifies the IP address to use when establishing the BGP session. The value must be an IPv4 address. spec.peerPort integer Optional: Specifies the network port of the peer to contact for establishing the BGP session. The range is 0 to 16384 . spec.holdTime string Optional: Specifies the duration for the hold time to propose to the BGP peer. The minimum value is 3 seconds ( 3s ). The common units are seconds and minutes, such as 3s , 1m , and 5m30s . To detect path failures more quickly, also configure BFD. spec.keepaliveTime string Optional: Specifies the maximum interval between sending keep-alive messages to the BGP peer. If you specify this field, you must also specify a value for the holdTime field. The specified value must be less than the value for the holdTime field. spec.routerID string Optional: Specifies the router ID to advertise to the BGP peer. If you specify this field, you must specify the same value in every BGP peer custom resource that you add. spec.password string Optional: Specifies the MD5 password to send to the peer for routers that enforce TCP MD5 authenticated BGP sessions. spec.passwordSecret string Optional: Specifies name of the authentication secret for the BGP Peer. The secret must live in the metallb namespace and be of type basic-auth. spec.bfdProfile string Optional: Specifies the name of a BFD profile. spec.nodeSelectors object[] Optional: Specifies a selector, using match expressions and match labels, to control which nodes can connect to the BGP peer. spec.ebgpMultiHop boolean Optional: Specifies that the BGP peer is multiple network hops away. If the BGP peer is not directly connected to the same network, the speaker cannot establish a BGP session unless this field is set to true . This field applies to external BGP . External BGP is the term that is used to describe when a BGP peer belongs to a different Autonomous System. Note The passwordSecret field is mutually exclusive with the password field, and contains a reference to a secret containing the password to use. Setting both fields results in a failure of the parsing. 27.3.2. Configuring a BGP peer As a cluster administrator, you can add a BGP peer custom resource to exchange routing information with network routers and advertise the IP addresses for services. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Configure MetalLB with a BGP advertisement. Procedure Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml 27.3.3. Configure a specific set of BGP peers for a given address pool This procedure illustrates how to: Configure a set of address pools ( pool1 and pool2 ). Configure a set of BGP peers ( peer1 and peer2 ). Configure BGP advertisement to assign pool1 to peer1 and pool2 to peer2 . 
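After you complete this procedure, you can confirm that each peer receives only the routes for its assigned pool by inspecting the FRR state inside a speaker pod. The following commands are a minimal verification sketch and not part of the documented procedure; they assume that MetalLB runs in the metallb-system namespace and that the speaker pods carry the component=speaker label, as shown in the verification steps later in this document.
# Pick one speaker pod. Assumes the metallb-system namespace and the component=speaker label.
SPEAKER=$(oc get -n metallb-system pods -l component=speaker -o jsonpath='{.items[0].metadata.name}')
# Inspect the generated FRR configuration to confirm that the prefixes from pool1
# are associated with peer1 only and the prefixes from pool2 with peer2 only.
oc exec -n metallb-system "$SPEAKER" -c frr -- vtysh -c "show running-config"
# Confirm that both BGP sessions are established.
oc exec -n metallb-system "$SPEAKER" -c frr -- vtysh -c "show bgp summary"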
Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create address pool pool1 . Create a file, such as ipaddresspool1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400 Apply the configuration for the IP address pool pool1 : USD oc apply -f ipaddresspool1.yaml Create address pool pool2 . Create a file, such as ipaddresspool2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400 Apply the configuration for the IP address pool pool2 : USD oc apply -f ipaddresspool2.yaml Create BGP peer1 . Create a file, such as bgppeer1.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer1.yaml Create BGP peer2 . Create a file, such as bgppeer2.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer2: USD oc apply -f bgppeer2.yaml Create BGP advertisement 1. Create a file, such as bgpadvertisement1.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement1.yaml Create BGP advertisement 2. Create a file, such as bgpadvertisement2.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100 Apply the configuration: USD oc apply -f bgpadvertisement2.yaml 27.3.4. Exposing a service through a network VRF You can expose a service through a virtual routing and forwarding (VRF) instance by associating a VRF on a network interface with a BGP peer. Important Exposing a service through a VRF on a BGP peer is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . By using a VRF on a network interface to expose a service through a BGP peer, you can segregate traffic to the service, configure independent routing decisions, and enable multi-tenancy support on a network interface. 
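Before you configure the BGP peer, you can check on the node that the VRF instance and its routing table exist. The following commands are a minimal sketch and not part of the documented procedure; they assume a VRF device named ens4vrf, matching the frrviavrf.yaml example later in this section, and use a debug shell on the node.
# Open a debug shell on the node and switch to the host root file system.
oc debug node/<node_name>
chroot /host
# List VRF devices; the ens4vrf device from the example policy should appear.
ip -d link show type vrf
# Display the routes that belong to the VRF routing table.
ip route show vrf ens4vrf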
Note By establishing a BGP session through an interface belonging to a network VRF, MetalLB can advertise services through that interface and enable external traffic to reach the service through this interface. However, the network VRF routing table is different from the default VRF routing table used by OVN-Kubernetes. Therefore, the traffic cannot reach the OVN-Kubernetes network infrastructure. To enable the traffic directed to the service to reach the OVN-Kubernetes network infrastructure, you must configure routing rules to define the hops for network traffic. See the NodeNetworkConfigurationPolicy resource in "Managing symmetric routing with MetalLB" in the Additional resources section for more information. These are the high-level steps to expose a service through a network VRF with a BGP peer: Define a BGP peer and add a network VRF instance. Specify an IP address pool for MetalLB. Configure a BGP route advertisement for MetalLB to advertise a route using the specified IP address pool and the BGP peer associated with the VRF instance. Deploy a service to test the configuration. Prerequisites You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You defined a NodeNetworkConfigurationPolicy to associate a Virtual Routing and Forwarding (VRF) instance with a network interface. For more information about completing this prerequisite, see the Additional resources section. You installed MetalLB on your cluster. Procedure Create a BGPPeer custom resources (CR): Create a file, such as frrviavrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the network VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF. Note You must configure this network VRF instance in a NodeNetworkConfigurationPolicy CR. See the Additional resources for more information. Apply the configuration for the BGP peer by running the following command: USD oc apply -f frrviavrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 
Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create a Namespace , Deployment , and Service CR: Create a file, such as deploy-service.yaml , with content like the following example: apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: ["/bin/sh", "-c"] args: ["sleep INF"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer Apply the configuration for the namespace, deployment, and service by running the following command: USD oc apply -f deploy-service.yaml Verification Identify a MetalLB speaker pod by running the following command: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m Verify that the state of the BGP session is Established in the speaker pod by running the following command, replacing the variables to match your configuration: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> neigh" Example output BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09 ... Verify that the service is advertised correctly by running the following command: USD oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c "show bgp vrf <vrf_name> ipv4" Additional resources About virtual routing and forwarding Example: Network interface with a VRF instance node network configuration policy Configuring an egress service Managing symmetric routing with MetalLB 27.3.5. Example BGP peer configurations 27.3.5.1. Example: Limit which nodes connect to a BGP peer You can specify the node selectors field to control which nodes can connect to a BGP peer. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com] 27.3.5.2. Example: Specify a BFD profile for a BGP peer You can specify a BFD profile to associate with BGP peers. BFD compliments BGP by providing more rapid detection of communication failures between peers than BGP alone. apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: "10s" bfdProfile: doc-example-bfd-profile-full Note Deleting the bidirectional forwarding detection (BFD) profile and removing the bfdProfile added to the border gateway protocol (BGP) peer resource does not disable the BFD. Instead, the BGP peer starts using the default BFD profile. To disable BFD from a BGP peer resource, delete the BGP peer configuration and recreate it without a BFD profile. For more information, see BZ#2050824 . 27.3.5.3. Example: Specify BGP peers for dual-stack networking To support dual-stack networking, add one BGP peer custom resource for IPv4 and one BGP peer custom resource for IPv6. 
apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500 27.3.6. steps Configuring services to use MetalLB 27.4. Configuring community alias As a cluster administrator, you can configure a community alias and use it across different advertisements. 27.4.1. About the community custom resource The community custom resource is a collection of aliases for communities. Users can define named aliases to be used when advertising ipAddressPools using the BGPAdvertisement . The fields for the community custom resource are described in the following table. Note The community CRD applies only to BGPAdvertisement. Table 27.6. MetalLB community custom resource Field Type Description metadata.name string Specifies the name for the community . metadata.namespace string Specifies the namespace for the community . Specify the same namespace that the MetalLB Operator uses. spec.communities string Specifies a list of BGP community aliases that can be used in BGPAdvertisements. A community alias consists of a pair of name (alias) and value (number:number). Link the BGPAdvertisement to a community alias by referring to the alias name in its spec.communities field. Table 27.7. CommunityAlias Field Type Description name string The name of the alias for the community . value string The BGP community value corresponding to the given name. 27.4.2. Configuring MetalLB with a BGP advertisement and community alias Configure MetalLB as follows so that the IPAddressPool is advertised with the BGP protocol and the community alias set to the numeric value of the NO_ADVERTISE community. In the following example, the peer BGP router doc-example-peer-community receives one 203.0.113.200/32 route and one fc00:f853:ccd:e799::1/128 route for each load-balancer IP address that MetalLB assigns to a service. A community alias is configured with the NO_ADVERTISE community. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create an IP address pool. Create a file, such as ipaddresspool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 Apply the configuration for the IP address pool: USD oc apply -f ipaddresspool.yaml Create a community alias named community1 . apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282' Create a BGP peer named doc-example-bgp-peer . Create a file, such as bgppeer.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10 Apply the configuration for the BGP peer: USD oc apply -f bgppeer.yaml Create a BGP advertisement with the community alias. 
Create a file, such as bgpadvertisement.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer 1 Specify the CommunityAlias.name here and not the community custom resource (CR) name. Apply the configuration: USD oc apply -f bgpadvertisement.yaml 27.5. Configuring MetalLB BFD profiles As a cluster administrator, you can add, modify, and delete Bidirectional Forwarding Detection (BFD) profiles. The MetalLB Operator uses the BFD profile custom resources to identify which BGP sessions use BFD to provide faster path failure detection than BGP alone provides. 27.5.1. About the BFD profile custom resource The fields for the BFD profile custom resource are described in the following table. Table 27.8. BFD profile custom resource Field Type Description metadata.name string Specifies the name for the BFD profile custom resource. metadata.namespace string Specifies the namespace for the BFD profile custom resource. spec.detectMultiplier integer Specifies the detection multiplier to determine packet loss. The remote transmission interval is multiplied by this value to determine the connection loss detection timer. For example, when the local system has the detect multiplier set to 3 and the remote system has the transmission interval set to 300 , the local system detects failures only after 900 ms without receiving packets. The range is 2 to 255 . The default value is 3 . spec.echoMode boolean Specifies the echo transmission mode. If you are not using distributed BFD, echo transmission mode works only when the peer is also FRR. The default value is false and echo transmission mode is disabled. When echo transmission mode is enabled, consider increasing the transmission interval of control packets to reduce bandwidth usage. For example, consider increasing the transmit interval to 2000 ms. spec.echoInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send and receive echo packets. The range is 10 to 60000 . The default value is 50 ms. spec.minimumTtl integer Specifies the minimum expected TTL for an incoming control packet. This field applies to multi-hop sessions only. The purpose of setting a minimum TTL is to make the packet validation requirements more stringent and avoid receiving control packets from other sessions. The default value is 254 and indicates that the system expects only one hop between this system and the peer. spec.passiveMode boolean Specifies whether a session is marked as active or passive. A passive session does not attempt to start the connection. Instead, a passive session waits for control packets from a peer before it begins to reply. Marking a session as passive is useful when you have a router that acts as the central node of a star network and you want to avoid sending control packets that you do not need the system to send. The default value is false and marks the session as active. spec.receiveInterval integer Specifies the minimum interval that this system is capable of receiving control packets. The range is 10 to 60000 . The default value is 300 ms. spec.transmitInterval integer Specifies the minimum transmission interval, less jitter, that this system uses to send control packets. The range is 10 to 60000 . The default value is 300 ms. 27.5.2. 
Configuring a BFD profile As a cluster administrator, you can add a BFD profile and configure a BGP peer to use the profile. BFD provides faster path failure detection than BGP alone. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a file, such as bfdprofile.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254 Apply the configuration for the BFD profile: USD oc apply -f bfdprofile.yaml 27.5.3. Next steps Configure a BGP peer to use the BFD profile. 27.6. Configuring services to use MetalLB As a cluster administrator, when you add a service of type LoadBalancer , you can control how MetalLB assigns an IP address. 27.6.1. Request a specific IP address Like some other load-balancer implementations, MetalLB accepts the spec.loadBalancerIP field in the service specification. If the requested IP address is within a range from any address pool, MetalLB assigns the requested IP address. If the requested IP address is not within any range, MetalLB reports a warning. Example service YAML for a specific IP address apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address> If MetalLB cannot assign the requested IP address, the EXTERNAL-IP for the service reports <pending> and running oc describe service <service_name> includes an event like the following example. Example event when MetalLB cannot assign a requested IP address ... Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for "default/invalid-request": "4.3.2.1" is not allowed in config 27.6.2. Request an IP address from a specific pool To assign an IP address from a specific range when you are not concerned with the specific IP address, you can use the metallb.universe.tf/address-pool annotation to request an IP address from the specified address pool. Example service YAML for an IP address from a specific pool apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer If the address pool that you specify for <address_pool_name> does not exist, MetalLB attempts to assign an IP address from any pool that permits automatic assignment. 27.6.3. Accept any IP address By default, address pools are configured to permit automatic assignment. MetalLB assigns an IP address from these address pools. To accept any IP address from any pool that is configured for automatic assignment, no special annotation or configuration is required. Example service YAML for accepting any IP address apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer 27.6.4. Share a specific IP address By default, services do not share IP addresses.
However, if you need to colocate services on a single IP address, you can enable selective IP sharing by adding the metallb.universe.tf/allow-shared-ip annotation to the services. apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: "web-server-svc" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7 1 Specify the same value for the metallb.universe.tf/allow-shared-ip annotation. This value is referred to as the sharing key . 2 Specify different port numbers for the services. 3 Specify identical pod selectors if you must specify externalTrafficPolicy: local so the services send traffic to the same set of pods. If you use the cluster external traffic policy, then the pod selectors do not need to be identical. 4 Optional: If you specify the three preceding items, MetalLB might colocate the services on the same IP address. To ensure that services share an IP address, specify the IP address to share. By default, Kubernetes does not allow multiprotocol load balancer services. This limitation would normally make it impossible to run a service like DNS that needs to listen on both TCP and UDP. To work around this limitation of Kubernetes with MetalLB, create two services: For one service, specify TCP and for the second service, specify UDP. In both services, specify the same pod selector. Specify the same sharing key and spec.loadBalancerIP value to colocate the TCP and UDP services on the same IP address. 27.6.5. Configuring a service with MetalLB You can configure a load-balancing service to use an external IP address from an address pool. Prerequisites Install the OpenShift CLI ( oc ). Install the MetalLB Operator and start MetalLB. Configure at least one address pool. Configure your network to route traffic from the clients to the host network for the cluster. Procedure Create a <service_name>.yaml file. In the file, ensure that the spec.type field is set to LoadBalancer . Refer to the examples for information about how to request the external IP address that MetalLB assigns to the service. Create the service: USD oc apply -f <service_name>.yaml Example output service/<service_name> created Verification Describe the service: USD oc describe service <service_name> Example output 1 The annotation is present if you request an IP address from a specific pool. 2 The service type must indicate LoadBalancer . 3 The load-balancer ingress field indicates the external IP address if the service is assigned correctly. 4 The events field indicates the node name that is assigned to announce the external IP address. If you experience an error, the events field indicates the reason for the error. 27.7. Managing symmetric routing with MetalLB As a cluster administrator, you can effectively manage traffic for pods behind a MetalLB load-balancer service with multiple host interfaces by implementing features from MetalLB, NMState, and OVN-Kubernetes. 
By combining these features in this context, you can provide symmetric routing, traffic segregation, and support clients on different networks with overlapping CIDR addresses. To achieve this functionality, learn how to implement virtual routing and forwarding (VRF) instances with MetalLB, and configure egress services. Important Configuring symmetric traffic by using a VRF instance with MetalLB and an egress service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 27.7.1. Challenges of managing symmetric routing with MetalLB When you use MetalLB with multiple host interfaces, MetalLB exposes and announces a service through all available interfaces on the host. This can present challenges relating to network isolation, asymmetric return traffic and overlapping CIDR addresses. One option to ensure that return traffic reaches the correct client is to use static routes. However, with this solution, MetalLB cannot isolate the services and then announce each service through a different interface. Additionally, static routing requires manual configuration and requires maintenance if remote sites are added. A further challenge of symmetric routing when implementing a MetalLB service is scenarios where external systems expect the source and destination IP address for an application to be the same. The default behavior for OpenShift Container Platform is to assign the IP address of the host network interface as the source IP address for traffic originating from pods. This is problematic with multiple host interfaces. You can overcome these challenges by implementing a configuration that combines features from MetalLB, NMState, and OVN-Kubernetes. 27.7.2. Overview of managing symmetric routing by using VRFs with MetalLB You can overcome the challenges of implementing symmetric routing by using NMState to configure a VRF instance on a host, associating the VRF instance with a MetalLB BGPPeer resource, and configuring an egress service for egress traffic with OVN-Kubernetes. Figure 27.1. Network overview of managing symmetric routing by using VRFs with MetalLB The configuration process involves three stages: 1. Define a VRF and routing rules Configure a NodeNetworkConfigurationPolicy custom resource (CR) to associate a VRF instance with a network interface. Use the VRF routing table to direct ingress and egress traffic. 2. Link the VRF to a MetalLB BGPPeer Configure a MetalLB BGPPeer resource to use the VRF instance on a network interface. By associating the BGPPeer resource with the VRF instance, the designated network interface becomes the primary interface for the BGP session, and MetalLB advertises the services through this interface. 3. Configure an egress service Configure an egress service to choose the network associated with the VRF instance for egress traffic. Optional: Configure an egress service to use the IP address of the MetalLB load-balancer service as the source IP for egress traffic. 27.7.3. 
Configuring symmetric routing by using VRFs with MetalLB You can configure symmetric network routing for applications behind a MetalLB service that require the same ingress and egress network paths. This example associates a VRF routing table with MetalLB and an egress service to enable symmetric routing for ingress and egress traffic for pods behind a LoadBalancer service. Note If you use the sourceIPBy: "LoadBalancerIP" setting in the EgressService CR, you must specify the load-balancer node in the BGPAdvertisement custom resource (CR). You can use the sourceIPBy: "Network" setting on clusters that use OVN-Kubernetes configured with the gatewayConfig.routingViaHost specification set to true only. Additionally, if you use the sourceIPBy: "Network" setting, you must schedule the application workload on nodes configured with the network VRF instance. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Install the Kubernetes NMState Operator. Install the MetalLB Operator. Procedure Create a NodeNetworkConfigurationPolicy CR to define the VRF instance: Create a file, such as node-network-vrf.yaml , with content like the following example: apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: "true" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254 1 The name of the policy. 2 This example applies the policy to all nodes with the label vrf:true . 3 The name of the interface. 4 The type of interface. This example creates a VRF instance. 5 The node interface that the VRF attaches to. 6 The route table ID for the VRF. 7 The IPv4 address of the interface associated with the VRF. 8 Defines the configuration for network routes. The next-hop-address field defines the IP address of the next hop for the route. The next-hop-interface field defines the outgoing interface for the route. In this example, the VRF routing table is 2 , which references the ID that you define in the EgressService CR. 9 Defines additional route rules. The ip-to fields must match the Cluster Network CIDR and Service Network CIDR. You can view the values for these CIDR address specifications by running the following command: oc describe network.config/cluster . 10 The main routing table that the Linux kernel uses when calculating routes has the ID 254 . Apply the policy by running the following command: USD oc apply -f node-network-vrf.yaml Create a BGPPeer custom resource (CR): Create a file, such as frr-via-vrf.yaml , with content like the following example: apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1 1 Specifies the VRF instance to associate with the BGP peer. MetalLB can advertise services and make routing decisions based on the routing information in the VRF.
Apply the configuration for the BGP peer by running the following command: USD oc apply -f frr-via-vrf.yaml Create an IPAddressPool CR: Create a file, such as first-pool.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32 Apply the configuration for the IP address pool by running the following command: USD oc apply -f first-pool.yaml Create a BGPAdvertisement CR: Create a file, such as first-adv.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: "" 2 1 In this example, MetalLB advertises a range of IP addresses from the first-pool IP address pool to the frrviavrf BGP peer. 2 In this example, the EgressService CR configures the source IP address for egress traffic to use the load-balancer service IP address. Therefore, you must specify the load-balancer node for return traffic to use the same return path for the traffic originating from the pod. Apply the configuration for the BGP advertisement by running the following command: USD oc apply -f first-adv.yaml Create an EgressService CR: Create a file, such as egress-service.yaml , with content like the following example: apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: "LoadBalancerIP" 3 nodeSelector: matchLabels: vrf: "true" 4 network: "2" 5 1 Specify the name for the egress service. The name of the EgressService resource must match the name of the load-balancer service that you want to modify. 2 Specify the namespace for the egress service. The namespace for the EgressService must match the namespace of the load-balancer service that you want to modify. The egress service is namespace-scoped. 3 This example assigns the LoadBalancer service ingress IP address as the source IP address for egress traffic. 4 If you specify LoadBalancer for the sourceIPBy specification, a single node handles the LoadBalancer service traffic. In this example, only a node with the label vrf: "true" can handle the service traffic. If you do not specify a node, OVN-Kubernetes selects a worker node to handle the service traffic. When a node is selected, OVN-Kubernetes labels the node in the following format: egress-service.k8s.ovn.org/<svc_namespace>-<svc_name>: "" . 5 Specify the routing table for egress traffic. Apply the configuration for the egress service by running the following command: USD oc apply -f egress-service.yaml Verification Verify that you can access the application endpoint of the pods running behind the MetalLB service by running the following command: USD curl <external_ip_address>:<port_number> 1 1 Update the external IP address and port number to suit your application endpoint. Optional: If you assigned the LoadBalancer service ingress IP address as the source IP address for egress traffic, verify this configuration by using tools such as tcpdump to analyze packets received at the external client. Additional resources About virtual routing and forwarding Exposing a service through a network VRF Example: Network interface with a VRF instance node network configuration policy Configuring an egress service 27.8. 
Configuring the integration of MetalLB and FRR-K8s Important The FRRConfiguration custom resource is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . FRRouting (FRR) is a free, open source internet routing protocol suite for Linux and UNIX platforms. FRR-K8s is a Kubernetes based DaemonSet that exposes a subset of the FRR API in a Kubernetes-compliant manner. As a cluster administrator, you can use the FRRConfiguration custom resource (CR) to configure MetalLB to use FRR-K8s as the backend. You can use this to avail of FRR services, for example, receiving routes. If you run MetalLB with FRR-K8s as a backend, MetalLB generates the FRR-K8s configuration corresponding to the MetalLB configuration applied. 27.8.1. Activating the integration of MetalLB and FRR-K8s The following procedure shows you how to activate FRR-K8s as the backend for MetalLB . Prerequisites You have a cluster installed on bare-metal hardware. You have installed the OpenShift CLI ( oc ). You have logged in as a user with cluster-admin privileges. Procedure Set the bgpBackend field of the MetalLB CR to frr-k8s as in the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s 27.8.2. FRR configurations You can create multiple FRRConfiguration CRs to use FRR services in MetalLB . MetalLB generates an FRRConfiguration object which FRR-K8s merges with all other configurations that all users have created. For example, you can configure FRR-K8s to receive all of the prefixes advertised by a given neighbor. The following example configures FRR-K8s to receive all of the prefixes advertised by a BGPPeer with host 172.18.0.5 : Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all You can also configure FRR-K8s to always block a set of prefixes, regardless of the configuration applied. This can be useful to avoid routes towards the pods or ClusterIPs CIDRs that might result in cluster malfunctions. The following example blocks the set of prefixes 192.168.1.0/24 : Example MetalLB CR apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24 You can set FRR-K8s to block the Cluster Network CIDR and Service Network CIDR. You can view the values for these CIDR address specifications by running the following command: USD oc describe network.config/cluster 27.8.3. Configuring the FRRConfiguration CRD The following section provides reference examples that use the FRRConfiguration custom resource (CR). 27.8.3.1. The routers field You can use the routers field to configure multiple routers, one for each Virtual Routing and Forwarding (VRF) resource. For each router, you must define the Autonomous System Number (ASN). 
You can also define a list of Border Gateway Protocol (BGP) neighbors to connect to, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179 27.8.3.2. The toAdvertise field By default, FRR-K8s does not advertise the prefixes configured as part of a router configuration. In order to advertise them, you use the toAdvertise field. You can advertise a subset of the prefixes, as in the following example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises a subset of prefixes. The following example shows you how to advertise all of the prefixes: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24 1 Advertises all prefixes. 27.8.3.3. The toReceive field By default, FRR-K8s does not process any prefixes advertised by a neighbor. You can use the toReceive field to process such addresses. You can configure for a subset of the prefixes, as in this example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2 1 2 The prefix is applied if the prefix length is less than or equal to the le prefix length and greater than or equal to the ge prefix length. The following example configures FRR to handle all the prefixes announced: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all 27.8.3.4. The bgp field You can use the bgp field to define various BFD profiles and associate them with a neighbor. In the following example, BFD backs up the BGP session and FRR can detect link failures: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile 27.8.3.5. The nodeSelector field By default, FRR-K8s applies the configuration to all nodes where the daemon is running. You can use the nodeSelector field to specify the nodes to which you want to apply the configuration. For example: Example FRRConfiguration CR apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: "bar" The fields for the FRRConfiguration custom resource are described in the following table: Table 27.9. 
MetalLB FRRConfiguration custom resource Field Type Description spec.bgp.routers array Specifies the routers that FRR is to configure (one per VRF). spec.bgp.routers.asn integer The autonomous system number to use for the local end of the session. spec.bgp.routers.id string Specifies the ID of the bgp router. spec.bgp.routers.vrf string Specifies the host vrf used to establish sessions from this router. spec.bgp.routers.neighbors array Specifies the neighbors to establish BGP sessions with. spec.bgp.routers.neighbors.asn integer Specifies the autonomous system number to use for the local end of the session. spec.bgp.routers.neighbors.address string Specifies the IP address to establish the session with. spec.bgp.routers.neighbors.port integer Specifies the port to dial when establishing the session. Defaults to 179. spec.bgp.routers.neighbors.password string Specifies the password to use for establishing the BGP session. Password and PasswordSecret are mutually exclusive. spec.bgp.routers.neighbors.passwordSecret string Specifies the name of the authentication secret for the neighbor. The secret must be of type "kubernetes.io/basic-auth", and in the same namespace as the FRR-K8s daemon. The key "password" stores the password in the secret. Password and PasswordSecret are mutually exclusive. spec.bgp.routers.neighbors.holdTime duration Specifies the requested BGP hold time, per RFC4271. Defaults to 180s. spec.bgp.routers.neighbors.keepaliveTime duration Specifies the requested BGP keepalive time, per RFC4271. Defaults to 60s . spec.bgp.routers.neighbors.connectTime duration Specifies how long BGP waits between connection attempts to a neighbor. spec.bgp.routers.neighbors.ebgpMultiHop boolean Indicates if the BGPPeer is multi-hops away. spec.bgp.routers.neighbors.bfdProfile string Specifies the name of the BFD Profile to use for the BFD session associated with the BGP session. If not set, the BFD session is not set up. spec.bgp.routers.neighbors.toAdvertise.allowed array Represents the list of prefixes to advertise to a neighbor, and the associated properties. spec.bgp.routers.neighbors.toAdvertise.allowed.prefixes string array Specifies the list of prefixes to advertise to a neighbor. This list must match the prefixes that you define in the router. spec.bgp.routers.neighbors.toAdvertise.allowed.mode string Specifies the mode to use when handling the prefixes. You can set to filtered to allow only the prefixes in the prefixes list. You can set to all to allow all the prefixes configured on the router. spec.bgp.routers.neighbors.toAdvertise.withLocalPref array Specifies the prefixes associated with an advertised local preference. You must specify the prefixes associated with a local preference in the prefixes allowed to be advertised. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.prefixes string array Specifies the prefixes associated with the local preference. spec.bgp.routers.neighbors.toAdvertise.withLocalPref.localPref integer Specifies the local preference associated with the prefixes. spec.bgp.routers.neighbors.toAdvertise.withCommunity array Specifies the prefixes associated with an advertised BGP community. You must include the prefixes associated with a local preference in the list of prefixes that you want to advertise. spec.bgp.routers.neighbors.toAdvertise.withCommunity.prefixes string array Specifies the prefixes associated with the community. spec.bgp.routers.neighbors.toAdvertise.withCommunity.community string Specifies the community associated with the prefixes. 
spec.bgp.routers.neighbors.toReceive array Specifies the prefixes to receive from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed array Specifies the information that you want to receive from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.prefixes array Specifies the prefixes allowed from a neighbor. spec.bgp.routers.neighbors.toReceive.allowed.mode string Specifies the mode to use when handling the prefixes. When set to filtered , only the prefixes in the prefixes list are allowed. When set to all , all the prefixes configured on the router are allowed. spec.bgp.routers.neighbors.disableMP boolean Disables MP BGP to prevent it from separating IPv4 and IPv6 route exchanges into distinct BGP sessions. spec.bgp.routers.prefixes string array Specifies all prefixes to advertise from this router instance. spec.bgp.bfdProfiles array Specifies the list of bfd profiles to use when configuring the neighbors. spec.bgp.bfdProfiles.name string The name of the BFD Profile to be referenced in other parts of the configuration. spec.bgp.bfdProfiles.receiveInterval integer Specifies the minimum interval at which this system can receive control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.transmitInterval integer Specifies the minimum transmission interval, excluding jitter, that this system wants to use to send BFD control packets, in milliseconds. Defaults to 300ms . spec.bgp.bfdProfiles.detectMultiplier integer Configures the detection multiplier to determine packet loss. To determine the connection loss-detection timer, multiply the remote transmission interval by this value. spec.bgp.bfdProfiles.echoInterval integer Configures the minimal echo receive transmission-interval that this system can handle, in milliseconds. Defaults to 50ms . spec.bgp.bfdProfiles.echoMode boolean Enables or disables the echo transmission mode. This mode is disabled by default, and not supported on multihop setups. spec.bgp.bfdProfiles.passiveMode boolean Mark session as passive. A passive session does not attempt to start the connection and waits for control packets from peers before it begins replying. spec.bgp.bfdProfiles.MinimumTtl integer For multihop sessions only. Configures the minimum expected TTL for an incoming BFD control packet. spec.nodeSelector string Limits the nodes that attempt to apply this configuration. If specified, only those nodes whose labels match the specified selectors attempt to apply the configuration. If it is not specified, all nodes attempt to apply this configuration. status string Defines the observed state of FRRConfiguration. 27.8.4. How FRR-K8s merges multiple configurations In a case where multiple users add configurations that select the same node, FRR-K8s merges the configurations. Each configuration can only extend others. This means that it is possible to add a new neighbor to a router, or to advertise an additional prefix to a neighbor, but not possible to remove a component added by another configuration. 27.8.4.1. Configuration conflicts Certain configurations can cause conflicts, leading to errors, for example: different ASN for the same router (in the same VRF) different ASN for the same neighbor (with the same IP / port) multiple BFD profiles with the same name but different values When the daemon finds an invalid configuration for a node, it reports the configuration as invalid and reverts to the valid FRR configuration. 27.8.4.2. 
Merging When merging, it is possible to do the following actions: Extend the set of IPs that you want to advertise to a neighbor. Add an extra neighbor with its set of IPs. Extend the set of IPs to which you want to associate a community. Allow incoming routes for a neighbor. Each configuration must be self-contained. This means, for example, that it is not possible to allow prefixes that are not defined in the router section by leveraging prefixes coming from another configuration. If the configurations to be applied are compatible, merging works as follows: FRR-K8s combines all the routers. FRR-K8s merges all prefixes and neighbors for each router. FRR-K8s merges all filters for each neighbor. Note A less restrictive filter has precedence over a stricter one. For example, a filter accepting some prefixes has precedence over a filter not accepting any, and a filter accepting all prefixes has precedence over one that accepts some. 27.9. MetalLB logging, troubleshooting, and support If you need to troubleshoot MetalLB configuration, see the following sections for commonly used commands. 27.9.1. Setting the MetalLB logging levels MetalLB uses FRRouting (FRR) in a container, and the default logging setting of info generates a lot of logging. You can control the verbosity of the logs generated by setting the logLevel as illustrated in this example. Gain a deeper insight into MetalLB by setting the logLevel to debug as follows: Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a file, such as setdebugloglevel.yaml , with content like the following example: apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: "" Apply the configuration: USD oc replace -f setdebugloglevel.yaml Note Use oc replace because the metallb CR is assumed to already exist and you are only changing the log level. Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s Note Speaker and controller pods are recreated to ensure the updated logging level is applied. The logging level is modified for all the components of MetalLB. View the speaker logs: USD oc logs -n metallb-system speaker-7m4qw -c speaker Example output View the FRR logs: USD oc logs -n metallb-system speaker-7m4qw -c frr Example output 27.9.1.1. FRRouting (FRR) log levels The following table describes the FRR logging levels. Table 27.10. Log levels Log level Description all Supplies all logging information for all logging levels. debug Information that is diagnostically helpful to people. Set to debug to give detailed troubleshooting information. info Provides information that always should be logged but under normal circumstances does not require user intervention. This is the default logging level. warn Anything that can potentially cause inconsistent MetalLB behaviour. Usually MetalLB automatically recovers from this type of error. error Any error that is fatal to the functioning of MetalLB . These errors usually require administrator intervention to fix. none Turn off all logging. 27.9.2. Troubleshooting BGP issues The BGP implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods.
As a cluster administrator, if you need to troubleshoot BGP configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m ... Display the running configuration for FRR: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show running-config" Example output 1 The router bgp section indicates the ASN for MetalLB. 2 Confirm that a neighbor <ip-address> remote-as <peer-ASN> line exists for each BGP peer custom resource that you added. 3 If you configured BFD, confirm that the BFD profile is associated with the correct BGP peer and that the BFD profile appears in the command output. 4 Confirm that the network <ip-address-range> lines match the IP address ranges that you specified in address pool custom resources that you added. Display the BGP summary: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp summary" Example output 1 Confirm that the output includes a line for each BGP peer custom resource that you added. 2 Output that shows 0 messages received and messages sent indicates a BGP peer that does not have a BGP session. Check network connectivity and the BGP configuration of the BGP peer. Display the BGP peers that received an address pool: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bgp ipv4 unicast 203.0.113.200/30" Replace ipv4 with ipv6 to display the BGP peers that received an IPv6 address pool. Replace 203.0.113.200/30 with an IPv4 or IPv6 IP address range from an address pool. Example output 1 Confirm that the output includes an IP address for a BGP peer. 27.9.3. Troubleshooting BFD issues The Bidirectional Forwarding Detection (BFD) implementation that Red Hat supports uses FRRouting (FRR) in a container in the speaker pods. The BFD implementation relies on BFD peers also being configured as BGP peers with an established BGP session. As a cluster administrator, if you need to troubleshoot BFD configuration issues, you need to run commands in the FRR container. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Display the names of the speaker pods: USD oc get -n metallb-system pods -l component=speaker Example output NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m ... Display the BFD peers: USD oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c "show bfd peers brief" Example output <.> Confirm that the PeerAddress column includes each BFD peer. If the output does not list a BFD peer IP address that you expected the output to include, troubleshoot BGP connectivity with the peer. If the status field indicates down , check for connectivity on the links and equipment between the node and the peer. You can determine the node name for the speaker pod with a command like oc get pods -n metallb-system speaker-66bth -o jsonpath='{.spec.nodeName}' . 27.9.4. MetalLB metrics for BGP and BFD OpenShift Container Platform captures the following Prometheus metrics for MetalLB that relate to BGP peers and BFD profiles. Table 27.11. 
MetalLB BFD metrics Name Description metallb_bfd_control_packet_input Counts the number of BFD control packets received from each BFD peer. metallb_bfd_control_packet_output Counts the number of BFD control packets sent to each BFD peer. metallb_bfd_echo_packet_input Counts the number of BFD echo packets received from each BFD peer. metallb_bfd_echo_packet_output Counts the number of BFD echo packets sent to each BFD peer. metallb_bfd_session_down_events Counts the number of times the BFD session with a peer entered the down state. metallb_bfd_session_up Indicates the connection state with a BFD peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bfd_session_up_events Counts the number of times the BFD session with a peer entered the up state. metallb_bfd_zebra_notifications Counts the number of BFD Zebra notifications for each BFD peer. Table 27.12. MetalLB BGP metrics Name Description metallb_bgp_announced_prefixes_total Counts the number of load balancer IP address prefixes that are advertised to BGP peers. The terms prefix and aggregated route have the same meaning. metallb_bgp_session_up Indicates the connection state with a BGP peer. 1 indicates the session is up and 0 indicates the session is down . metallb_bgp_updates_total Counts the number of BGP update messages sent to each BGP peer. metallb_bgp_opens_sent Counts the number of BGP open messages sent to each BGP peer. metallb_bgp_opens_received Counts the number of BGP open messages received from each BGP peer. metallb_bgp_notifications_sent Counts the number of BGP notification messages sent to each BGP peer. metallb_bgp_updates_total_received Counts the number of BGP update messages received from each BGP peer. metallb_bgp_keepalives_sent Counts the number of BGP keepalive messages sent to each BGP peer. metallb_bgp_keepalives_received Counts the number of BGP keepalive messages received from each BGP peer. metallb_bgp_route_refresh_sent Counts the number of BGP route refresh messages sent to each BGP peer. metallb_bgp_total_sent Counts the number of total BGP messages sent to each BGP peer. metallb_bgp_total_received Counts the number of total BGP messages received from each BGP peer. Additional resources See Querying metrics for all projects with the monitoring dashboard for information about using the monitoring dashboard. 27.9.5. About collecting MetalLB data You can use the oc adm must-gather CLI command to collect information about your cluster, your MetalLB configuration, and the MetalLB Operator. The following features and objects are associated with MetalLB and the MetalLB Operator: The namespace and child objects that the MetalLB Operator is deployed in All MetalLB Operator custom resource definitions (CRDs) The oc adm must-gather CLI command collects the following information from FRRouting (FRR) that Red Hat uses to implement BGP and BFD: /etc/frr/frr.conf /etc/frr/frr.log /etc/frr/daemons configuration file /etc/frr/vtysh.conf The log and configuration files in the preceding list are collected from the frr container in each speaker pod. In addition to the log and configuration files, the oc adm must-gather CLI command collects the output from the following vtysh commands: show running-config show bgp ipv4 show bgp ipv6 show bgp neighbor show bfd peer No additional configuration is required when you run the oc adm must-gather CLI command. Additional resources Gathering data about your cluster
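The BGP and BFD metrics in the preceding tables can be used to drive cluster alerts. The following is a minimal, hypothetical PrometheusRule sketch that fires when a BGP session is reported down; the rule name, namespace, duration, severity, and the peer label used in the message are illustrative assumptions rather than values taken from this documentation.

apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: metallb-bgp-alerts          # hypothetical name
  namespace: metallb-system         # assumed namespace for the rule
spec:
  groups:
  - name: metallb-bgp
    rules:
    - alert: MetalLBBGPSessionDown
      # metallb_bgp_session_up reports 1 when the session is up and 0 when it is down
      expr: metallb_bgp_session_up == 0
      for: 5m                       # illustrative grace period
      labels:
        severity: warning
      annotations:
        summary: "BGP session with peer {{ $labels.peer }} is down"

A similar rule on metallb_bfd_session_up == 0 can cover BFD-backed sessions.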
|
[
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example labels: 1 zone: east spec: addresses: - 203.0.113.1-203.0.113.10 - 203.0.113.65-203.0.113.75",
"oc apply -f ipaddresspool.yaml",
"oc describe -n metallb-system IPAddressPool doc-example",
"Name: doc-example Namespace: metallb-system Labels: zone=east Annotations: <none> API Version: metallb.io/v1beta1 Kind: IPAddressPool Metadata: Spec: Addresses: 203.0.113.1-203.0.113.10 203.0.113.65-203.0.113.75 Auto Assign: true Events: <none>",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-vlan labels: zone: east 1 spec: addresses: - 192.168.100.1-192.168.100.254 2",
"oc apply -f ipaddresspool-vlan.yaml",
"oc edit network.config.openshift/cluster",
"spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: gatewayConfig: ipForwarding: Global",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-cidr namespace: metallb-system spec: addresses: - 192.168.100.0/24 - 192.168.200.0/24 - 192.168.255.1-192.168.255.5",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-reserved namespace: metallb-system spec: addresses: - 10.0.100.0/28 autoAssign: false",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-combined namespace: metallb-system spec: addresses: - 10.0.100.0/28 - 2002:2:2::1-2002:2:2::100",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: doc-example-service-allocation namespace: metallb-system spec: addresses: - 192.168.20.0/24 serviceAllocation: priority: 50 1 namespaces: 2 - namespace-a - namespace-b namespaceSelectors: 3 - matchLabels: zone: east serviceSelectors: 4 - matchExpressions: - key: security operator: In values: - S1",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-basic spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-basic namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-basic",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-adv labels: zone: east spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-1 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 65535:65282 aggregationLength: 32 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-adv-2 namespace: metallb-system spec: ipAddressPools: - doc-example-bgp-adv communities: - 8000:800 aggregationLength: 30 aggregationLengthV6: 124",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: example spec: ipAddressPools: - pool1 nodeSelector: - matchLabels: kubernetes.io/hostname: NodeA - matchLabels: kubernetes.io/hostname: NodeB",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2-label labels: zone: east spec: addresses: - 172.31.249.87/32",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement-label namespace: metallb-system spec: ipAddressPoolSelectors: - matchExpressions: - key: zone operator: In values: - east",
"oc apply -f l2advertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-l2 spec: addresses: - 4.4.4.0/24 autoAssign: false",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: L2Advertisement metadata: name: l2advertisement namespace: metallb-system spec: ipAddressPools: - doc-example-l2 interfaces: - interfaceA - interfaceB",
"oc apply -f l2advertisement.yaml",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: <node_role> 1 name: 81-enable-global-forwarding spec: config: ignition: version: 3.2.0 storage: files: - contents: source: data:text/plain;charset=utf-8;base64,`echo -e \"net.ipv4.conf.bridge-net.forwarding = 1\\nnet.ipv6.conf.bridge-net.forwarding = 1\\nnet.ipv4.conf.bridge-net.rp_filter = 0\\nnet.ipv6.conf.bridge-net.rp_filter = 0\" | base64 -w0` verification: {} filesystem: root mode: 644 path: /etc/sysctl.d/enable-global-forwarding.conf osImageURL: \"\"",
"oc apply -f enable-ip-forward.yaml",
"oc patch network.operator cluster -p '{\"spec\":{\"defaultNetwork\":{\"ovnKubernetesConfig\":{\"gatewayConfig\":{\"ipForwarding\": \"Global\"}}}}}",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool1 spec: addresses: - 4.4.4.100-4.4.4.200 - 2001:100:4::200-2001:100:4::400",
"oc apply -f ipaddresspool1.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: pool2 spec: addresses: - 5.5.5.100-5.5.5.200 - 2001:100:5::200-2001:100:5::400",
"oc apply -f ipaddresspool2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer1 spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer1.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: peer2 spec: peerAddress: 10.0.0.2 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer2.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-1 namespace: metallb-system spec: ipAddressPools: - pool1 peers: - peer1 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement1.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgpadvertisement-2 namespace: metallb-system spec: ipAddressPools: - pool2 peers: - peer2 communities: - 65535:65282 aggregationLength: 32 aggregationLengthV6: 128 localPref: 100",
"oc apply -f bgpadvertisement2.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frrviavrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1",
"oc apply -f first-adv.yaml",
"apiVersion: v1 kind: Namespace metadata: name: test --- apiVersion: apps/v1 kind: Deployment metadata: name: server namespace: test spec: selector: matchLabels: app: server template: metadata: labels: app: server spec: containers: - name: server image: registry.redhat.io/ubi9/ubi ports: - name: http containerPort: 30100 command: [\"/bin/sh\", \"-c\"] args: [\"sleep INF\"] --- apiVersion: v1 kind: Service metadata: name: server1 namespace: test spec: ports: - name: http port: 30100 protocol: TCP targetPort: 30100 selector: app: server type: LoadBalancer",
"oc apply -f deploy-service.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-c6c5f 6/6 Running 0 69m",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> neigh\"",
"BGP neighbor is 192.168.30.1, remote AS 200, local AS 100, external link BGP version 4, remote router ID 192.168.30.1, local router ID 192.168.30.71 BGP state = Established, up for 04:20:09",
"oc exec -n metallb-system <speaker_pod> -c frr -- vtysh -c \"show bgp vrf <vrf_name> ipv4\"",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-nodesel namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 nodeSelectors: - matchExpressions: - key: kubernetes.io/hostname operator: In values: [compute-1.example.com, compute-2.example.com]",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-peer-bfd namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64501 myASN: 64500 holdTime: \"10s\" bfdProfile: doc-example-bfd-profile-full",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv4 namespace: metallb-system spec: peerAddress: 10.0.20.1 peerASN: 64500 myASN: 64500 --- apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: doc-example-dual-stack-ipv6 namespace: metallb-system spec: peerAddress: 2620:52:0:88::104 peerASN: 64500 myASN: 64500",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: namespace: metallb-system name: doc-example-bgp-community spec: addresses: - 203.0.113.200/30 - fc00:f853:ccd:e799::/124",
"oc apply -f ipaddresspool.yaml",
"apiVersion: metallb.io/v1beta1 kind: Community metadata: name: community1 namespace: metallb-system spec: communities: - name: NO_ADVERTISE value: '65535:65282'",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: namespace: metallb-system name: doc-example-bgp-peer spec: peerAddress: 10.0.0.1 peerASN: 64501 myASN: 64500 routerID: 10.10.10.10",
"oc apply -f bgppeer.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: bgp-community-sample namespace: metallb-system spec: aggregationLength: 32 aggregationLengthV6: 128 communities: - NO_ADVERTISE 1 ipAddressPools: - doc-example-bgp-community peers: - doc-example-peer",
"oc apply -f bgpadvertisement.yaml",
"apiVersion: metallb.io/v1beta1 kind: BFDProfile metadata: name: doc-example-bfd-profile-full namespace: metallb-system spec: receiveInterval: 300 transmitInterval: 300 detectMultiplier: 3 echoMode: false passiveMode: true minimumTtl: 254",
"oc apply -f bfdprofile.yaml",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer loadBalancerIP: <ip_address>",
"Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning AllocationFailed 3m16s metallb-controller Failed to allocate IP for \"default/invalid-request\": \"4.3.2.1\" is not allowed in config",
"apiVersion: v1 kind: Service metadata: name: <service_name> annotations: metallb.universe.tf/address-pool: <address_pool_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: <service_name> spec: selector: <label_key>: <label_value> ports: - port: 8080 targetPort: 8080 protocol: TCP type: LoadBalancer",
"apiVersion: v1 kind: Service metadata: name: service-http annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" 1 spec: ports: - name: http port: 80 2 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> 3 type: LoadBalancer loadBalancerIP: 172.31.249.7 4 --- apiVersion: v1 kind: Service metadata: name: service-https annotations: metallb.universe.tf/address-pool: doc-example metallb.universe.tf/allow-shared-ip: \"web-server-svc\" spec: ports: - name: https port: 443 protocol: TCP targetPort: 8080 selector: <label_key>: <label_value> type: LoadBalancer loadBalancerIP: 172.31.249.7",
"oc apply -f <service_name>.yaml",
"service/<service_name> created",
"oc describe service <service_name>",
"Name: <service_name> Namespace: default Labels: <none> Annotations: metallb.universe.tf/address-pool: doc-example 1 Selector: app=service_name Type: LoadBalancer 2 IP Family Policy: SingleStack IP Families: IPv4 IP: 10.105.237.254 IPs: 10.105.237.254 LoadBalancer Ingress: 192.168.100.5 3 Port: <unset> 80/TCP TargetPort: 8080/TCP NodePort: <unset> 30550/TCP Endpoints: 10.244.0.50:8080 Session Affinity: None External Traffic Policy: Cluster Events: 4 Type Reason Age From Message ---- ------ ---- ---- ------- Normal nodeAssigned 32m (x2 over 32m) metallb-speaker announcing from node \"<node_name>\"",
"apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: vrfpolicy 1 spec: nodeSelector: vrf: \"true\" 2 maxUnavailable: 3 desiredState: interfaces: - name: ens4vrf 3 type: vrf 4 state: up vrf: port: - ens4 5 route-table-id: 2 6 - name: ens4 7 type: ethernet state: up ipv4: address: - ip: 192.168.130.130 prefix-length: 24 dhcp: false enabled: true routes: 8 config: - destination: 0.0.0.0/0 metric: 150 next-hop-address: 192.168.130.1 next-hop-interface: ens4 table-id: 2 route-rules: 9 config: - ip-to: 172.30.0.0/16 priority: 998 route-table: 254 10 - ip-to: 10.132.0.0/14 priority: 998 route-table: 254",
"oc apply -f node-network-vrf.yaml",
"apiVersion: metallb.io/v1beta2 kind: BGPPeer metadata: name: frrviavrf namespace: metallb-system spec: myASN: 100 peerASN: 200 peerAddress: 192.168.130.1 vrf: ens4vrf 1",
"oc apply -f frr-via-vrf.yaml",
"apiVersion: metallb.io/v1beta1 kind: IPAddressPool metadata: name: first-pool namespace: metallb-system spec: addresses: - 192.169.10.0/32",
"oc apply -f first-pool.yaml",
"apiVersion: metallb.io/v1beta1 kind: BGPAdvertisement metadata: name: first-adv namespace: metallb-system spec: ipAddressPools: - first-pool peers: - frrviavrf 1 nodeSelectors: - matchLabels: egress-service.k8s.ovn.org/test-server1: \"\" 2",
"oc apply -f first-adv.yaml",
"apiVersion: k8s.ovn.org/v1 kind: EgressService metadata: name: server1 1 namespace: test 2 spec: sourceIPBy: \"LoadBalancerIP\" 3 nodeSelector: matchLabels: vrf: \"true\" 4 network: \"2\" 5",
"oc apply -f egress-service.yaml",
"curl <external_ip_address>:<port_number> 1",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: metallb-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 toReceive: allowed: mode: all",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: bgpBackend: frr-k8s frrk8sConfig: alwaysBlock: - 192.168.1.0/24",
"oc describe network.config/cluster",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 - address: 172.18.0.6 asn: 4200000000 port: 179",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: prefixes: 1 - 192.168.2.0/24 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 4200000000 ebgpMultiHop: true port: 180 toAdvertise: allowed: mode: all 1 prefixes: - 192.168.2.0/24 - 192.169.2.0/24",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: prefixes: - prefix: 192.168.1.0/24 - prefix: 192.169.2.0/24 ge: 25 1 le: 28 2",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.18.0.5 asn: 64512 port: 179 toReceive: allowed: mode: all",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 neighbors: - address: 172.30.0.3 asn: 64512 port: 180 bfdProfile: defaultprofile bfdProfiles: - name: defaultprofile",
"apiVersion: frrk8s.metallb.io/v1beta1 kind: FRRConfiguration metadata: name: test namespace: frr-k8s-system spec: bgp: routers: - asn: 64512 nodeSelector: labelSelector: foo: \"bar\"",
"apiVersion: metallb.io/v1beta1 kind: MetalLB metadata: name: metallb namespace: metallb-system spec: logLevel: debug nodeSelector: node-role.kubernetes.io/worker: \"\"",
"oc replace -f setdebugloglevel.yaml",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-2m9pm 4/4 Running 0 9m19s speaker-7m4qw 3/4 Running 0 19s speaker-szlmx 4/4 Running 0 9m19s",
"oc logs -n metallb-system speaker-7m4qw -c speaker",
"{\"branch\":\"main\",\"caller\":\"main.go:92\",\"commit\":\"3d052535\",\"goversion\":\"gc / go1.17.1 / amd64\",\"level\":\"info\",\"msg\":\"MetalLB speaker starting (commit 3d052535, branch main)\",\"ts\":\"2022-05-17T09:55:05Z\",\"version\":\"\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"ens4\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:110\",\"event\":\"createARPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created ARP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} {\"caller\":\"announcer.go:119\",\"event\":\"createNDPResponder\",\"interface\":\"tun0\",\"level\":\"info\",\"msg\":\"created NDP responder for interface\",\"ts\":\"2022-05-17T09:55:05Z\"} I0517 09:55:06.515686 95 request.go:665] Waited for 1.026500832s due to client-side throttling, not priority and fairness, request: GET:https://172.30.0.1:443/apis/operators.coreos.com/v1alpha1?timeout=32s {\"Starting Manager\":\"(MISSING)\",\"caller\":\"k8s.go:389\",\"level\":\"info\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"speakerlist.go:310\",\"level\":\"info\",\"msg\":\"node event - forcing sync\",\"node addr\":\"10.0.128.4\",\"node event\":\"NodeJoin\",\"node name\":\"ci-ln-qb8t3mb-72292-7s7rh-worker-a-vvznj\",\"ts\":\"2022-05-17T09:55:08Z\"} {\"caller\":\"service_controller.go:113\",\"controller\":\"ServiceReconciler\",\"enqueueing\":\"openshift-kube-controller-manager-operator/metrics\",\"epslice\":\"{\\\"metadata\\\":{\\\"name\\\":\\\"metrics-xtsxr\\\",\\\"generateName\\\":\\\"metrics-\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"uid\\\":\\\"ac6766d7-8504-492c-9d1e-4ae8897990ad\\\",\\\"resourceVersion\\\":\\\"9041\\\",\\\"generation\\\":4,\\\"creationTimestamp\\\":\\\"2022-05-17T07:16:53Z\\\",\\\"labels\\\":{\\\"app\\\":\\\"kube-controller-manager-operator\\\",\\\"endpointslice.kubernetes.io/managed-by\\\":\\\"endpointslice-controller.k8s.io\\\",\\\"kubernetes.io/service-name\\\":\\\"metrics\\\"},\\\"annotations\\\":{\\\"endpoints.kubernetes.io/last-change-trigger-time\\\":\\\"2022-05-17T07:21:34Z\\\"},\\\"ownerReferences\\\":[{\\\"apiVersion\\\":\\\"v1\\\",\\\"kind\\\":\\\"Service\\\",\\\"name\\\":\\\"metrics\\\",\\\"uid\\\":\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\",\\\"controller\\\":true,\\\"blockOwnerDeletion\\\":true}],\\\"managedFields\\\":[{\\\"manager\\\":\\\"kube-controller-manager\\\",\\\"operation\\\":\\\"Update\\\",\\\"apiVersion\\\":\\\"discovery.k8s.io/v1\\\",\\\"time\\\":\\\"2022-05-17T07:20:02Z\\\",\\\"fieldsType\\\":\\\"FieldsV1\\\",\\\"fieldsV1\\\":{\\\"f:addressType\\\":{},\\\"f:endpoints\\\":{},\\\"f:metadata\\\":{\\\"f:annotations\\\":{\\\".\\\":{},\\\"f:endpoints.kubernetes.io/last-change-trigger-time\\\":{}},\\\"f:generateName\\\":{},\\\"f:labels\\\":{\\\".\\\":{},\\\"f:app\\\":{},\\\"f:endpointslice.kubernetes.io/managed-by\\\":{},\\\"f:kubernetes.io/service-name\\\":{}},\\\"f:ownerReferences\\\":{\\\".\\\":{},\\\"k:{\\\\\\\"uid\\\\\\\":\\\\\\\"0518eed3-6152-42be-b566-0bd00a60faf8\\\\\\\"}\\\":{}}},\\\"f:ports\\\":{}}}]},\\\"addressType\\\":\\\"IPv4\\\",\\\"endpoints\\\":[{\\\"addresses\\\":[\\\"10.129.0.7\\\"],\\\"conditions\\\":{\\\"ready\\\":true,\\\"serving\\\":true,\\\"terminating\\\":false},\\\"targetRef\\\":{\\\"kind\\\":\\\"
Pod\\\",\\\"namespace\\\":\\\"openshift-kube-controller-manager-operator\\\",\\\"name\\\":\\\"kube-controller-manager-operator-6b98b89ddd-8d4nf\\\",\\\"uid\\\":\\\"dd5139b8-e41c-4946-a31b-1a629314e844\\\",\\\"resourceVersion\\\":\\\"9038\\\"},\\\"nodeName\\\":\\\"ci-ln-qb8t3mb-72292-7s7rh-master-0\\\",\\\"zone\\\":\\\"us-central1-a\\\"}],\\\"ports\\\":[{\\\"name\\\":\\\"https\\\",\\\"protocol\\\":\\\"TCP\\\",\\\"port\\\":8443}]}\",\"level\":\"debug\",\"ts\":\"2022-05-17T09:55:08Z\"}",
"oc logs -n metallb-system speaker-7m4qw -c frr",
"Started watchfrr 2022/05/17 09:55:05 ZEBRA: client 16 says hello and bids fair to announce only bgp routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 31 says hello and bids fair to announce only vnc routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 38 says hello and bids fair to announce only static routes vrf=0 2022/05/17 09:55:05 ZEBRA: client 43 says hello and bids fair to announce only bfd routes vrf=0 2022/05/17 09:57:25.089 BGP: Creating Default VRF, AS 64500 2022/05/17 09:57:25.090 BGP: dup addr detect enable max_moves 5 time 180 freeze disable freeze_time 0 2022/05/17 09:57:25.090 BGP: bgp_get: Registering BGP instance (null) to zebra 2022/05/17 09:57:25.090 BGP: Registering VRF 0 2022/05/17 09:57:25.091 BGP: Rx Router Id update VRF 0 Id 10.131.0.1/32 2022/05/17 09:57:25.091 BGP: RID change : vrf VRF default(0), RTR ID 10.131.0.1 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF br0 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ens4 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr 10.0.128.4/32 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF ens4 addr fe80::c9d:84da:4d86:5618/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF lo 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF ovs-system 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF tun0 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr 10.131.0.1/23 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF tun0 addr fe80::40f1:d1ff:feb6:5322/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2da49fed 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2da49fed addr fe80::24bd:d1ff:fec1:d88/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth2fa08c8c 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth2fa08c8c addr fe80::6870:ff:fe96:efc8/64 2022/05/17 09:57:25.091 BGP: Rx Intf add VRF 0 IF veth41e356b7 2022/05/17 09:57:25.091 BGP: Rx Intf address add VRF 0 IF veth41e356b7 addr fe80::48ff:37ff:fede:eb4b/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth1295c6e2 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth1295c6e2 addr fe80::b827:a2ff:feed:637/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth9733c6dc 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth9733c6dc addr fe80::3cf4:15ff:fe11:e541/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF veth336680ea 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF veth336680ea addr fe80::94b1:8bff:fe7e:488c/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vetha0a907b7 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vetha0a907b7 addr fe80::3855:a6ff:fe73:46c3/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf35a4398 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf35a4398 addr fe80::40ef:2fff:fe57:4c4d/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vethf831b7f4 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vethf831b7f4 addr fe80::f0d9:89ff:fe7c:1d32/64 2022/05/17 09:57:25.092 BGP: Rx Intf add VRF 0 IF vxlan_sys_4789 2022/05/17 09:57:25.092 BGP: Rx Intf address add VRF 0 IF vxlan_sys_4789 addr fe80::80c1:82ff:fe4b:f078/64 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Timer (start timer expire). 
2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] BGP_Start (Idle->Connect), fd -1 2022/05/17 09:57:26.094 BGP: Allocated bnc 10.0.0.1/32(0)(VRF default) peer 0x7f807f7631a0 2022/05/17 09:57:26.094 BGP: sendmsg_zebra_rnh: sending cmd ZEBRA_NEXTHOP_REGISTER for 10.0.0.1/32 (vrf VRF default) 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] Waiting for NHT 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Connect established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Idle to Connect 2022/05/17 09:57:26.094 BGP: 10.0.0.1 [FSM] TCP_connection_open_failed (Connect->Active), fd -1 2022/05/17 09:57:26.094 BGP: bgp_fsm_change_status : vrf default(0), Status: Active established_peers 0 2022/05/17 09:57:26.094 BGP: 10.0.0.1 went from Connect to Active 2022/05/17 09:57:26.094 ZEBRA: rnh_register msg from client bgp: hdr->length=8, type=nexthop vrf=0 2022/05/17 09:57:26.094 ZEBRA: 0: Add RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: Evaluate RNH, type Nexthop (force) 2022/05/17 09:57:26.094 ZEBRA: 0:10.0.0.1/32: NH has become unresolved 2022/05/17 09:57:26.094 ZEBRA: 0: Client bgp registers for RNH 10.0.0.1/32 type Nexthop 2022/05/17 09:57:26.094 BGP: VRF default(0): Rcvd NH update 10.0.0.1/32(0) - metric 0/0 #nhops 0/0 flags 0x6 2022/05/17 09:57:26.094 BGP: NH update for 10.0.0.1/32(0)(VRF default) - flags 0x6 chgflags 0x0 - evaluate paths 2022/05/17 09:57:26.094 BGP: evaluate_paths: Updating peer (10.0.0.1(VRF default)) status with NHT 2022/05/17 09:57:30.081 ZEBRA: Event driven route-map update triggered 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-out 2022/05/17 09:57:30.081 ZEBRA: Event handler for route-map: 10.0.0.1-in 2022/05/17 09:57:31.104 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.104 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring 2022/05/17 09:57:31.105 ZEBRA: netlink_parse_info: netlink-listen (NS 0) type RTM_NEWNEIGH(28), len=76, seq=0, pid=0 2022/05/17 09:57:31.105 ZEBRA: Neighbor Entry received is not on a VLAN or a BRIDGE, ignoring",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 56m speaker-gvfnf 4/4 Running 0 56m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show running-config\"",
"Building configuration Current configuration: ! frr version 7.5.1_git frr defaults traditional hostname some-hostname log file /etc/frr/frr.log informational log timestamp precision 3 service integrated-vtysh-config ! router bgp 64500 1 bgp router-id 10.0.1.2 no bgp ebgp-requires-policy no bgp default ipv4-unicast no bgp network import-check neighbor 10.0.2.3 remote-as 64500 2 neighbor 10.0.2.3 bfd profile doc-example-bfd-profile-full 3 neighbor 10.0.2.3 timers 5 15 neighbor 10.0.2.4 remote-as 64500 neighbor 10.0.2.4 bfd profile doc-example-bfd-profile-full neighbor 10.0.2.4 timers 5 15 ! address-family ipv4 unicast network 203.0.113.200/30 4 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! address-family ipv6 unicast network fc00:f853:ccd:e799::/124 neighbor 10.0.2.3 activate neighbor 10.0.2.3 route-map 10.0.2.3-in in neighbor 10.0.2.4 activate neighbor 10.0.2.4 route-map 10.0.2.4-in in exit-address-family ! route-map 10.0.2.3-in deny 20 ! route-map 10.0.2.4-in deny 20 ! ip nht resolve-via-default ! ipv6 nht resolve-via-default ! line vty ! bfd profile doc-example-bfd-profile-full transmit-interval 35 receive-interval 35 passive-mode echo-mode echo-interval 35 minimum-ttl 10 ! ! end",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp summary\"",
"IPv4 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 0 1 1 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 2 Total number of neighbors 2 IPv6 Unicast Summary: BGP router identifier 10.0.1.2, local AS number 64500 vrf-id 0 BGP table version 1 RIB entries 1, using 192 bytes of memory Peers 2, using 29 KiB of memory Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd PfxSnt 10.0.2.3 4 64500 387 389 0 0 0 00:32:02 NoNeg 10.0.2.4 4 64500 0 0 0 0 0 never Active 0 Total number of neighbors 2",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bgp ipv4 unicast 203.0.113.200/30\"",
"BGP routing table entry for 203.0.113.200/30 Paths: (1 available, best #1, table default) Advertised to non peer-group peers: 10.0.2.3 1 Local 0.0.0.0 from 0.0.0.0 (10.0.1.2) Origin IGP, metric 0, weight 32768, valid, sourced, local, best (First path received) Last update: Mon Jan 10 19:49:07 2022",
"oc get -n metallb-system pods -l component=speaker",
"NAME READY STATUS RESTARTS AGE speaker-66bth 4/4 Running 0 26m speaker-gvfnf 4/4 Running 0 26m",
"oc exec -n metallb-system speaker-66bth -c frr -- vtysh -c \"show bfd peers brief\"",
"Session count: 2 SessionId LocalAddress PeerAddress Status ========= ============ =========== ====== 3909139637 10.0.1.2 10.0.2.3 up <.>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/networking/load-balancing-with-metallb
|
Chapter 5. Red Hat Quay repository overview
|
Chapter 5. Red Hat Quay repository overview A repository provides a central location for storing a related set of container images. These images can be used to build applications along with their dependencies in a standardized format. Repositories are organized by namespaces. Each namespace can have multiple repositories. For example, you might have a namespace for your personal projects, one for your company, or one for a specific team within your organization. Red Hat Quay provides users with access controls for their repositories. Users can make a repository public, meaning that anyone can pull, or download, the images from it, or users can make it private, restricting access to authorized users or teams. There are three ways to create a repository in Red Hat Quay: by pushing an image with the relevant podman command, by using the Red Hat Quay UI, or by using the Red Hat Quay API. Similarly, repositories can be deleted by using the UI or the proper API endpoint. 5.1. Creating a repository by using the UI Use the following procedure to create a repository using the Red Hat Quay v2 UI. Procedure Click Repositories on the navigation pane. Click Create Repository . Select a namespace, for example, quayadmin , and then enter a Repository name , for example, testrepo . Important Do not use the following words in your repository name: * build * trigger * tag * notification When these words are used for repository names, users are unable to access the repository, and are unable to permanently delete the repository. Attempting to delete these repositories returns the following error: Failed to delete repository <repository_name>, HTTP404 - Not Found. Click Create . Now, your example repository should appear on the Repositories page. Optional. Click Settings → Repository visibility → Make private to set the repository to private. 5.2. Creating a repository by using Podman With the proper credentials, you can use Podman to push an image to a repository that does not yet exist in your Red Hat Quay instance. Pushing an image refers to the process of uploading a container image from your local system or development environment to a container registry like Red Hat Quay. After pushing an image to your registry, a repository is created. If you push an image through the command-line interface (CLI) without first creating a repository on the UI, the created repository is set to Private . Use the following procedure to create an image repository by pushing an image. Prerequisites You have downloaded and installed the podman CLI. You have logged into your registry. You have pulled an image, for example, busybox. Procedure Pull a sample image from an example registry. For example: USD sudo podman pull busybox Example output Trying to pull docker.io/library/busybox... Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9 Tag the image on your local system with the new repository and image name. For example: USD sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test Push the image to the registry. Following this step, you can use your browser to see the tagged image in your repository.
USD sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test Example output Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 5.3. Creating a repository by using the API Use the following procedure to create an image repository using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to create a repository using the POST /api/v1/repository endpoint: USD curl -X POST \ -H "Authorization: Bearer <bearer_token>" \ -H "Content-Type: application/json" \ -d '{ "repository": "<new_repository_name>", "visibility": "<private>", "description": "<This is a description of the new repository>." }' \ "https://quay-server.example.com/api/v1/repository" Example output {"namespace": "quayadmin", "name": "<new_repository_name>", "kind": "image"} 5.4. Deleting a repository by using the UI You can delete a repository directly on the UI. Prerequisites You have created a repository. Procedure On the Repositories page of the v2 UI, check the box of the repository that you want to delete, for example, quayadmin/busybox . Click the Actions drop-down menu. Click Delete . Type confirm in the box, and then click Delete . After deletion, you are returned to the Repositories page. 5.5. Deleting a repository by using the Red Hat Quay API Use the following procedure to delete a repository using the Red Hat Quay API. Prerequisites You have Created an OAuth access token . You have set BROWSER_API_CALLS_XHR_ONLY: false in your config.yaml file. Procedure Enter the following command to delete a repository using the DELETE /api/v1/repository/{repository} endpoint: USD curl -X DELETE -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" The CLI does not return information when deleting a repository from the CLI. To confirm deletion, you can check the Red Hat Quay UI, or you can enter the following GET /api/v1/repository/{repository} command to see if details are returned for the deleted repository: USD curl -X GET -H "Authorization: Bearer <bearer_token>" "<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>" Example output {"detail": "Not Found", "error_message": "Not Found", "error_type": "not_found", "title": "not_found", "type": "http://quay-server.example.com/api/v1/error/not_found", "status": 404}
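As a quick end-to-end check, the documented create and get endpoints can be combined in a small shell snippet. This is a minimal sketch that assumes the same quay-server.example.com host used above, a valid bearer token, a quayadmin namespace, and a repository named testrepo; substitute the values for your deployment.

# Create the repository by using the POST /api/v1/repository endpoint
curl -X POST \
  -H "Authorization: Bearer <bearer_token>" \
  -H "Content-Type: application/json" \
  -d '{"repository": "testrepo", "visibility": "private", "description": "Created from a script."}' \
  "https://quay-server.example.com/api/v1/repository"

# Confirm that the repository exists by using the GET /api/v1/repository/{repository} endpoint
curl -X GET \
  -H "Authorization: Bearer <bearer_token>" \
  "https://quay-server.example.com/api/v1/repository/quayadmin/testrepo"

If the second command returns repository details instead of a not_found error, the repository was created successfully.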
|
[
"sudo podman pull busybox",
"Trying to pull docker.io/library/busybox Getting image source signatures Copying blob 4c892f00285e done Copying config 22667f5368 done Writing manifest to image destination Storing signatures 22667f53682a2920948d19c7133ab1c9c3f745805c14125859d20cede07f11f9",
"sudo podman tag docker.io/library/busybox quay-server.example.com/quayadmin/busybox:test",
"sudo podman push --tls-verify=false quay-server.example.com/quayadmin/busybox:test",
"Getting image source signatures Copying blob 6b245f040973 done Copying config 22667f5368 done Writing manifest to image destination Storing signatures",
"curl -X POST -H \"Authorization: Bearer <bearer_token>\" -H \"Content-Type: application/json\" -d '{ \"repository\": \"<new_repository_name>\", \"visibility\": \"<private>\", \"description\": \"<This is a description of the new repository>.\" }' \"https://quay-server.example.com/api/v1/repository\"",
"{\"namespace\": \"quayadmin\", \"name\": \"<new_repository_name>\", \"kind\": \"image\"}",
"curl -X DELETE -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"curl -X GET -H \"Authorization: Bearer <bearer_token>\" \"<quay-server.example.com>/api/v1/repository/<namespace>/<repository_name>\"",
"{\"detail\": \"Not Found\", \"error_message\": \"Not Found\", \"error_type\": \"not_found\", \"title\": \"not_found\", \"type\": \"http://quay-server.example.com/api/v1/error/not_found\", \"status\": 404}"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/use_red_hat_quay/use-quay-create-repo
|
10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later)
|
10.2. Quorum Administration Commands (Red Hat Enterprise Linux 7.3 and Later) Once a cluster is running, you can enter the following cluster quorum commands. The following command shows the quorum configuration. The following command shows the quorum runtime status. If you take nodes out of a cluster for a long period of time and the loss of those nodes would cause quorum loss, you can change the value of the expected_votes parameter for the live cluster with the pcs quorum expected-votes command. This allows the cluster to continue operation when it does not have quorum. Warning Changing the expected votes in a live cluster should be done with extreme caution. If less than 50% of the cluster is running because you have manually changed the expected votes, then the other nodes in the cluster could be started separately and run cluster services, causing data corruption and other unexpected results. If you change this value, you should ensure that the wait_for_all parameter is enabled. The following command sets the expected votes in the live cluster to the specified value. This affects the live cluster only and does not change the configuration file; the value of expected_votes is reset to the value in the configuration file in the event of a reload.
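For example, if a five-node cluster temporarily loses two nodes, you might lower the expected votes so that the three remaining nodes keep quorum. The following is a minimal sketch that uses only the commands described above; the value 3 is illustrative, and you should heed the warning about wait_for_all before changing it.

# Check the current quorum runtime status
pcs quorum status

# Temporarily set the expected votes for the live cluster (illustrative value)
pcs quorum expected-votes 3

# Verify the change; the value reverts to the configuration file setting on reload
pcs quorum status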
|
[
"pcs quorum [config]",
"pcs quorum status",
"pcs quorum expected-votes votes"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-quorumadmin-HAAR
|
Chapter 16. Applying security context to AMQ Streams pods and containers
|
Chapter 16. Applying security context to AMQ Streams pods and containers Security context defines constraints on pods and containers. By specifying a security context, pods and containers only have the permissions they need. For example, permissions can control runtime operations or access to resources. 16.1. Handling of security context by OpenShift platform Handling of security context depends on the tooling of the OpenShift platform you are using. For example, OpenShift uses built-in security context constraints (SCCs) to control permissions. SCCs are the settings and strategies that control the security features a pod has access to. By default, OpenShift injects security context configuration automatically. In most cases, this means you don't need to configure security context for the pods and containers created by the Cluster Operator, although you can still create and manage your own SCCs. For more information, see the OpenShift documentation .
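If you want to check which SCC OpenShift applied to a pod created by the Cluster Operator, a quick inspection might look like the following sketch; the pod name my-cluster-kafka-0 and the kafka namespace are examples, not values from this guide.
# List the security context constraints available on the cluster.
oc get scc

# Show which SCC was applied to a pod (pod name and namespace are examples).
oc get pod my-cluster-kafka-0 -n kafka \
  -o jsonpath='{.metadata.annotations.openshift\.io/scc}{"\n"}'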
| null |
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/deploying_and_managing_amq_streams_on_openshift/assembly-security-providers-str
|
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
|
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra or have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non OpenShift Data Foundation resources to be scheduled on the tainted nodes. Note Adding storage taint on nodes might require toleration handling for the other daemonset pods such as openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: https://access.redhat.com/solutions/6592171 . Example of the taint and labels required on infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. 
To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. Warning Do not remove the node-role.kubernetes.io/worker="" node role. Removing node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If it has already been removed, add it again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that you want to taint. On the Details page, click Edit taints . Enter node.ocs.openshift.io/storage in the Key field, true in the Value field, and NoSchedule in the Effect field. Click Save . Verification steps Follow these steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
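The same labels and taint can also be applied and checked from the command line; <node> is a placeholder for the node name. This is a sketch of the manual approach described in Section 9.3, not an additional requirement.
# Add the infra node role and the OpenShift Data Foundation label (node name is a placeholder).
oc label node <node> node-role.kubernetes.io/infra=""
oc label node <node> cluster.ocs.openshift.io/openshift-storage=""

# Add the storage taint so that only workloads with a matching toleration are scheduled on the node.
oc adm taint node <node> node.ocs.openshift.io/storage="true":NoSchedule

# Verify the taint on the node.
oc describe node <node> | grep -A1 Taints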
|
[
"spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"",
"label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"",
"adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule",
"Taints: Key: node.ocs.openshift.io/storage Value: true Effect: Noschedule"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_rhodf
|
Chapter 6. Client Registration CLI
|
Chapter 6. Client Registration CLI The Client Registration CLI is a command-line interface (CLI) tool for application developers to configure new clients in a self-service manner when integrating with Red Hat Single Sign-On. It is specifically designed to interact with Red Hat Single Sign-On Client Registration REST endpoints. It is necessary to create or obtain a client configuration for any application to be able to use Red Hat Single Sign-On. You usually configure a new client for each new application hosted on a unique host name. When an application interacts with Red Hat Single Sign-On, the application identifies itself with a client ID so Red Hat Single Sign-On can provide a login page, single sign-on (SSO) session management, and other services. You can configure application clients from a command line with the Client Registration CLI, and you can use it in shell scripts. To allow a particular user to use Client Registration CLI the Red Hat Single Sign-On administrator typically uses the Admin Console to configure a new user with proper roles or to configure a new client and client secret to grant access to the Client Registration REST API. 6.1. Configuring a new regular user for use with Client Registration CLI Log in to the Admin Console (for example, http://localhost:8080/auth/admin ) as admin . Select a realm to administer. If you want to use an existing user, select that user to edit; otherwise, create a new user. Select Role Mappings > Client Roles > realm-management . If you are in the master realm, select NAME-realm , where NAME is the name of the target realm. You can grant access to any other realm to users in the master realm. Select Available Roles > manage-client to grant a full set of client management permissions. Another option is to choose view-clients for read-only or create-client to create new clients. Note These permissions grant the user the capability to perform operations without the use of Initial Access Token or Registration Access Token . It is possible to not assign any realm-management roles to a user. In that case, a user can still log in with the Client Registration CLI but cannot use it without an Initial Access Token. Trying to perform any operations without a token results in a 403 Forbidden error. The Administrator can issue Initial Access Tokens from the Admin Console through the Realm Settings > Client Registration > Initial Access Token menu. 6.2. Configuring a client for use with the Client Registration CLI By default, the server recognizes the Client Registration CLI as the admin-cli client, which is configured automatically for every new realm. No additional client configuration is necessary when logging in with a user name. Create a new client (for example, reg-cli ) if you want to use a separate client configuration for the Client Registration CLI. Toggle the Standard Flow Enabled setting it to Off . Strengthen the security by configuring the client Access Type as Confidential and selecting Credentials > ClientId and Secret . Note You can configure either Client Id and Secret or Signed JWT under the Credentials tab . Enable service accounts if you want to use a service account associated with the client by selecting a client to edit in the Clients section of the Admin Console . Under Settings , change the Access Type to Confidential , toggle the Service Accounts Enabled setting to On , and click Save . Click Service Account Roles and select desired roles to configure the access for the service account. 
For the details on what roles to select, see Section 6.1, "Configuring a new regular user for use with Client Registration CLI" . Toggle the Direct Access Grants Enabled setting it to On if you want to use a regular user account instead of a service account. If the client is configured as Confidential , provide the configured secret when running kcreg config credentials by using the --secret option. Specify which clientId to use (for example, --client reg-cli ) when running kcreg config credentials . With the service account enabled, you can omit specifying the user when running kcreg config credentials and only provide the client secret or keystore information. 6.3. Installing the Client Registration CLI The Client Registration CLI is packaged inside the Red Hat Single Sign-On Server distribution. You can find execution scripts inside the bin directory. The Linux script is called kcreg.sh , and the Windows script is called kcreg.bat . Add the Red Hat Single Sign-On server directory to your PATH when setting up the client for use from any location on the file system. For example, on: Linux: Windows: KEYCLOAK_HOME refers to a directory where the Red Hat Single Sign-On Server distribution was unpacked. 6.4. Using the Client Registration CLI Start an authenticated session by logging in with your credentials. Run commands on the Client Registration REST endpoint. For example, on: Linux: Windows: Note In a production environment, Red Hat Single Sign-On has to be accessed with https: to avoid exposing tokens to network sniffers. If a server's certificate is not issued by one of the trusted certificate authorities (CAs) that are included in Java's default certificate truststore, prepare a truststore.jks file and instruct the Client Registration CLI to use it. For example, on: Linux: Windows: 6.4.1. Logging in Specify a server endpoint URL and a realm when you log in with the Client Registration CLI. Specify a user name or a client id, which results in a special service account being used. When using a user name, you must use a password for the specified user. When using a client ID, you use a client secret or a Signed JWT instead of a password. Regardless of the login method, the account that logs in needs proper permissions to be able to perform client registration operations. Keep in mind that any account in a non-master realm can only have permissions to manage clients within the same realm. If you need to manage different realms, you can either configure multiple users in different realms, or you can create a single user in the master realm and add roles for managing clients in different realms. You cannot configure users with the Client Registration CLI. Use the Admin Console web interface or the Admin Client CLI to configure users. See Server Administration Guide for more details. When kcreg successfully logs in, it receives authorization tokens and saves them in a private configuration file so the tokens can be used for subsequent invocations. See Section 6.4.2, "Working with alternative configurations" for more information on configuration files. See the built-in help for more information on using the Client Registration CLI. For example, on: Linux: Windows: See kcreg config credentials --help for more information about starting an authenticated session. 6.4.2. Working with alternative configurations By default, the Client Registration CLI automatically maintains a configuration file at a default location, ./.keycloak/kcreg.config , under the user's home directory. 
You can use the --config option to point to a different file or location to mantain multiple authenticated sessions in parallel. It is the safest way to perform operations tied to a single configuration file from a single thread. Important Do not make the configuration file visible to other users on the system. The configuration file contains access tokens and secrets that should be kept private. You might want to avoid storing secrets inside a configuration file by using the --no-config option with all of your commands, even though it is less convenient and requires more token requests to do so. Specify all authentication information with each kcreg invocation. 6.4.3. Initial Access and Registration Access Tokens Developers who do not have an account configured at the Red Hat Single Sign-On server they want to use can use the Client Registration CLI. This is possible only when the realm administrator issues a developer an Initial Access Token. It is up to the realm administrator to decide how and when to issue and distribute these tokens. The realm administrator can limit the maximum age of the Initial Access Token and the total number of clients that can be created with it. Once a developer has an Initial Access Token, the developer can use it to create new clients without authenticating with kcreg config credentials . The Initial Access Token can be stored in the configuration file or specified as part of the kcreg create command. For example, on: Linux: or Windows: or When using an Initial Access Token, the server response includes a newly issued Registration Access Token. Any subsequent operation for that client needs to be performed by authenticating with that token, which is only valid for that client. The Client Registration CLI automatically uses its private configuration file to save and use this token with its associated client. As long as the same configuration file is used for all client operations, the developer does not need to authenticate to read, update, or delete a client that was created this way. See Client Registration for more information about Initial Access and Registration Access Tokens. Run the kcreg config initial-token --help and kcreg config registration-token --help commands for more information on how to configure tokens with the Client Registration CLI. 6.4.4. Creating a client configuration The first task after authenticating with credentials or configuring an Initial Access Token is usually to create a new client. Often you might want to use a prepared JSON file as a template and set or override some of the attributes. The following example shows how to read a JSON file, override any client id it may contain, set any other attributes, and print the configuration to a standard output after successful creation. Linux: Windows: Run the kcreg create --help for more information about the kcreg create command. You can use kcreg attrs to list available attributes. Keep in mind that many configuration attributes are not checked for validity or consistency. It is up to you to specify proper values. Remember that you should not have any id fields in your template and should not specify them as arguments to the kcreg create command. 6.4.5. Retrieving a client configuration You can retrieve an existing client by using the kcreg get command. For example, on: Linux: Windows: You can also retrieve the client configuration as an adapter configuration file, which you can package with your web application. 
For example, on: Linux: Windows: Run the kcreg get --help command for more information about the kcreg get command. 6.4.6. Modifying a client configuration There are two methods for updating a client configuration. One method is to submit a complete new state to the server after getting the current configuration, saving it to a file, editing it, and posting it back to the server. For example, on: Linux: Windows: The second method fetches the current client, sets or deletes fields on it, and posts it back in one step. For example, on: Linux: Windows: You can also use a file that contains only changes to be applied so you do not have to specify too many values as arguments. In this case, specify --merge to tell the Client Registration CLI that rather than treating the JSON file as a full, new configuration, it should treat it as a set of attributes to be applied over the existing configuration. For example, on: Linux: Windows: Run the kcreg update --help command for more information about the kcreg update command. 6.4.7. Deleting a client configuration Use the following example to delete a client. Linux: Windows: Run the kcreg delete --help command for more information about the kcreg delete command. 6.4.8. Refreshing invalid Registration Access Tokens When performing a create, read, update, and delete (CRUD) operation using the --no-config mode, the Client Registration CLI cannot handle Registration Access Tokens for you. In that case, it is possible to lose track of the most recently issued Registration Access Token for a client, which makes it impossible to perform any further CRUD operations on that client without authenticating with an account that has manage-clients permissions. If you have permissions, you can issue a new Registration Access Token for the client and have it printed to a standard output or saved to a configuration file of your choice. Otherwise, you have to ask the realm administrator to issue a new Registration Access Token for your client and send it to you. You can then pass it to any CRUD command via the --token option. You can also use the kcreg config registration-token command to save the new token in a configuration file and have the Client Registration CLI automatically handle it for you from that point on. Run the kcreg update-token --help command for more information about the kcreg update-token command. 6.5. Troubleshooting Q: When logging in, I get an error: Parameter client_assertion_type is missing [invalid_client] . A: This error means your client is configured with Signed JWT token credentials, which means you have to use the --keystore parameter when logging in.
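As a sketch of the fix for this error, a login with Signed JWT client credentials points kcreg at the client keystore; the keystore path below is a placeholder, and you may be prompted for the keystore password.
# Log in with Signed JWT client credentials (keystore path is a placeholder).
kcreg.sh config credentials --server http://localhost:8080/auth --realm demo \
  --client reg-cli --keystore ~/.keycloak/client-keystore.jks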
|
[
"export PATH=USDPATH:USDKEYCLOAK_HOME/bin kcreg.sh",
"c:\\> set PATH=%PATH%;%KEYCLOAK_HOME%\\bin c:\\> kcreg",
"kcreg.sh config credentials --server http://localhost:8080/auth --realm demo --user user --client reg-cli kcreg.sh create -s clientId=my_client -s 'redirectUris=[\"http://localhost:8980/myapp/*\"]' kcreg.sh get my_client",
"c:\\> kcreg config credentials --server http://localhost:8080/auth --realm demo --user user --client reg-cli c:\\> kcreg create -s clientId=my_client -s \"redirectUris=[\\\"http://localhost:8980/myapp/*\\\"]\" c:\\> kcreg get my_client",
"kcreg.sh config truststore --trustpass USDPASSWORD ~/.keycloak/truststore.jks",
"c:\\> kcreg config truststore --trustpass %PASSWORD% %HOMEPATH%\\.keycloak\\truststore.jks",
"kcreg.sh help",
"c:\\> kcreg help",
"kcreg.sh config initial-token USDTOKEN kcreg.sh create -s clientId=myclient",
"kcreg.sh create -s clientId=myclient -t USDTOKEN",
"c:\\> kcreg config initial-token %TOKEN% c:\\> kcreg create -s clientId=myclient",
"c:\\> kcreg create -s clientId=myclient -t %TOKEN%",
"kcreg.sh create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s 'redirectUris=[\"/myclient/*\"]' -o",
"C:\\> kcreg create -f client-template.json -s clientId=myclient -s baseUrl=/myclient -s \"redirectUris=[\\\"/myclient/*\\\"]\" -o",
"kcreg.sh get myclient",
"C:\\> kcreg get myclient",
"kcreg.sh get myclient -e install > keycloak.json",
"C:\\> kcreg get myclient -e install > keycloak.json",
"kcreg.sh get myclient > myclient.json vi myclient.json kcreg.sh update myclient -f myclient.json",
"C:\\> kcreg get myclient > myclient.json C:\\> notepad myclient.json C:\\> kcreg update myclient -f myclient.json",
"kcreg.sh update myclient -s enabled=false -d redirectUris",
"C:\\> kcreg update myclient -s enabled=false -d redirectUris",
"kcreg.sh update myclient --merge -d redirectUris -f mychanges.json",
"C:\\> kcreg update myclient --merge -d redirectUris -f mychanges.json",
"kcreg.sh delete myclient",
"C:\\> kcreg delete myclient"
] |
https://docs.redhat.com/en/documentation/red_hat_single_sign-on/7.4/html/securing_applications_and_services_guide/client_registration_cli
|
Chapter 4. Tuna
|
Chapter 4. Tuna You can use the Tuna tool to adjust scheduler tunables, tune thread priority, IRQ handlers, and isolate CPU cores and sockets. Tuna aims to reduce the complexity of performing tuning tasks. After installing the tuna package, use the tuna command without any arguments to start the Tuna graphical user interface (GUI). Use the tuna -h command to display available command-line interface (CLI) options. Note that the tuna (8) manual page distinguishes between action and modifier options. The Tuna GUI and CLI provide equivalent functionality. The GUI displays the CPU topology on one screen to help you identify problems. The Tuna GUI also allows you to make changes to the running threads, and see the results of those changes immediately. In the CLI, Tuna accepts multiple command-line parameters and processes them sequentially. You can use such commands in application initialization scripts as configuration commands. The Monitoring tab of the Tuna GUI Important Use the tuna --save= filename command with a descriptive file name to save the current configuration. Note that this command does not save every option that Tuna can change, but saves the kernel thread changes only. Any processes that are not currently running when they are changed are not saved. 4.1. Reviewing the System with Tuna Before you make any changes, you can use Tuna to show you what is currently happening on the system. To view the current policies and priorities, use the tuna --show_threads command: To show only a specific thread corresponding to a PID or matching a command name, add the --threads option before --show_threads : The pid_or_cmd_list argument is a list of comma-separated PIDs or command-name patterns. To view the current interrupt requests (IRQs) and their affinity, use the tuna --show_irqs command: To show only a specific interrupt request corresponding to an IRQ number or matching an IRQ user name, add the --irqs option before --show_irqs : The number_or_user_list argument is a list of comma-separated IRQ numbers or user-name patterns.
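For example, to narrow the output to specific threads or interrupts and to save the current kernel thread configuration, the following sketch reuses the thread and IRQ names from the sample output above; any pattern or file name of your own works equally well.
# Show only the softirq kernel threads and any thread whose command name matches "watchdog*".
tuna --threads="ksoftirqd*,watchdog*" --show_threads

# Show only the timer and i8042 interrupt requests.
tuna --irqs=timer,i8042 --show_irqs

# Save the current kernel thread configuration to a descriptively named file.
tuna --save=pre-tuning-kernel-threads.conf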
|
[
"tuna --show_threads thread pid SCHED_ rtpri affinity cmd 1 OTHER 0 0,1 init 2 FIFO 99 0 migration/0 3 OTHER 0 0 ksoftirqd/0 4 FIFO 99 0 watchdog/0",
"tuna --threads= pid_or_cmd_list --show_threads",
"tuna --show_irqs users affinity 0 timer 0 1 i8042 0 7 parport0 0",
"tuna --irqs= number_or_user_list --show_irqs"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/chap-Tuna
|
4.8. Activating Logical Volumes on Individual Nodes in a Cluster
|
4.8. Activating Logical Volumes on Individual Nodes in a Cluster If you have LVM installed in a cluster environment, you may at times need to activate logical volumes exclusively on one node. For example, the pvmove command is not cluster-aware and needs exclusive access to a volume. LVM snapshots require exclusive access to a volume as well. To activate logical volumes exclusively on one node, use the lvchange -aey command. Alternatively, you can use lvchange -aly command to activate logical volumes only on the local node but not exclusively. You can later activate them on additional nodes concurrently. You can also activate logical volumes on individual nodes by using LVM tags, which are described in Appendix C, LVM Object Tags . You can also specify activation of nodes in the configuration file, which is described in Appendix B, The LVM Configuration Files .
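A minimal sketch, assuming a volume group named vg01, a logical volume named lv_data, and a physical volume /dev/sdb1 (all placeholders):
# Activate the logical volume exclusively on this node before running pvmove or creating a snapshot.
lvchange -aey vg01/lv_data

# Run the non-cluster-aware operation while the volume is exclusively active.
pvmove /dev/sdb1

# Alternatively, activate the volume on the local node only, without exclusivity,
# so that it can later be activated on additional nodes concurrently.
lvchange -aly vg01/lv_data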
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/cluster_activation
|
Chapter 2. Installing standalone Hammer
|
Chapter 2. Installing standalone Hammer You can install Hammer on a host running RHEL that has no Satellite Server installed, and use it to connect from the host to a remote Satellite. Prerequisites Ensure that you register the host to Satellite Server or Capsule Server. If you are installing on Red Hat Enterprise Linux 9, ensure that the following repositories are enabled and synchronized on Satellite Server: rhel-9-for-x86_64-baseos-rpms rhel-9-for-x86_64-appstream-rpms satellite-utils-6.16-for-rhel-9-x86_64-rpms If you are installing on Red Hat Enterprise Linux 8, ensure that the following repositories are enabled and synchronized on Satellite Server: rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-appstream-rpms satellite-utils-6.16-for-rhel-8-x86_64-rpms Procedure Enable the required repositories on the host. If you are installing on Red Hat Enterprise Linux 8, enable the following module: Install Hammer CLI: Set the :host: entry in the /etc/hammer/cli.modules.d/foreman.yml file to the Satellite URL: Additional resources Enabling repositories on hosts in Managing hosts Registering Hosts in Managing hosts Synchronizing Repositories in Managing content
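Once Hammer is installed and the :host: entry is set, you can confirm that it reaches the Satellite; the commands below are a sketch and the admin credentials are placeholders.
# Check connectivity to the Satellite services.
hammer ping

# Run a simple query, passing credentials explicitly (placeholders shown).
hammer -u admin -p <password> organization list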
|
[
"dnf module enable satellite-utils:el8",
"dnf install satellite-cli",
":host: 'https:// satellite.example.com '"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/using_the_hammer_cli_tool/installing-standalone-hammer
|
Chapter 7. Preparing to update a cluster with manually maintained credentials
|
Chapter 7. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 7.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 7.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 7.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. IBM Cloud and Nutanix Clusters installed on these platforms are configured using the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Configure the ccoctl utility for the new release. Use the ccoctl utility to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-lived credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. 
If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials About the Cloud Credential Operator 7.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. 
A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.1.3. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. 
AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.2. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. 
Procedure Obtain the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.3. Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Obtain the OpenShift Container Platform release image for the version that you are upgrading to. Extract and prepare the ccoctl binary from the release image. Procedure Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract --credentials-requests \ --cloud=<provider_type> \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ quay.io/<path_to>/ocp-release:<version> where: <provider_type> is the value for your cloud provider. Valid values are alibabacloud , aws , gcp , ibmcloud , and nutanix . credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. 
Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 1 This field indicates the namespace which needs to exist to hold the generated secret. The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace by running the following command: USD oc create namespace <component_namespace> Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory by running the command for your cloud provider. The following commands process CredentialsRequest objects: Alibaba Cloud: ccoctl alibabacloud create-ram-users Amazon Web Services (AWS): ccoctl aws create-iam-roles Google Cloud Platform (GCP): ccoctl gcp create-all IBM Cloud: ccoctl ibmcloud create-service-id Nutanix: ccoctl nutanix create-shared-secrets Important Refer to the ccoctl utility instructions in the installation content for your cloud provider for important platform-specific details about the required arguments and special considerations. For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster by running the following command: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Creating Alibaba Cloud credentials for OpenShift Container Platform components with the ccoctl tool Creating AWS resources with the Cloud Credential Operator utility Creating GCP resources with the Cloud Credential Operator utility Manually creating IAM for IBM Cloud VPC Configuring IAM for Nutanix Indicating that the cluster is ready to upgrade 7.4. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. 
If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. If your cluster uses cluster capabilities to disable one or more optional components, delete the CredentialsRequest custom resources for any disabled components. Example credrequests directory contents for OpenShift Container Platform 4.12 on AWS 0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6 1 The Machine API Operator CR is required. 2 The Cloud Credential Operator CR is required. 3 The Image Registry Operator CR is required. 4 The Ingress Operator CR is required. 5 The Network Operator CR is required. 6 The Storage Operator CR is an optional component and might be disabled in your cluster. Example credrequests directory contents for OpenShift Container Platform 4.12 on GCP 0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7 1 The Cloud Controller Manager Operator CR is required. 2 The Machine API Operator CR is required. 3 The Cloud Credential Operator CR is required. 4 The Image Registry Operator CR is required. 5 The Ingress Operator CR is required. 6 The Network Operator CR is required. 7 The Storage Operator CR is an optional component and might be disabled in your cluster. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for Azure Stack Hub Manually creating IAM for GCP Indicating that the cluster is ready to upgrade 7.5. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. 
If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade.
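If you prefer not to open an editor, the same annotation can be applied non-interactively; the sketch below assumes you are upgrading to 4.13.0, so substitute your own target version.
# Add the upgradeable-to annotation without opening an editor (version number is an example).
oc patch cloudcredential cluster --type merge \
  -p '{"metadata":{"annotations":{"cloudcredential.openshift.io/upgradeable-to":"4.13.0"}}}'

# Confirm that the annotation is set.
oc get cloudcredential cluster \
  -o jsonpath='{.metadata.annotations.cloudcredential\.openshift\.io/upgradeable-to}{"\n"}'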
|
[
"oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}",
"oc get secret <secret_name> -n=kube-system",
"oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"oc adm release extract --credentials-requests --cloud=<provider_type> --to=<path_to_directory_with_list_of_credentials_requests>/credrequests quay.io/<path_to>/ocp-release:<version>",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1",
"oc create namespace <component_namespace>",
"ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}",
"0000_30_machine-api-operator_00_credentials-request.yaml 1 0000_50_cloud-credential-operator_05-iam-ro-credentialsrequest.yaml 2 0000_50_cluster-image-registry-operator_01-registry-credentials-request.yaml 3 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 4 0000_50_cluster-network-operator_02-cncc-credentials.yaml 5 0000_50_cluster-storage-operator_03_credentials_request_aws.yaml 6",
"0000_26_cloud-controller-manager-operator_16_credentialsrequest-gcp.yaml 1 0000_30_machine-api-operator_00_credentials-request.yaml 2 0000_50_cloud-credential-operator_05-gcp-ro-credentialsrequest.yaml 3 0000_50_cluster-image-registry-operator_01-registry-credentials-request-gcs.yaml 4 0000_50_cluster-ingress-operator_00-ingress-credentials-request.yaml 5 0000_50_cluster-network-operator_02-cncc-credentials.yaml 6 0000_50_cluster-storage-operator_03_credentials_request_gcp.yaml 7",
"oc edit cloudcredential cluster",
"metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/updating_clusters/preparing-manual-creds-update
|
Chapter 16. Distributed tracing
|
Chapter 16. Distributed tracing Distributed tracing allows you to track the progress of transactions between applications in a distributed system. In a microservices architecture, tracing tracks the progress of transactions between services. Trace data is useful for monitoring application performance and investigating issues with target systems and end-user applications. In AMQ Streams on Red Hat Enterprise Linux, tracing facilitates the end-to-end tracking of messages: from source systems to Kafka, and then from Kafka to target systems and applications. Tracing complements the available JMX metrics . How AMQ Streams supports tracing Support for tracing is provided for the following clients and components. Kafka clients: Kafka producers and consumers Kafka Streams API applications Kafka components: Kafka Connect Kafka Bridge MirrorMaker MirrorMaker 2.0 To enable tracing, you perform four high-level tasks: Enable a Jaeger tracer. Enable the Interceptors: For Kafka clients, you instrument your application code using the OpenTracing Apache Kafka Client Instrumentation library (included with AMQ Streams). For Kafka components, you set configuration properties for each component. Set tracing environment variables . Deploy the client or component. When instrumented, clients generate trace data. For example, when producing messages or writing offsets to the log. Traces are sampled according to a sampling strategy and then visualized in the Jaeger user interface. Note Tracing is not supported for Kafka brokers. Setting up tracing for applications and systems beyond AMQ Streams is outside the scope of this chapter. To learn more about this subject, search for "inject and extract" in the OpenTracing documentation . Outline of procedures To set up tracing for AMQ Streams, follow these procedures in order: Set up tracing for clients: Initialize a Jaeger tracer for Kafka clients Instrument producers and consumers for tracing Instrument Kafka Streams applications for tracing Set up tracing for MirrorMaker, MirrorMaker 2.0, and Kafka Connect: Enable tracing for MirrorMaker Enable tracing for MirrorMaker 2.0 Enable tracing for Kafka Connect Enable tracing for the Kafka Bridge Prerequisites The Jaeger backend components are deployed to the host operating system. For deployment instructions, see the Jaeger deployment documentation . 16.1. Overview of OpenTracing and Jaeger AMQ Streams uses the OpenTracing and Jaeger projects. OpenTracing is an API specification that is independent from the tracing or monitoring system. The OpenTracing APIs are used to instrument application code Instrumented applications generate traces for individual transactions across the distributed system Traces are composed of spans that define specific units of work over time Jaeger is a tracing system for microservices-based distributed systems. Jaeger implements the OpenTracing APIs and provides client libraries for instrumentation The Jaeger user interface allows you to query, filter, and analyze trace data Additional resources OpenTracing Jaeger 16.2. Setting up tracing for Kafka clients Initialize a Jaeger tracer to instrument your client applications for distributed tracing. 16.2.1. Initializing a Jaeger tracer for Kafka clients Configure and initialize a Jaeger tracer using a set of tracing environment variables . 
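Before the procedure, it may help to see what such a set of tracing environment variables can look like. This is a minimal sketch; the service name, agent host, and sampler settings are illustrative values, not required ones.
# Minimal example of Jaeger tracing environment variables (values are illustrative).
export JAEGER_SERVICE_NAME=my-kafka-producer
export JAEGER_AGENT_HOST=localhost
export JAEGER_AGENT_PORT=6831
export JAEGER_SAMPLER_TYPE=const
export JAEGER_SAMPLER_PARAM=1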
Procedure In each client application: Add Maven dependencies for Jaeger to the pom.xml file for the client application: <dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency> Define the configuration of the Jaeger tracer using the tracing environment variables . Create the Jaeger tracer from the environment variables that you defined in step two: Tracer tracer = Configuration.fromEnv().getTracer(); Note For alternative ways to initialize a Jaeger tracer, see the Java OpenTracing library documentation. Register the Jaeger tracer as a global tracer: GlobalTracer.register(tracer); A Jaeger tracer is now initialized for the client application to use. 16.2.2. Instrumenting producers and consumers for tracing Use a Decorator pattern or Interceptors to instrument your Java producer and consumer application code for tracing. Procedure In the application code of each producer and consumer application: Add a Maven dependency for OpenTracing to the producer or consumer's pom.xml file. <dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.12.redhat-00001</version> </dependency> Instrument your client application code using either a Decorator pattern or Interceptors. To use a Decorator pattern: // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); To use Interceptors: // Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList("messages")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = ... SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer); Custom span names in a Decorator pattern A span is a logical unit of work in Jaeger, with an operation name, start time, and duration. 
To use a Decorator pattern to instrument your producer and consumer applications, define custom span names by passing a BiFunction object as an additional argument when creating the TracingKafkaProducer and TracingKafkaConsumer objects. The OpenTracing Apache Kafka Client Instrumentation library includes several built-in span names. Example: Using custom span names to instrument client application code in a Decorator pattern // Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> "CUSTOM_PRODUCER_NAME"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have "CUSTOM_PRODUCER_NAME" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // "receive" -> "RECEIVE" Built-in span names When defining custom span names, you can use the following BiFunctions in the ClientSpanNameProvider class. If no spanNameProvider is specified, CONSUMER_OPERATION_NAME and PRODUCER_OPERATION_NAME are used. Table 16.1. BiFunctions to define custom span names BiFunction Description CONSUMER_OPERATION_NAME, PRODUCER_OPERATION_NAME Returns the operationName as the span name: "receive" for consumers and "send" for producers. CONSUMER_PREFIXED_OPERATION_NAME(String prefix), PRODUCER_PREFIXED_OPERATION_NAME(String prefix) Returns a String concatenation of prefix and operationName . CONSUMER_TOPIC, PRODUCER_TOPIC Returns the name of the topic that the message was sent to or retrieved from in the format (record.topic()) . PREFIXED_CONSUMER_TOPIC(String prefix), PREFIXED_PRODUCER_TOPIC(String prefix) Returns a String concatenation of prefix and the topic name in the format (record.topic()) . CONSUMER_OPERATION_NAME_TOPIC, PRODUCER_OPERATION_NAME_TOPIC Returns the operation name and the topic name: "operationName - record.topic()" . CONSUMER_PREFIXED_OPERATION_NAME_TOPIC(String prefix), PRODUCER_PREFIXED_OPERATION_NAME_TOPIC(String prefix) Returns a String concatenation of prefix and "operationName - record.topic()" . 16.2.3. Instrumenting Kafka Streams applications for tracing Instrument Kafka Streams applications for distributed tracing using a supplier interface. This enables the Interceptors in the application. Procedure In each Kafka Streams application: Add the opentracing-kafka-streams dependency to the Kafka Streams application's pom.xml file. 
<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.12.redhat-00001</version> </dependency> Create an instance of the TracingKafkaClientSupplier supplier interface: KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer); Provide the supplier interface to KafkaStreams : KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start(); 16.3. Setting up tracing for MirrorMaker and Kafka Connect This section describes how to configure MirrorMaker, MirrorMaker 2.0, and Kafka Connect for distributed tracing. You must enable a Jaeger tracer for each component. 16.3.1. Enabling tracing for MirrorMaker Enable distributed tracing for MirrorMaker by passing the Interceptor properties as consumer and producer configuration parameters. Messages are traced from the source cluster to the target cluster. The trace data records messages entering and leaving the MirrorMaker component. Procedure Configure and enable a Jaeger tracer. Edit the /opt/kafka/config/consumer.properties file. Add the following Interceptor property: consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Edit the /opt/kafka/config/producer.properties file. Add the following Interceptor property: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor Start MirrorMaker with the consumer and producer configuration files as parameters: su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2 16.3.2. Enabling tracing for MirrorMaker 2.0 Enable distributed tracing for MirrorMaker 2.0 by defining the Interceptor properties in the MirrorMaker 2.0 properties file. Messages are traced between Kafka clusters. The trace data records messages entering and leaving the MirrorMaker 2.0 component. Procedure Configure and enable a Jaeger tracer. Edit the MirrorMaker 2.0 configuration properties file, ./config/connect-mirror-maker.properties , and add the following properties: header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor 1 Prevents Kafka Connect from converting message headers (containing trace IDs) to base64 encoding. This ensures that messages are the same in both the source and the target clusters. 2 Enables the Interceptors for MirrorMaker 2.0. Start MirrorMaker 2.0 using the instructions in Section 10.4, "Synchronizing data between Kafka clusters using MirrorMaker 2.0" . Additional resources Chapter 10, Using AMQ Streams with MirrorMaker 2.0 16.3.3. Enabling tracing for Kafka Connect Enable distributed tracing for Kafka Connect using configuration properties. Only messages produced and consumed by Kafka Connect itself are traced. To trace messages sent between Kafka Connect and external systems, you must configure tracing in the connectors for those systems. Procedure Configure and enable a Jaeger tracer. Edit the relevant Kafka Connect configuration file. If you are running Kafka Connect in standalone mode, edit the /opt/kafka/config/connect-standalone.properties file. If you are running Kafka Connect in distributed mode, edit the /opt/kafka/config/connect-distributed.properties file. 
Add the following properties to the configuration file: producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor Save the configuration file. Set tracing environment variables and then run Kafka Connect in standalone or distributed mode. The Interceptors in Kafka Connect's internal consumers and producers are now enabled. Additional resources Section 16.5, "Environment variables for tracing" Section 9.1.3, "Running Kafka Connect in standalone mode" Section 9.2.3, "Running distributed Kafka Connect" 16.4. Enabling tracing for the Kafka Bridge Enable distributed tracing for the Kafka Bridge by editing the Kafka Bridge configuration file. You can then deploy a Kafka Bridge instance that is configured for distributed tracing to the host operating system. Traces are generated when: The Kafka Bridge sends messages to HTTP clients and consumes messages from HTTP clients HTTP clients send HTTP requests to send and receive messages through the Kafka Bridge To have end-to-end tracing, you must configure tracing in your HTTP clients. Procedure Edit the config/application.properties file in the Kafka Bridge installation directory. Remove the code comments from the following line: bridge.tracing=jaeger Save the configuration file. Run the bin/kafka_bridge_run.sh script using the configuration properties as a parameter: cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties The Interceptors in the Kafka Bridge's internal consumers and producers are now enabled. Additional resources Section 13.1.6, "Configuring Kafka Bridge properties" 16.5. Environment variables for tracing Use these environment variables when configuring a Jaeger tracer for Kafka clients and components. Note The tracing environment variables are part of the Jaeger project and are subject to change. For the latest environment variables, see the Jaeger documentation . Table 16.2. Jaeger tracer environment variables Property Required Description JAEGER_SERVICE_NAME Yes The name of the Jaeger tracer service. JAEGER_AGENT_HOST No The hostname for communicating with the jaeger-agent through the User Datagram Protocol (UDP). JAEGER_AGENT_PORT No The port used for communicating with the jaeger-agent through UDP. JAEGER_ENDPOINT No The traces endpoint. Only define this variable if the client application will bypass the jaeger-agent and connect directly to the jaeger-collector . JAEGER_AUTH_TOKEN No The authentication token to send to the endpoint as a bearer token. JAEGER_USER No The username to send to the endpoint if using basic authentication. JAEGER_PASSWORD No The password to send to the endpoint if using basic authentication. JAEGER_PROPAGATION No A comma-separated list of formats to use for propagating the trace context. Defaults to the standard Jaeger format. Valid values are jaeger and b3 . JAEGER_REPORTER_LOG_SPANS No Indicates whether the reporter should also log the spans. JAEGER_REPORTER_MAX_QUEUE_SIZE No The reporter's maximum queue size. JAEGER_REPORTER_FLUSH_INTERVAL No The reporter's flush interval, in ms. Defines how frequently the Jaeger reporter flushes span batches. JAEGER_SAMPLER_TYPE No The sampling strategy to use for client traces: Constant Probabilistic Rate Limiting Remote (the default) To sample all traces, use the Constant sampling strategy with a parameter of 1. For more information, see the Jaeger documentation . 
JAEGER_SAMPLER_PARAM No The sampler parameter (number). JAEGER_SAMPLER_MANAGER_HOST_PORT No The hostname and port to use if a Remote sampling strategy is selected. JAEGER_TAGS No A comma-separated list of tracer-level tags that are added to all reported spans. The value can also refer to an environment variable using the format USD{envVarName:default} . :default is optional and identifies a value to use if the environment variable cannot be found.
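For example, a client or component started from a shell might be configured with a set of exports such as the following before it is launched. The values are illustrative; in particular, the agent port shown is the conventional jaeger-agent UDP port and the lowercase sampler type string follows the Jaeger client conventions, so verify both against your Jaeger deployment:

# Service name shown for this client in the Jaeger user interface
export JAEGER_SERVICE_NAME=kafka-consumer-app
# Host and UDP port of the jaeger-agent
export JAEGER_AGENT_HOST=localhost
export JAEGER_AGENT_PORT=6831
# Sample all traces: Constant sampling strategy with a parameter of 1
export JAEGER_SAMPLER_TYPE=const
export JAEGER_SAMPLER_PARAM=1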
|
[
"<dependency> <groupId>io.jaegertracing</groupId> <artifactId>jaeger-client</artifactId> <version>1.1.0.redhat-00002</version> </dependency>",
"Tracer tracer = Configuration.fromEnv().getTracer();",
"GlobalTracer.register(tracer);",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-client</artifactId> <version>0.1.12.redhat-00001</version> </dependency>",
"// Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer: TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer); // Send: tracingProducer.send(...); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer); // Subscribe: tracingConsumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = tracingConsumer.poll(1000); // Retrieve SpanContext from polled record (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Register the tracer with GlobalTracer: GlobalTracer.register(tracer); // Add the TracingProducerInterceptor to the sender properties: senderProps.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingProducerInterceptor.class.getName()); // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Send: producer.send(...); // Add the TracingConsumerInterceptor to the consumer properties: consumerProps.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, TracingConsumerInterceptor.class.getName()); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Subscribe: consumer.subscribe(Collections.singletonList(\"messages\")); // Get messages: ConsumerRecords<Integer, String> records = consumer.poll(1000); // Retrieve the SpanContext from a polled message (consumer side): ConsumerRecord<Integer, String> record = SpanContext spanContext = TracingKafkaUtils.extractSpanContext(record.headers(), tracer);",
"// Create a BiFunction for the KafkaProducer that operates on (String operationName, ProducerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ProducerRecord, String> producerSpanNameProvider = (operationName, producerRecord) -> \"CUSTOM_PRODUCER_NAME\"; // Create an instance of the KafkaProducer: KafkaProducer<Integer, String> producer = new KafkaProducer<>(senderProps); // Create an instance of the TracingKafkaProducer TracingKafkaProducer<Integer, String> tracingProducer = new TracingKafkaProducer<>(producer, tracer, producerSpanNameProvider); // Spans created by the tracingProducer will now have \"CUSTOM_PRODUCER_NAME\" as the span name. // Create a BiFunction for the KafkaConsumer that operates on (String operationName, ConsumerRecord consumerRecord) and returns a String to be used as the name: BiFunction<String, ConsumerRecord, String> consumerSpanNameProvider = (operationName, consumerRecord) -> operationName.toUpperCase(); // Create an instance of the KafkaConsumer: KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(consumerProps); // Create an instance of the TracingKafkaConsumer, passing in the consumerSpanNameProvider BiFunction: TracingKafkaConsumer<Integer, String> tracingConsumer = new TracingKafkaConsumer<>(consumer, tracer, consumerSpanNameProvider); // Spans created by the tracingConsumer will have the operation name as the span name, in upper-case. // \"receive\" -> \"RECEIVE\"",
"<dependency> <groupId>io.opentracing.contrib</groupId> <artifactId>opentracing-kafka-streams</artifactId> <version>0.1.12.redhat-00001</version> </dependency>",
"KafkaClientSupplier supplier = new TracingKafkaClientSupplier(tracer);",
"KafkaStreams streams = new KafkaStreams(builder.build(), new StreamsConfig(config), supplier); streams.start();",
"consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"su - kafka /opt/kafka/bin/kafka-mirror-maker.sh --consumer.config /opt/kafka/config/consumer.properties --producer.config /opt/kafka/config/producer.properties --num.streams=2",
"header.converter=org.apache.kafka.connect.converters.ByteArrayConverter 1 consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor 2 producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor",
"producer.interceptor.classes=io.opentracing.contrib.kafka.TracingProducerInterceptor consumer.interceptor.classes=io.opentracing.contrib.kafka.TracingConsumerInterceptor",
"bridge.tracing=jaeger",
"cd kafka-bridge-0.xy.x.redhat-0000x ./bin/kafka_bridge_run.sh --config-file=config/application.properties"
] |
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/assembly-distributed-tracing-str
|
9.4. Selecting Appropriate Authentication Methods
|
9.4. Selecting Appropriate Authentication Methods A basic decision regarding the security policy is how users access the directory. Are anonymous users allowed to access the directory, or is every user required to log into the directory with a user name and password (authenticate)? Directory Server provides the following methods for authentication: Section 9.4.1, "Anonymous and Unauthenticated Access" Section 9.4.2, "Simple Binds and Secure Binds" Section 9.4.3, "Certificate-Based Authentication" Section 9.4.4, "Proxy Authentication" Section 9.4.6, "Password-less Authentication" The directory uses the same authentication mechanism for all users, whether they are people or LDAP-aware applications. For information about preventing authentication by a client or group of clients, see Section 9.5, "Designing an Account Lockout Policy" . 9.4.1. Anonymous and Unauthenticated Access Anonymous access provides the easiest form of access to the directory. It makes data available to any user of the directory, regardless of whether they have authenticated. However, anonymous access does not allow administrators to track who is performing what kinds of searches, only that someone is performing searches. With anonymous access, anyone who connects to the directory can access the data. Therefore, an administrator may attempt to block a specific user or group of users from accessing some kinds of directory data, but, if anonymous access is allowed to that data, those users can still access the data simply by binding to the directory anonymously. Anonymous access can be limited. Usually directory administrators only allow anonymous access for read, search, and compare privileges (not for write, add, delete, or selfwrite). Often, administrators limit access to a subset of attributes that contain general information such as names, telephone numbers, and email addresses. Anonymous access should never be allowed for more sensitive data such as government identification numbers (for example, Social Security Numbers in the US), home telephone numbers and addresses, and salary information. Anonymous access can also be disabled entirely, if there is a need for tighter rules on who accesses the directory data. An unauthenticated bind is when a user attempts to bind with a user name but without a user password attribute. For example: The Directory Server grants anonymous access if the user does not attempt to provide a password. An unauthenticated bind does not require that the bind DN be an existing entry. As with anonymous binds, unauthenticated binds can be disabled to increase security by limiting access to the database. Disabling unauthenticated binds has another advantage: it can be used to prevent silent bind failures for clients. A poorly-written application may believe that it successfully authenticated to the directory because it received a bind success message when, in reality, it failed to pass a password and simply connected with an unauthenticated bind. 9.4.2. Simple Binds and Secure Binds If anonymous access is not allowed, users must authenticate to the directory before they can access the directory contents. With simple password authentication, a client authenticates to the server by sending a reusable password. For example, a client authenticates to the directory using a bind operation in which it provides a distinguished name and a set of credentials. 
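For illustration, a minimal simple bind performed with the ldapsearch command-line utility might look like the following; the host name and bind DN are examples only:

# Simple bind (-x) as a specific user; -W prompts for the password
# instead of passing it on the command line.
ldapsearch -x -h server.example.com -p 389 \
    -D "uid=fchen,ou=people,dc=example,dc=com" -W \
    -b "dc=example,dc=com" "(uid=fchen)"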
The server locates the entry in the directory that corresponds to the client DN and checks whether the password given by the client matches the value stored with the entry. If it does, the server authenticates the client. If it does not, the authentication operation fails, and the client receives an error message. The bind DN often corresponds to the entry of a person. However, some directory administrators find it useful to bind as an organizational entry rather than as a person. The directory requires the entry used to bind to be of an object class that allows the userPassword attribute. This ensures that the directory recognizes the bind DN and password. Most LDAP clients hide the bind DN from the user because users may find the long strings of DN characters hard to remember. When a client attempts to hide the bind DN from the user, it uses a bind algorithm such as the following: The user enters a unique identifier, such as a user ID (for example, fchen ). The LDAP client application searches the directory for that identifier and returns the associated distinguished name (such as uid=fchen,ou=people,dc=example,dc=com ). The LDAP client application binds to the directory using the retrieved distinguished name and the password supplied by the user. Simple password authentication offers an easy way to authenticate users, but it requires extra security to be used safely. Consider restricting its use to the organization's intranet. For connections between business partners over an extranet, or for transmissions with customers on the Internet, it is best to require a secure (encrypted) connection. Note The drawback of simple password authentication is that the password is sent in plain text. If an unauthorized user is listening, this can compromise the security of the directory because that person can impersonate an authorized user. The nsslapd-require-secure-binds configuration attribute requires simple password authentication to occur over a secure connection, using TLS or Start TLS. When a secure connection is established between Directory Server and a client application using TLS or the Start TLS operation, the client performs a simple bind with an extra level of protection because the password is never transmitted in plaintext and cannot be intercepted. This setting also allows alternative secure connections, such as SASL authentication or certificate-based authentication. For more information about secure connections, see Section 9.9, "Securing Server Connections" . 9.4.3. Certificate-Based Authentication An alternative form of directory authentication involves using digital certificates to bind to the directory. The directory prompts users for a password when they first access it. However, rather than matching a password stored in the directory, the password opens the user's certificate database. If the user supplies the correct password, the directory client application obtains authentication information from the certificate database. The client application and the directory then use this information to identify the user by mapping the user's certificate to a directory DN. The directory allows or denies access based on the directory DN identified during this authentication process. For more information about certificates and TLS, see the Administration Guide . 9.4.4.
Proxy Authentication Proxy authentication is a special form of authentication because the user requesting access to the directory does not bind with its own DN but with a proxy DN . The proxy DN is an entity that has appropriate rights to perform the operation requested by the user. When proxy rights are granted to a person or an application, they are granted the right to specify any DN as a proxy DN, with the exception of the Directory Manager DN. One of the main advantages of proxy right is that an LDAP application can be enabled to use a single thread with a single bind to service multiple users making requests against the Directory Server. Instead of having to bind and authenticate for each user, the client application binds to the Directory Server using a proxy DN. The proxy DN is specified in the LDAP operation submitted by the client application. For example: This ldapmodify command gives the manager entry ( cn=Directory Manager ) the permissions of a user named Joe ( cn=joe ) to apply the modifications in the mods.ldif file. The manager does not need to provide Joe's password to make this change. Note The proxy mechanism is very powerful and must be used sparingly. Proxy rights are granted within the scope of the ACL, and there is no way to restrict who can be impersonated by an entry that has the proxy right. That is, when a user is granted proxy rights, that user has the ability to proxy for any user under the target; there is no way to restrict the proxy rights to only certain users. For example, if an entity has proxy rights to the dc=example,dc=com tree, that entity can do anything. Therefore, ensure that the proxy ACI is set at the lowest possible level of the DIT. For more information on this topic, check out the "Proxied Authorization ACI Example" section in the "Managing Access Control" chapter of the Administration Guide . 9.4.5. Pass-through Authentication Pass-through authentication is when any authentication request is forwarded from one server to another service. For example, whenever all of the configuration information for an instance is stored in another directory instance, the Directory Server uses pass-through authentication for the User Directory Server to connect to the Configuration Directory Server. Directory Server-to-Directory Server pass-through authentication is handled with the PTA Plug-in. Figure 9.1. Simple Pass-through Authentication Process Many systems already have authentication mechanisms in place for Unix and Linux users. One of the most common authentication frameworks is Pluggable Authentication Modules (PAM). Since many networks already have existing authentication services available, administrators may want to continue using those services. A PAM module can be configured to tell Directory Server to use an existing authentication store for LDAP clients. PAM pass-through authentication in Red Hat Directory Server uses the PAM Pass-through Authentication Plug-in, which enables the Directory Server to talk to the PAM service to authenticate LDAP clients. Figure 9.2. PAM Pass-through Authentication Process With PAM pass-through authentication, when a user attempts to bind to the Directory Server, the credentials are forwarded to the PAM service. If the credentials match the information in the PAM service, then the user can successfully bind to the Directory Server, with all of the Directory Server access control restrictions and account settings in place. 
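As a sketch only, the PAM Pass-through Authentication Plug-in is enabled by updating its configuration entry under cn=plugins,cn=config . The attribute names below ( pamService , pamIDMapMethod , pamFallback ) are assumptions based on the plug-in's configuration schema, and the values are illustrative; check the configuration reference for your version before applying anything like this:

# Illustrative LDIF: enable the PAM Pass Through Auth plug-in and point it
# at the PAM service used by SSSD (system-auth).
dn: cn=PAM Pass Through Auth,cn=plugins,cn=config
changetype: modify
replace: nsslapd-pluginEnabled
nsslapd-pluginEnabled: on
-
replace: pamService
pamService: system-auth
-
replace: pamIDMapMethod
pamIDMapMethod: RDN
-
replace: pamFallback
pamFallback: FALSE

A change to plug-in configuration of this kind typically requires a restart of the Directory Server instance to take effect.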
Note The Directory Server can be configured to use PAM, but it cannot be used to set up PAM to use the Directory Server for authentication. For PAM to use a Directory Server instance for authentication, the pam_ldap module must be properly configured. For general configuration information about pam_ldap , look at the manpage (such as http://linux.die.net/man/5/pam_ldap ). The PAM service can be configured using system tools like the System Security Services Daemon (SSSD). SSSD can use a variety of different identity providers, including Active Directory, Red Hat Directory Server or other directories like OpenLDAP, or local system settings. To use SSSD, simply point the PAM Pass-through Authentication Plug-in to the PAM file used by SSSD, /etc/pam.d/system-auth by default. 9.4.6. Password-less Authentication An authentication attempt evaluates, first, whether the user account has the ability to authenticate. The account must be active, it must not be locked, and it must have a valid password according to any applicable password policy (meaning it cannot be expired or need to be reset). There can be times when that evaluation of whether a user should be permitted to authenticate needs to be performed, but the user should not (or cannot) be bound to the Directory Server for real. For example, a system may be using PAM to manage system accounts, and PAM is configured to use the LDAP directory as its identity store. However, the system is using password-less credentials, such as SSH keys or RSA tokens, and those credentials cannot be passed to authenticate to the Directory Server. Red Hat Directory Server supports the Account Usability Extension Control for ldapsearch es. This control returns information about the account status and any password policies in effect (like requiring a reset, a password expiration warning, or the number of grace logins left after password expiration) - all the information that would be returned in a bind attempt but without authenticating and binding to the Directory Server as that user. That allows the client to determine if the user should be allowed to authenticate based on the Directory Server settings and information, but the actual authentication process is performed outside of Directory Server. This control can be used with system-level services like PAM to allow password-less logins which still use Directory Server to store identities and even control account status. Note The Account Usability Extension Control can only be used by the Directory Manager, by default. To allow other users to use the control, set the appropriate ACI on the supported control entry, oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config .
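As a minimal sketch, such an ACI could be added to the control's entry with an LDIF update applied through ldapmodify ; the ACI string, permission set, and group DN are illustrative and should be adapted to the local access-control policy:

# Illustrative LDIF: allow members of an administrators group to use
# the Account Usability Extension Control.
dn: oid=1.3.6.1.4.1.42.2.27.9.5.8,cn=features,cn=config
changetype: modify
add: aci
aci: (targetattr = "*")(version 3.0; acl "Account Usability"; allow (read, search, compare) groupdn = "ldap:///cn=Administrators,ou=Groups,dc=example,dc=com";)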
|
[
"ldapsearch -x -D \"cn=jsmith,ou=people,dc=example,dc=com\" -b \"dc=example,dc=com\" \"(cn=joe)\"",
"ldapmodify -D \"cn=Directory Manager\" -W -x -D \"cn=directory manager\" -W -p 389 -h server.example.com -x -Y \"cn=joe,dc=example,dc=com\" -f mods.ldif"
] |
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/Designing_a_Secure_Directory-Selecting_Appropriate_Authentication_Methods
|
9.2. OpenLDAP
|
9.2. OpenLDAP This section covers the installation and configuration of OpenLDAP 2.4 , an open source implementation of the LDAPv2 and LDAPv3 protocols. Note Starting with Red Hat Enterprise Linux 7.4, the openldap-server package has been deprecated and will not be included in a future major release of Red Hat Enterprise Linux. For this reason, migrate to Identity Management included in Red Hat Enterprise Linux or to Red Hat Directory Server. For further details about Identity Management, see Linux Domain Identity, Authentication, and Policy Guide . For further details about Directory Server, see Section 9.1, "Red Hat Directory Server" . 9.2.1. Introduction to LDAP Using a client-server architecture, LDAP provides a reliable means to create a central information directory accessible from the network. When a client attempts to modify information within this directory, the server verifies the user has permission to make the change, and then adds or updates the entry as requested. To ensure the communication is secure, the Transport Layer Security ( TLS ) cryptographic protocol can be used to prevent an attacker from intercepting the transmission. Important The OpenLDAP suite in Red Hat Enterprise Linux 7.5 and later no longer uses Mozilla implementation of Network Security Services ( NSS ). Instead, it uses the OpenSSL . OpenLDAP continues to work with existing NSS database configuration. Important Due to the vulnerability described in Resolution for POODLE SSLv3.0 vulnerability (CVE-2014-3566) for components that do not allow SSLv3 to be disabled via configuration settings , Red Hat recommends that you do not rely on the SSLv3 protocol for security. OpenLDAP is one of the system components that do not provide configuration parameters that allow SSLv3 to be effectively disabled. To mitigate the risk, it is recommended that you use the stunnel command to provide a secure tunnel, and disable stunnel from using SSLv3 . For more information on using stunnel , see the Red Hat Enterprise Linux 7 Security Guide . The LDAP server supports several database systems, which gives administrators the flexibility to choose the best suited solution for the type of information they are planning to serve. Because of a well-defined client Application Programming Interface ( API ), the number of applications able to communicate with an LDAP server is numerous, and increasing in both quantity and quality. 9.2.1.1. LDAP Terminology The following is a list of LDAP-specific terms that are used within this chapter: entry A single unit within an LDAP directory. Each entry is identified by its unique Distinguished Name ( DN ). attribute Information directly associated with an entry. For example, if an organization is represented as an LDAP entry, attributes associated with this organization might include an address, a fax number, and so on. Similarly, people can be represented as entries with common attributes such as personal telephone number or email address. An attribute can either have a single value, or an unordered space-separated list of values. While certain attributes are optional, others are required. Required attributes are specified using the objectClass definition, and can be found in schema files located in the /etc/openldap/slapd.d/cn=config/cn=schema/ directory. The assertion of an attribute and its corresponding value is also referred to as a Relative Distinguished Name ( RDN ). Unlike distinguished names that are unique globally, a relative distinguished name is only unique per entry. 
LDIF The LDAP Data Interchange Format ( LDIF ) is a plain text representation of an LDAP entry. It takes the following form: The optional id is a number determined by the application that is used to edit the entry. Each entry can contain as many attribute_type and attribute_value pairs as needed, as long as they are all defined in a corresponding schema file. A blank line indicates the end of an entry. 9.2.1.2. OpenLDAP Features OpenLDAP suite provides a number of important features: LDAPv3 Support - Many of the changes in the protocol since LDAP version 2 are designed to make LDAP more secure. Among other improvements, this includes the support for Simple Authentication and Security Layer ( SASL ), Transport Layer Security ( TLS ), and Secure Sockets Layer ( SSL ) protocols. LDAP Over IPC - The use of inter-process communication ( IPC ) enhances security by eliminating the need to communicate over a network. IPv6 Support - OpenLDAP is compliant with Internet Protocol version 6 ( IPv6 ), the generation of the Internet Protocol. LDIFv1 Support - OpenLDAP is fully compliant with LDIF version 1. Updated C API - The current C API improves the way programmers can connect to and use LDAP directory servers. Enhanced Standalone LDAP Server - This includes an updated access control system, thread pooling, better tools, and much more. 9.2.1.3. OpenLDAP Server Setup The typical steps to set up an LDAP server on Red Hat Enterprise Linux are as follows: Install the OpenLDAP suite. See Section 9.2.2, "Installing the OpenLDAP Suite" for more information on required packages. Customize the configuration as described in Section 9.2.3, "Configuring an OpenLDAP Server" . Start the slapd service as described in Section 9.2.5, "Running an OpenLDAP Server" . Use the ldapadd utility to add entries to the LDAP directory. Use the ldapsearch utility to verify that the slapd service is accessing the information correctly. 9.2.2. Installing the OpenLDAP Suite The suite of OpenLDAP libraries and tools is provided by the following packages: Table 9.1. List of OpenLDAP packages Package Description openldap A package containing the libraries necessary to run the OpenLDAP server and client applications. openldap-clients A package containing the command line utilities for viewing and modifying directories on an LDAP server. openldap-servers A package containing both the services and utilities to configure and run an LDAP server. This includes the Standalone LDAP Daemon , slapd . compat-openldap A package containing the OpenLDAP compatibility libraries. Additionally, the following packages are commonly used along with the LDAP server: Table 9.2. List of commonly installed additional LDAP packages Package Description nss-pam-ldapd A package containing nslcd , a local LDAP name service that allows a user to perform local LDAP queries. mod_ldap A package containing the mod_authnz_ldap and mod_ldap modules. The mod_authnz_ldap module is the LDAP authorization module for the Apache HTTP Server. This module can authenticate users' credentials against an LDAP directory, and can enforce access control based on the user name, full DN, group membership, an arbitrary attribute, or a complete filter string. The mod_ldap module contained in the same package provides a configurable shared memory cache, to avoid repeated directory access across many HTTP requests, and also support for SSL/TLS. Note that this package is provided by the Optional channel. 
See Adding the Optional and Supplementary Repositories in the System Administrator's Guide for more information on Red Hat additional channels. To install these packages, use the yum command in the following form: For example, to perform the basic LDAP server installation, type the following at a shell prompt: Note that you must have superuser privileges (that is, you must be logged in as root ) to run this command. For more information on how to install new packages in Red Hat Enterprise Linux, see Installing Packages in the System Administrator's Guide . 9.2.2.1. Overview of OpenLDAP Server Utilities To perform administrative tasks, the openldap-servers package installs the following utilities along with the slapd service: Table 9.3. List of OpenLDAP server utilities Command Description slapacl Allows you to check the access to a list of attributes. slapadd Allows you to add entries from an LDIF file to an LDAP directory. slapauth Allows you to check a list of IDs for authentication and authorization permissions. slapcat Allows you to pull entries from an LDAP directory in the default format and save them in an LDIF file. slapdn Allows you to check a list of Distinguished Names (DNs) based on available schema syntax. slapindex Allows you to re-index the slapd directory based on the current content. Run this utility whenever you change indexing options in the configuration file. slappasswd Allows you to create an encrypted user password to be used with the ldapmodify utility, or in the slapd configuration file. slapschema Allows you to check the compliance of a database with the corresponding schema. slaptest Allows you to check the LDAP server configuration. For a detailed description of these utilities and their usage, see the corresponding manual pages as referred to in the section called "Installed Documentation" . Important Although only root can run slapadd , the slapd service runs as the ldap user. Because of this, the directory server is unable to modify any files created by slapadd . To correct this issue, after running the slapdadd utility, type the following at a shell prompt: Warning To preserve the data integrity, stop the slapd service before using slapadd , slapcat , or slapindex . You can do so by typing the following at a shell prompt: For more information on how to start, stop, restart, and check the current status of the slapd service, see Section 9.2.5, "Running an OpenLDAP Server" . 9.2.2.2. Overview of OpenLDAP Client Utilities The openldap-clients package installs the following utilities which can be used to add, modify, and delete entries in an LDAP directory: Table 9.4. List of OpenLDAP client utilities Command Description ldapadd Allows you to add entries to an LDAP directory, either from a file, or from standard input. It is a symbolic link to ldapmodify -a . ldapcompare Allows you to compare given attribute with an LDAP directory entry. ldapdelete Allows you to delete entries from an LDAP directory. ldapexop Allows you to perform extended LDAP operations. ldapmodify Allows you to modify entries in an LDAP directory, either from a file, or from standard input. ldapmodrdn Allows you to modify the RDN value of an LDAP directory entry. ldappasswd Allows you to set or change the password for an LDAP user. ldapsearch Allows you to search LDAP directory entries. ldapurl Allows you to compose or decompose LDAP URLs. ldapwhoami Allows you to perform a whoami operation on an LDAP server. 
With the exception of ldapsearch , each of these utilities is more easily used by referencing a file containing the changes to be made rather than typing a command for each entry to be changed within an LDAP directory. The format of such a file is outlined in the man page for each utility. 9.2.2.3. Overview of Common LDAP Client Applications Although there are various graphical LDAP clients capable of creating and modifying directories on the server, none of them is included in Red Hat Enterprise Linux. Popular applications that can access directories in a read-only mode include Mozilla Thunderbird , Evolution , or Ekiga . 9.2.3. Configuring an OpenLDAP Server By default, the OpenLDAP configuration is stored in the /etc/openldap/ directory. The following table highlights the most important directories and files within this directory: Table 9.5. List of OpenLDAP configuration files and directories Path Description /etc/openldap/ldap.conf The configuration file for client applications that use the OpenLDAP libraries. This includes ldapadd , ldapsearch , Evolution , and so on. /etc/openldap/slapd.d/ The directory containing the slapd configuration. Note that OpenLDAP no longer reads its configuration from the /etc/openldap/slapd.conf file. Instead, it uses a configuration database located in the /etc/openldap/slapd.d/ directory. If you have an existing slapd.conf file from a installation, you can convert it to the new format by running the following command: The slapd configuration consists of LDIF entries organized in a hierarchical directory structure, and the recommended way to edit these entries is to use the server utilities described in Section 9.2.2.1, "Overview of OpenLDAP Server Utilities" . Important An error in an LDIF file can render the slapd service unable to start. Because of this, it is strongly advised that you avoid editing the LDIF files within the /etc/openldap/slapd.d/ directly. 9.2.3.1. Changing the Global Configuration Global configuration options for the LDAP server are stored in the /etc/openldap/slapd.d/cn=config.ldif file. The following directives are commonly used: olcAllows The olcAllows directive allows you to specify which features to enable. It takes the following form: It accepts a space-separated list of features as described in Table 9.6, "Available olcAllows options" . The default option is bind_v2 . Table 9.6. Available olcAllows options Option Description bind_v2 Enables the acceptance of LDAP version 2 bind requests. bind_anon_cred Enables an anonymous bind when the Distinguished Name (DN) is empty. bind_anon_dn Enables an anonymous bind when the Distinguished Name (DN) is not empty. update_anon Enables processing of anonymous update operations. proxy_authz_anon Enables processing of anonymous proxy authorization control. Example 9.1. Using the olcAllows directive olcConnMaxPending The olcConnMaxPending directive allows you to specify the maximum number of pending requests for an anonymous session. It takes the following form: The default option is 100 . Example 9.2. Using the olcConnMaxPending directive olcConnMaxPendingAuth The olcConnMaxPendingAuth directive allows you to specify the maximum number of pending requests for an authenticated session. It takes the following form: The default option is 1000 . Example 9.3. Using the olcConnMaxPendingAuth directive olcDisallows The olcDisallows directive allows you to specify which features to disable. 
It takes the following form: It accepts a space-separated list of features as described in Table 9.7, "Available olcDisallows options" . No features are disabled by default. Table 9.7. Available olcDisallows options Option Description bind_anon Disables the acceptance of anonymous bind requests. bind_simple Disables the simple bind authentication mechanism. tls_2_anon Disables the enforcing of an anonymous session when the STARTTLS command is received. tls_authc Disallows the STARTTLS command when authenticated. Example 9.4. Using the olcDisallows directive olcIdleTimeout The olcIdleTimeout directive allows you to specify how many seconds to wait before closing an idle connection. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 9.5. Using the olcIdleTimeout directive olcLogFile The olcLogFile directive allows you to specify a file in which to write log messages. It takes the following form: The log messages are written to standard error by default. Example 9.6. Using the olcLogFile directive olcReferral The olcReferral option allows you to specify a URL of a server to process the request in case the server is not able to handle it. It takes the following form: This option is disabled by default. Example 9.7. Using the olcReferral directive olcWriteTimeout The olcWriteTimeout option allows you to specify how many seconds to wait before closing a connection with an outstanding write request. It takes the following form: This option is disabled by default (that is, set to 0 ). Example 9.8. Using the olcWriteTimeout directive 9.2.3.2. The Front End Configuration The OpenLDAP front end configuration is stored in the etc/openldap/slapd.d/cn=config/olcDatabase={-1}frontend.ldif file and defines global database options, such as access control lists (ACL). For details, see the Global Database Options section in the slapd-config (5) man page. 9.2.3.3. The Monitor Back End The /etc/openldap/slapd.d/cn=config/olcDatabase={1}monitor.ldif file controls the OpenLDAP monitor back end. If enabled, it is automatically generated and dynamically updated by OpenLDAP with information about the running status of the daemon. The suffix is cn=Monitor and cannot be changed. For further details, see the slapd-monitor (5) man page. 9.2.3.4. Database-Specific Configuration By default, the OpenLDAP server uses the hdb database back end. Besides that it uses a hierarchical database layout which supports subtree renames, it is identical to the bdb back end and uses the same configuration options. The configuration for this database back end is stored in the /etc/openldap/slapd.d/cn=config/olcDatabase={2}hdb.ldif file. For a list of other back end databases, see the slapd.backends (5) man page. Database-specific settings you find in the man page for the individual back ends. For example: Note The bdb and hdb back ends are deprecated. Consider using the mdb back end for new installations instead. The following directives are commonly used in a database-specific configuration: olcReadOnly The olcReadOnly directive allows you to use the database in a read-only mode. It takes the following form: It accepts either TRUE (enable the read-only mode), or FALSE (enable modifications of the database). The default option is FALSE . Example 9.9. Using the olcReadOnly directive olcRootDN The olcRootDN directive allows you to specify the user that is unrestricted by access controls or administrative limit parameters set for operations on the LDAP directory. 
It takes the following form: It accepts a Distinguished Name ( DN ). The default option is cn=Manager,dn=my-domain,dc=com . Example 9.10. Using the olcRootDN directive olcRootPW The olcRootPW directive allows you to set a password for the user that is specified using the olcRootDN directive. It takes the following form: It accepts either a plain text string, or a hash. To generate a hash, type the following at a shell prompt: Example 9.11. Using the olcRootPW directive olcSuffix The olcSuffix directive allows you to specify the domain for which to provide information. It takes the following form: It accepts a fully qualified domain name ( FQDN ). The default option is dc=my-domain,dc=com . Example 9.12. Using the olcSuffix directive 9.2.3.5. Extending Schema Since OpenLDAP 2.3, the /etc/openldap/slapd.d/ directory also contains LDAP definitions that were previously located in /etc/openldap/schema/ . It is possible to extend the schema used by OpenLDAP to support additional attribute types and object classes using the default schema files as a guide. However, this task is beyond the scope of this chapter. For more information on this topic, see https://openldap.org/doc/admin24/schema.html . 9.2.3.6. Establishing a Secure Connection The OpenLDAP suite and servers can be secured using the Transport Layer Security (TLS) framework. TLS is a cryptographic protocol designed to provide communication security over the network. OpenLDAP suite in Red Hat Enterprise Linux 7 uses OpenSSL as the TLS implementation. To establish a secure connection using TLS, obtain the required certificates. Then, a number of options must be configured on both the client and the server. At minimum, a server must be configured with the Certificate Authority (CA) certificates and also its own server certificate and private key. The clients must be configured with the name of the file containing all the trusted CA certificates. Typically, a server only needs to sign a single CA certificate. A client may want to connect to a variety of secure servers, therefore it is common to specify a list of several trusted CAs in its configuration. Server Configuration This section lists global configuration directives for slapd that need to be specified in the /etc/openldap/slapd.d/cn=config.ldif file on an OpenLDAP server in order to establish TLS. While the old style configuration uses a single file, normally installed as /usr/local/etc/openldap/slapd.conf , the new style uses a slapd back end database to store the configuration. The configuration database normally resides in the /usr/local/etc/openldap/slapd.d/ directory. The following directives are also valid for establishing SSL. In addition to TLS directives, you need to enable a port dedicated to SSL on the server side - typically it is port 636. To do so, edit the /etc/sysconfig/slapd file and append the ldaps:/// string to the list of URLs specified with the SLAPD_URLS directive. olcTLSCACertificateFile The olcTLSCACertificateFile directive specifies the file encoded with privacy-enhanced mail (PEM) schema that contains trusted CA certificates. The directive takes the following form: olcTLSCACertificateFile : path Replace path with the path to the CA certificate file. olcTLSCACertificatePath The olcTLSCACertificatePath directive specifies the path to a directory containing individual CA certificates in separate files. 
This directory must be specially managed with the OpenSSL c_rehash utility that generates symbolic links with the hashed names that point to the actual certificate files. In general, it is simpler to use the olcTLSCACertificateFile directive instead. The directive takes the following form: olcTLSCACertificatePath : path Replace path with a path to the directory containing the CA certificate files. The specified directory must be managed with the OpenSSL c_rehash utility. olcTLSCertificateFile The olcTLSCertificateFile directive specifies the file that contains the slapd server certificate. The directive takes the following form: olcTLSCertificateFile : path Replace path with a path to the server certificate file of the slapd service. olcTLSCertificateKeyFile The olcTLSCertificateKeyFile directive specifies the file that contains the private key that matches the certificate stored in the file specified with olcTLSCertificateFile . Note that the current implementation does not support encrypted private keys, and therefore the containing file must be sufficiently protected. The directive takes the following form: olcTLSCertificateKeyFile : path Replace path with a path to the private key file. Client Configuration Specify the following directives in the /etc/openldap/ldap.conf configuration file on the client system. Most of these directives are parallel to the server configuration options. Directives in /etc/openldap/ldap.conf are configured on a system-wide basis, however, individual users may override them in their ~/.ldaprc files. The same directives can be used to establish an SSL connection. The ldaps:// string must be used instead of ldap:// in OpenLDAP commands such as ldapsearch . This forces commands to use the default port for SSL, port 636, configured on the server. TLS_CACERT The TLS_CACERT directive specifies a file containing certificates for all of the Certificate Authorities the client will recognize. This is equivalent to the olcTLSCACertificateFile directive on a server. TLS_CACERT should always be specified before TLS_CACERTDIR in /etc/openldap/ldap.conf . The directive takes the following form: TLS_CACERT path Replace path with a path to the CA certificate file. TLS_CACERTDIR The TLS_CACERTDIR directive specifies the path to a directory that contains Certificate Authority certificates in separate files. As with olcTLSCACertificatePath on a server, the specified directory must be managed with the OpenSSL c_rehash utility. TLS_CACERTDIR directory Replace directory with a path to the directory containing CA certificate files. TLS_CERT The TLS_CERT specifies the file that contains a client certificate. This directive can only be specified in a user's ~/.ldaprc file. The directive takes the following form: TLS_CERT path Replace path with a path to the client certificate file. TLS_KEY The TLS_KEY specifies the file that contains the private key that matches the certificate stored in the file specified with the TLS_CERT directive. As with olcTLSCertificateFile on a server, encrypted key files are not supported, so the file itself must be carefully protected. This option is only configurable in a user's ~/.ldaprc file. The TLS_KEY directive takes the following form: TLS_KEY path Replace path with a path to the client certificate file. 9.2.3.7. Setting Up Replication Replication is the process of copying updates from one LDAP server ( provider ) to one or more other servers or clients ( consumers ). 
A provider replicates directory updates to consumers, the received updates can be further propagated by the consumer to other servers, so a consumer can also act simultaneously as a provider. Also, a consumer does not have to be an LDAP server, it may be just an LDAP client. In OpenLDAP, you can use several replication modes, most notable are mirror and sync . For more information on OpenLDAP replication modes, see the OpenLDAP Software Administrator's Guide installed with openldap-servers package (see the section called "Installed Documentation" ). To enable a chosen replication mode, use one of the following directives in /etc/openldap/slapd.d/ on both provider and consumers. olcMirrorMode The olcMirrorMode directive enables the mirror replication mode. It takes the following form: olcMirrorMode on This option needs to be specified both on provider and consumers. Also a serverID must be specified along with syncrepl options. Find a detailed example in the 18.3.4. MirrorMode section of the OpenLDAP Software Administrator's Guide (see the section called "Installed Documentation" ). olcSyncrepl The olcSyncrepl directive enables the sync replication mode. It takes the following form: olcSyncrepl on The sync replication mode requires a specific configuration on both the provider and the consumers. This configuration is thoroughly described in the 18.3.1. Syncrepl section of the OpenLDAP Software Administrator's Guide (see the section called "Installed Documentation" ). 9.2.3.8. Loading Modules and Back ends You can enhance the slapd service with dynamically loaded modules. Support for these modules must be enabled with the --enable-modules option when configuring slapd . Modules are stored in files with the .la extension: module_name .la Back ends store or retrieve data in response to LDAP requests. Back ends may be compiled statically into slapd , or when module support is enabled, they may be dynamically loaded. In the latter case, the following naming convention is applied: back_ backend_name .la To load a module or a back end, use the following directive in /etc/openldap/slapd.d/ : olcModuleLoad The olcModuleLoad directive specifies a dynamically loadable module to load. It takes the following form: olcModuleLoad : module Here, module stands either for a file containing the module, or a back end, that will be loaded. 9.2.4. SELinux Policy for Applications Using LDAP SELinux is an implementation of a mandatory access control mechanism in the Linux kernel. By default, SELinux prevents applications from accessing an OpenLDAP server. To enable authentication through LDAP, which is required by several applications, the allow_ypbind SELinux Boolean needs to be enabled. Certain applications also demand an enabled authlogin_nsswitch_use_ldap Boolean in this scenario. Execute the following commands to enable the aforementioned Booleans: ~]# setsebool -P allow_ypbind = 1 ~]# setsebool -P authlogin_nsswitch_use_ldap = 1 The -P option makes this setting persistent across system reboots. See the Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide for more detailed information about SELinux. 9.2.5. Running an OpenLDAP Server This section describes how to start, stop, restart, and check the current status of the Standalone LDAP Daemon . For more information on how to manage system services in general, see Managing Services with systemd in the System Administrator's Guide . 9.2.5.1. 
Starting the Service To start the slapd service in the current session, type the following at a shell prompt as root : To configure the service to start automatically at the boot time, use the following command as root : 9.2.5.2. Stopping the Service To stop the running slapd service in the current session, type the following at a shell prompt as root : To prevent the service from starting automatically at the boot time, type as root : 9.2.5.3. Restarting the Service To restart the running slapd service, type the following at a shell prompt: This stops the service and immediately starts it again. Use this command to reload the configuration. 9.2.5.4. Verifying the Service Status To verify that the slapd service is running, type the following at a shell prompt: 9.2.6. Configuring a System to Authenticate Using OpenLDAP In order to configure a system to authenticate using OpenLDAP, make sure that the appropriate packages are installed on both LDAP server and client machines. For information on how to set up the server, follow the instructions in Section 9.2.2, "Installing the OpenLDAP Suite" and Section 9.2.3, "Configuring an OpenLDAP Server" . On a client, type the following at a shell prompt: 9.2.6.1. Migrating Old Authentication Information to LDAP Format The migrationtools package provides a set of shell and Perl scripts to help you migrate authentication information into an LDAP format. To install this package, type the following at a shell prompt: This will install the scripts to the /usr/share/migrationtools/ directory. Once installed, edit the /usr/share/migrationtools/migrate_common.ph file and change the following lines to reflect the correct domain, for example: Alternatively, you can specify the environment variables directly on the command line. For example, to run the migrate_all_online.sh script with the default base set to dc=example,dc=com , type: To decide which script to run in order to migrate the user database, see Table 9.8, "Commonly used LDAP migration scripts" . Table 9.8. Commonly used LDAP migration scripts Existing Name Service Is LDAP Running? Script to Use /etc flat files yes migrate_all_online.sh /etc flat files no migrate_all_offline.sh NetInfo yes migrate_all_netinfo_online.sh NetInfo no migrate_all_netinfo_offline.sh NIS (YP) yes migrate_all_nis_online.sh NIS (YP) no migrate_all_nis_offline.sh For more information on how to use these scripts, see the README and the migration-tools.txt files in the /usr/share/doc/migrationtools- version / directory. 9.2.7. Additional Resources The following resources offer additional information on the Lightweight Directory Access Protocol. Before configuring LDAP on your system, it is highly recommended that you review these resources, especially the OpenLDAP Software Administrator's Guide . Installed Documentation The following documentation is installed with the openldap-servers package: /usr/share/doc/openldap-servers- version /guide.html - A copy of the OpenLDAP Software Administrator's Guide . /usr/share/doc/openldap-servers- version /README.schema - A README file containing the description of installed schema files. Additionally, there is also a number of manual pages that are installed with the openldap , openldap-servers , and openldap-clients packages: Client Applications ldapadd (1) - The manual page for the ldapadd command describes how to add entries to an LDAP directory. ldapdelete (1) - The manual page for the ldapdelete command describes how to delete entries within an LDAP directory. 
ldapmodify (1) - The manual page for the ldapmodify command describes how to modify entries within an LDAP directory. ldapsearch (1) - The manual page for the ldapsearch command describes how to search for entries within an LDAP directory. ldappasswd (1) - The manual page for the ldappasswd command describes how to set or change the password of an LDAP user. ldapcompare (1) - Describes how to use the ldapcompare tool. ldapwhoami (1) - Describes how to use the ldapwhoami tool. ldapmodrdn (1) - Describes how to modify the RDNs of entries. Server Applications slapd (8C) - Describes command line options for the LDAP server. Administrative Applications slapadd (8C) - Describes command line options used to add entries to a slapd database. slapcat (8C) - Describes command line options used to generate an LDIF file from a slapd database. slapindex (8C) - Describes command line options used to regenerate an index based upon the contents of a slapd database. slappasswd (8C) - Describes command line options used to generate user passwords for LDAP directories. Configuration Files ldap.conf (5) - The manual page for the ldap.conf file describes the format and options available within the configuration file for LDAP clients. slapd-config (5) - Describes the format and options available within the /etc/openldap/slapd.d configuration directory. Other Resources OpenLDAP and Mozilla NSS Compatibility Layer Implementation details of NSS database backwards compatibility. How do I use TLS/SSL? Information on how to configure OpenLDAP to use OpenSSL.
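As a practical illustration of the TLS directives described in Section 9.2.3.6, the following sketch applies them to the cn=config database with ldapmodify and then enables the dedicated LDAPS port. The certificate paths, the use of ldapi:/// with SASL EXTERNAL authentication, and the final SLAPD_URLS value are assumptions - adapt them to your environment before use.
# Apply the server-side TLS directives to cn=config (paths are placeholders).
ldapmodify -Y EXTERNAL -H ldapi:/// <<'EOF'
dn: cn=config
changetype: modify
replace: olcTLSCACertificateFile
olcTLSCACertificateFile: /etc/openldap/certs/ca.crt
-
replace: olcTLSCertificateFile
olcTLSCertificateFile: /etc/openldap/certs/server.crt
-
replace: olcTLSCertificateKeyFile
olcTLSCertificateKeyFile: /etc/openldap/certs/server.key
EOF
# Enable the dedicated LDAPS port (636) as described in Section 9.2.3.6, then restart slapd.
sed -i 's|^SLAPD_URLS=.*|SLAPD_URLS="ldapi:/// ldap:/// ldaps:///"|' /etc/sysconfig/slapd
systemctl restart slapd.service
# Client side: trust the same CA so that ldaps:// connections can be verified.
echo 'TLS_CACERT /etc/openldap/certs/ca.crt' >> /etc/openldap/ldap.conf
Afterwards, a test query such as ldapsearch -x -H ldaps://server.example.com -b "dc=example,dc=com" can be used to confirm that the secure port answers.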
|
[
"[ id ] dn: distinguished_name attribute_type : attribute_value ... attribute_type : attribute_value ... ...",
"install package ...",
"~]# yum install openldap openldap-clients openldap-servers",
"~]# chown -R ldap:ldap /var/lib/ldap",
"~]# systemctl stop slapd.service",
"~]# slaptest -f /etc/openldap/slapd.conf -F /etc/openldap/slapd.d/",
"olcAllows : feature ...",
"olcAllows: bind_v2 update_anon",
"olcConnMaxPending : number",
"olcConnMaxPending: 100",
"olcConnMaxPendingAuth : number",
"olcConnMaxPendingAuth: 1000",
"olcDisallows : feature ...",
"olcDisallows: bind_anon",
"olcIdleTimeout : number",
"olcIdleTimeout: 180",
"olcLogFile : file_name",
"olcLogFile: /var/log/slapd.log",
"olcReferral : URL",
"olcReferral: ldap://root.openldap.org",
"olcWriteTimeout",
"olcWriteTimeout: 180",
"man slapd-hdb",
"olcReadOnly : boolean",
"olcReadOnly: TRUE",
"olcRootDN : distinguished_name",
"olcRootDN: cn=root,dn=example,dn=com",
"olcRootPW : password",
"~]USD slappaswd New password: Re-enter new password: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD",
"olcRootPW: {SSHA}WczWsyPEnMchFf1GRTweq2q7XJcvmSxD",
"olcSuffix : domain_name",
"olcSuffix: dc=example,dc=com",
"~]# systemctl start slapd.service",
"~]# systemctl enable slapd.service ln -s '/usr/lib/systemd/system/slapd.service' '/etc/systemd/system/multi-user.target.wants/slapd.service'",
"~]# systemctl stop slapd.service",
"~]# systemctl disable slapd.service rm '/etc/systemd/system/multi-user.target.wants/slapd.service'",
"~]# systemctl restart slapd.service",
"~]USD systemctl is-active slapd.service active",
"~]# yum install openldap openldap-clients nss-pam-ldapd",
"~]# yum install migrationtools",
"Default DNS domain USDDEFAULT_MAIL_DOMAIN = \"example.com\"; Default base USDDEFAULT_BASE = \"dc=example,dc=com\";",
"~]# export DEFAULT_BASE=\"dc=example,dc=com\" /usr/share/migrationtools/migrate_all_online.sh"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system-level_authentication_guide/openldap
|
Chapter 5. Management of managers using the Ceph Orchestrator
|
Chapter 5. Management of managers using the Ceph Orchestrator As a storage administrator, you can use the Ceph Orchestrator to deploy additional manager daemons. Cephadm automatically installs a manager daemon on the bootstrap node during the bootstrapping process. In general, you should set up a Ceph Manager on each of the hosts running the Ceph Monitor daemon to achieve the same level of availability. By default, whichever ceph-mgr instance comes up first is made active by the Ceph Monitors, and the others are standby managers. There is no requirement that there should be a quorum among the ceph-mgr daemons. If the active daemon fails to send a beacon to the monitors for more than the mon mgr beacon grace , then it is replaced by a standby. If you want to pre-empt failover, you can explicitly mark a ceph-mgr daemon as failed with the ceph mgr fail MANAGER_NAME command. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. 5.1. Deploying the manager daemons using the Ceph Orchestrator The Ceph Orchestrator deploys two Manager daemons by default. You can deploy additional manager daemons using the placement specification in the command line interface. To deploy a different number of Manager daemons, specify a different number. If you do not specify the hosts where the Manager daemons should be deployed, the Ceph Orchestrator randomly selects the hosts and deploys the Manager daemons to them. Note Ensure that your deployment has at least three Ceph Managers. Prerequisites A running Red Hat Ceph Storage cluster. Hosts are added to the cluster. Procedure Log into the Cephadm shell: Example You can deploy manager daemons in two different ways: Method 1 Deploy manager daemons using the placement specification on a specific set of hosts: Note Red Hat recommends that you use the --placement option to deploy on specific hosts. Syntax Example Method 2 Deploy manager daemons randomly on the hosts in the storage cluster: Syntax Example Verification List the service: Example List the hosts, daemons, and processes: Syntax Example 5.2. Removing the manager daemons using the Ceph Orchestrator To remove the manager daemons from the host, you can just redeploy the daemons on other hosts. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to all the nodes. Hosts are added to the cluster. At least one manager daemon deployed on the hosts. Procedure Log into the Cephadm shell: Example Run the ceph orch apply command to redeploy the required manager daemons: Syntax If you want to remove manager daemons from host02 , then you can redeploy the manager daemons on other hosts. Example Verification List the hosts, daemons, and processes: Syntax Example Additional Resources See the Deploying the manager daemons using the Ceph Orchestrator section in the Red Hat Ceph Storage Operations Guide for more information. 5.3. Using the Ceph Manager modules Use the ceph mgr module ls command to see the available modules and the modules that are presently enabled. Enable or disable modules with the ceph mgr module enable MODULE command or the ceph mgr module disable MODULE command, respectively. If a module is enabled, then the active ceph-mgr daemon loads and executes it. In the case of modules that provide a service, such as an HTTP server, the module might publish its address when it is loaded. To see the addresses of such modules, run the ceph mgr services command.
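To make the workflow concrete, a minimal module management session might look like the following sketch. It is run from within the Cephadm shell, and the dashboard module is used purely as an example.
ceph mgr module ls                   # show always-on, enabled, and available modules
ceph mgr module enable dashboard     # the active ceph-mgr loads and runs the module
ceph mgr services                    # print the address a service module publishes
ceph mgr module disable dashboard    # unload the module again if it is not needed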
Some modules might also implement a special standby mode which runs on standby ceph-mgr daemons as well as the active daemon. This enables modules that provide services to redirect their clients to the active daemon, if the client tries to connect to a standby. Following is an example to enable the dashboard module: The first time the cluster starts, it uses the mgr_initial_modules setting to override which modules to enable. However, this setting is ignored through the rest of the lifetime of the cluster: only use it for bootstrapping. For example, before starting your monitor daemons for the first time, you might add a section like this to your ceph.conf file: Where a module implements command line hooks, the commands are accessible as ordinary Ceph commands and Ceph automatically incorporates module commands into the standard CLI interface and routes them appropriately to the module: You can use the following configuration parameters with the above command: Table 5.1. Configuration parameters Configuration Description Type Default mgr module path Path to load modules from. String "<library dir>/mgr" mgr data Path to load daemon data (such as keyring) String "/var/lib/ceph/mgr/USDcluster-USDid" mgr tick period How many seconds between manager beacons to monitors, and other periodic checks. Integer 5 mon mgr beacon grace How long after last beacon should a manager be considered failed. Integer 30 5.4. Using the Ceph Manager balancer module The balancer is a module for Ceph Manager ( ceph-mgr ) that optimizes the placement of placement groups (PGs) across OSDs in order to achieve a balanced distribution, either automatically or in a supervised fashion. Currently the balancer module cannot be disabled. It can only be turned off to customize the configuration. Modes There are currently two supported balancer modes: crush-compat : The CRUSH compat mode uses the compat weight-set feature, introduced in Ceph Luminous, to manage an alternative set of weights for devices in the CRUSH hierarchy. The normal weights should remain set to the size of the device to reflect the target amount of data that you want to store on the device. The balancer then optimizes the weight-set values, adjusting them up or down in small increments in order to achieve a distribution that matches the target distribution as closely as possible. Because PG placement is a pseudorandom process, there is a natural amount of variation in the placement; by optimizing the weights, the balancer counteracts that natural variation. This mode is fully backwards compatible with older clients. When an OSDMap and CRUSH map are shared with older clients, the balancer presents the optimized weights as the real weights. The primary restriction of this mode is that the balancer cannot handle multiple CRUSH hierarchies with different placement rules if the subtrees of the hierarchy share any OSDs. Because this configuration makes managing space utilization on the shared OSDs difficult, it is generally not recommended. As such, this restriction is normally not an issue. upmap : Starting with Luminous, the OSDMap can store explicit mappings for individual OSDs as exceptions to the normal CRUSH placement calculation. These upmap entries provide fine-grained control over the PG mapping. This CRUSH mode will optimize the placement of individual PGs in order to achieve a balanced distribution. In most cases, this distribution is "perfect", with an equal number of PGs on each OSD +/-1 PG, as they might not divide evenly.
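Before enabling either mode, it can be useful to see how evenly PGs are currently spread across OSDs. A minimal check, run from the Cephadm shell, might look like the following sketch; the Important note that follows still applies before the upmap mode can be used.
# The PGS and %USE columns show the per-OSD placement group count and utilization;
# the summary line reports the MIN/MAX variance and the standard deviation.
ceph osd df
# The tree form groups the same data by CRUSH hierarchy.
ceph osd df tree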
Important To allow use of this feature, you must tell the cluster that it only needs to support luminous or later clients with the following command: This command fails if any pre-luminous clients or daemons are connected to the monitors. Due to a known issue, kernel CephFS clients report themselves as jewel clients. To work around this issue, use the --yes-i-really-mean-it flag: You can check what client versions are in use with: Prerequisites A running Red Hat Ceph Storage cluster. Procedure Ensure the balancer module is enabled: Example Turn on the balancer module: Example The default mode is upmap . The mode can be changed with: Example or Example Status The current status of the balancer can be checked at any time with: Example Automatic balancing By default, when turning on the balancer module, automatic balancing is used: Example The balancer can be turned back off again with: Example This will use the crush-compat mode, which is backward compatible with older clients and will make small changes to the data distribution over time to ensure that OSDs are equally utilized. Throttling No adjustments will be made to the PG distribution if the cluster is degraded, for example, if an OSD has failed and the system has not yet healed itself. When the cluster is healthy, the balancer throttles its changes such that the percentage of PGs that are misplaced, or need to be moved, is below a threshold of 5% by default. This percentage can be adjusted using the target_max_misplaced_ratio setting. For example, to increase the threshold to 7%: Example For automatic balancing: Set the number of seconds to sleep in between runs of the automatic balancer: Example Set the time of day to begin automatic balancing in HHMM format: Example Set the time of day to finish automatic balancing in HHMM format: Example Restrict automatic balancing to this day of the week or later. Uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Restrict automatic balancing to this day of the week or earlier. This uses the same conventions as crontab, 0 is Sunday, 1 is Monday, and so on: Example Define the pool IDs to which the automatic balancing is limited. The default for this is an empty string, meaning all pools are balanced. The numeric pool IDs can be gotten with the ceph osd pool ls detail command: Example Supervised optimization The balancer operation is broken into a few distinct phases: Building a plan . Evaluating the quality of the data distribution, either for the current PG distribution, or the PG distribution that would result after executing a plan . Executing the plan .
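Taken together, a supervised balancing run might look like the following sketch; the plan name my-plan is arbitrary, and the individual commands are described in the steps that follow.
ceph balancer off                  # keep automatic balancing out of the way
ceph balancer eval                 # score the current distribution
ceph balancer optimize my-plan     # build a plan using the configured mode
ceph balancer show my-plan         # inspect the proposed changes
ceph balancer eval my-plan         # score the distribution the plan would produce
ceph balancer execute my-plan      # apply the plan only if the score improves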
To evaluate and score the current distribution: Example To evaluate the distribution for a single pool: Syntax Example To see greater detail for the evaluation: Example To generate a plan using the currently configured mode: Syntax Replace PLAN_NAME with a custom plan name. Example To see the contents of a plan: Syntax Example To discard old plans: Syntax Example To see currently recorded plans, use the status command: To calculate the quality of the distribution that would result after executing a plan: Syntax Example To execute the plan: Syntax Example Note Only execute the plan if it is expected to improve the distribution. After execution, the plan will be discarded. 5.5. Using the Ceph Manager alerts module You can use the Ceph Manager alerts module to send simple alert messages about the Red Hat Ceph Storage cluster's health by email. Note This module is not intended to be a robust monitoring solution. The fact that it is run as part of the Ceph cluster itself is fundamentally limiting in that a failure of the ceph-mgr daemon prevents alerts from being sent. This module can, however, be useful for standalone clusters running in environments where no existing monitoring infrastructure is available. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure Log into the Cephadm shell: Example Enable the alerts module: Example Ensure the alerts module is enabled: Example Configure the Simple Mail Transfer Protocol (SMTP): Syntax Example Optional: By default, the alerts module uses SSL and port 465. Syntax Example Important SSL is not supported in a Red Hat Ceph Storage 6 cluster. Do not set the smtp_ssl parameter while configuring alerts. Authenticate to the SMTP server: Syntax Example Optional: By default, the SMTP From name is Ceph . To change that, set the smtp_from_name parameter: Syntax Example Optional: By default, the alerts module checks the storage cluster's health every minute, and sends a message when there is a change in the cluster health status. To change the frequency, set the interval parameter: Syntax Example In this example, the interval is set to 5 minutes. Optional: Send an alert immediately: Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages. 5.6. Using the Ceph manager crash module Using the Ceph manager crash module, you can collect information about daemon crashdumps and store it in the Red Hat Ceph Storage cluster for further analysis. By default, daemon crashdumps are dumped in /var/lib/ceph/crash . You can configure it with the option crash dir . Crash directories are named by time, date, and a randomly-generated UUID, and contain a metadata file meta and a recent log file, with a crash_id that is the same. You can use ceph-crash.service to submit these crashes automatically and persist them in the Ceph Monitors. The ceph-crash.service watches the crashdump directory and uploads new crash dumps with ceph crash post . The RECENT_CRASH health message is one of the most common health messages in a Ceph cluster. This health message means that one or more Ceph daemons have crashed recently, and the crash has not yet been archived or acknowledged by the administrator. This might indicate a software bug, a hardware problem like a failing disk, or some other problem. The option mgr/crash/warn_recent_interval controls the time period of what recent means, which is two weeks by default.
You can disable the warnings by running the following command: Example The option mgr/crash/retain_interval controls the period for which you want to retain the crash reports before they are automatically purged. The default for this option is one year. Prerequisites A running Red Hat Ceph Storage cluster. Procedure Ensure the crash module is enabled: Example Save a crash dump: The metadata file is a JSON blob stored in the crash dir as meta . You can invoke the ceph command with the -i - option, which reads from stdin. Example List the timestamp or the UUID crash IDs for all the new and archived crash info: Example List the timestamp or the UUID crash IDs for all the new crash information: Example List the summary of saved crash information grouped by age: Example View the details of the saved crash: Syntax Example Remove saved crashes older than KEEP days: Here, KEEP must be an integer. Syntax Example Archive a crash report so that it is no longer considered for the RECENT_CRASH health check and does not appear in the crash ls-new output. It still appears in the crash ls output. Syntax Example Archive all crash reports: Example Remove the crash dump: Syntax Example Additional Resources See the Health messages of a Ceph cluster section in the Red Hat Ceph Storage Troubleshooting Guide for more information on Ceph health messages. 5.7. Telemetry module The telemetry module sends data about the storage cluster to help understand how Ceph is used and what problems are encountered during operations. The data is visualized on the public dashboard to view the summary statistics on how many clusters are reporting, their total capacity and OSD count, and version distribution trends. Channels The telemetry report is broken down into different channels, each with a different type of information. After the telemetry is enabled, you can turn on or turn off the individual channels. The following are the five different channels: basic - The default is on . This channel provides the basic information about the clusters, which includes the following information: The capacity of the cluster. The number of monitors, managers, OSDs, MDSs, object gateways, or other daemons. The software version that is currently being used. The number and types of RADOS pools and Ceph File Systems. The names of configuration options that are changed from their default (but not their values). crash - The default is on . This channel provides information about the daemon crashes, which includes the following information: The type of daemon. The version of the daemon. The operating system, the OS distribution, and the kernel version. The stack trace that identifies where in the Ceph code the crash occurred. device - The default is on . This channel provides information about the device metrics, which includes anonymized SMART metrics. ident - The default is off . This channel provides the user-provided identifying information about the cluster such as the cluster description and contact email address. perf - The default is off . This channel provides the various performance metrics of the cluster, which can be used for the following: Reveal overall cluster health. Identify workload patterns. Troubleshoot issues with latency, throttling, memory management, and other similar issues. Monitor cluster performance by daemon. The data that is reported does not contain any sensitive data such as pool names, object names, object contents, hostnames, or device serial numbers.
It contains counters and statistics on how the cluster is deployed, Ceph version, host distribution, and other parameters that help the project to gain a better understanding of the way Ceph is used. Data is secure and is sent to https://telemetry.ceph.com . Enable telemetry Before enabling channels, ensure that the telemetry is on . Enable telemetry: Enable and disable channels Enable or disable individual channels: Enable or disable multiple channels: Enable or disable all channels together: Sample report To review the data reported at any time, generate a sample report: If telemetry is off , preview the sample report: It takes longer to generate a sample report for storage clusters with hundreds of OSDs or more. To protect your privacy, device reports are generated separately, and data such as hostname and device serial number are anonymized. The device telemetry is sent to a different endpoint and does not associate the device data with a particular cluster. To see the device report, run the following command: If telemetry is off , preview the sample device report: Get a single output of both the reports with telemetry on : Get a single output of both the reports with telemetry off : Generate a sample report by channel: Syntax Generate a preview of the sample report by channel: Syntax Collections Collections are different aspects of data that is collected within a channel. List the collections: See the difference between the collections that you are enrolled in, and the new, available collections: Enroll to the most recent collections: Syntax Interval The module compiles and sends a new report every 24 hours by default. Adjust the interval: Syntax Example In the example, the report is generated every three days (72 hours). Status View the current configuration: Manually sending telemetry Send telemetry data on an ad hoc basis: If telemetry is disabled, add --license sharing-1-0 to the ceph telemetry send command. Sending telemetry through a proxy If the cluster cannot connect directly to the configured telemetry endpoint, you can configure a HTTP/HTTPs proxy server: Syntax Example You can include the user pass in the command: Example Contact and description Optional: Add a contact and description to the report: Syntax Example If ident flag is enabled, its details are not displayed in the leaderboard. Leaderboard Participate in a leaderboard on the public dashboard: Example The leaderboard displays basic information about the storage cluster. This board includes the total storage capacity and the number of OSDs. Disable telemetry Disable telemetry any time: Example
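As a quick reference, a minimal telemetry enablement session might look like the following sketch; the interval value and the choice of the perf channel are examples only, and recent releases may additionally prompt you to accept the data sharing license.
ceph telemetry on                               # opt in; basic, crash, and device are on by default
ceph telemetry enable channel perf              # opt in to an additional channel
ceph config set mgr mgr/telemetry/interval 72   # report every 72 hours instead of every 24
ceph telemetry status                           # confirm the current configuration
ceph telemetry show                             # inspect the report that will be sent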
|
[
"cephadm shell",
"ceph orch apply mgr --placement=\" HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"",
"ceph orch apply mgr --placement=\"host01 host02 host03\"",
"ceph orch apply mgr NUMBER_OF_DAEMONS",
"ceph orch apply mgr 3",
"ceph orch ls",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mgr",
"cephadm shell",
"ceph orch apply mgr \" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_3 \"",
"ceph orch apply mgr \"2 host01 host03\"",
"ceph orch ps --daemon_type= DAEMON_NAME",
"ceph orch ps --daemon_type=mgr",
"ceph mgr module enable dashboard ceph mgr module ls MODULE balancer on (always on) crash on (always on) devicehealth on (always on) orchestrator on (always on) pg_autoscaler on (always on) progress on (always on) rbd_support on (always on) status on (always on) telemetry on (always on) volumes on (always on) cephadm on dashboard on iostat on nfs on prometheus on restful on alerts - diskprediction_local - influx - insights - k8sevents - localpool - mds_autoscaler - mirroring - osd_perf_query - osd_support - rgw - rook - selftest - snap_schedule - stats - telegraf - test_orchestrator - zabbix - ceph mgr services { \"dashboard\": \"http://myserver.com:7789/\", \"restful\": \"https://myserver.com:8789/\" }",
"[mon] mgr initial modules = dashboard balancer",
"ceph <command | help>",
"ceph osd set-require-min-compat-client luminous",
"ceph osd set-require-min-compat-client luminous --yes-i-really-mean-it",
"ceph features",
"ceph mgr module enable balancer",
"ceph balancer on",
"ceph balancer mode crush-compat",
"ceph balancer mode upmap",
"ceph balancer status",
"ceph balancer on",
"ceph balancer off",
"ceph config-key set mgr target_max_misplaced_ratio .07",
"ceph balancer on",
"ceph balancer off",
"ceph config-key set mgr target_max_misplaced_ratio .07",
"ceph config set mgr mgr/balancer/sleep_interval 60",
"ceph config set mgr mgr/balancer/begin_time 0000",
"ceph config set mgr mgr/balancer/end_time 2359",
"ceph config set mgr mgr/balancer/begin_weekday 0",
"ceph config set mgr mgr/balancer/end_weekday 6",
"ceph config set mgr mgr/balancer/pool_ids 1,2,3",
"ceph balancer eval",
"ceph balancer eval POOL_NAME",
"ceph balancer eval rbd",
"ceph balancer eval-verbose",
"ceph balancer optimize PLAN_NAME",
"ceph balancer optimize rbd_123",
"ceph balancer show PLAN_NAME",
"ceph balancer show rbd_123",
"ceph balancer rm PLAN_NAME",
"ceph balancer rm rbd_123",
"ceph balancer status",
"ceph balancer eval PLAN_NAME",
"ceph balancer eval rbd_123",
"ceph balancer execute PLAN_NAME",
"ceph balancer execute rbd_123",
"cephadm shell",
"ceph mgr module enable alerts",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", \"status\", \"telemetry\", \"volumes\" ], \"enabled_modules\": [ \"alerts\", \"cephadm\", \"dashboard\", \"iostat\", \"nfs\", \"prometheus\", \"restful\" ]",
"ceph config set mgr mgr/alerts/smtp_host SMTP_SERVER ceph config set mgr mgr/alerts/smtp_destination RECEIVER_EMAIL_ADDRESS ceph config set mgr mgr/alerts/smtp_sender SENDER_EMAIL_ADDRESS",
"ceph config set mgr mgr/alerts/smtp_host smtp.example.com ceph config set mgr mgr/alerts/smtp_destination [email protected] ceph config set mgr mgr/alerts/smtp_sender [email protected]",
"ceph config set mgr mgr/alerts/smtp_port PORT_NUMBER",
"ceph config set mgr mgr/alerts/smtp_port 587",
"ceph config set mgr mgr/alerts/smtp_user USERNAME ceph config set mgr mgr/alerts/smtp_password PASSWORD",
"ceph config set mgr mgr/alerts/smtp_user admin1234 ceph config set mgr mgr/alerts/smtp_password admin1234",
"ceph config set mgr mgr/alerts/smtp_from_name CLUSTER_NAME",
"ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Test'",
"ceph config set mgr mgr/alerts/interval INTERVAL",
"ceph config set mgr mgr/alerts/interval \"5m\"",
"ceph alerts send",
"ceph config set mgr/crash/warn_recent_interval 0",
"ceph mgr module ls | more { \"always_on_modules\": [ \"balancer\", \"crash\", \"devicehealth\", \"orchestrator_cli\", \"progress\", \"rbd_support\", \"status\", \"volumes\" ], \"enabled_modules\": [ \"dashboard\", \"pg_autoscaler\", \"prometheus\" ]",
"ceph crash post -i meta",
"ceph crash ls",
"ceph crash ls-new",
"ceph crash ls-new",
"ceph crash stat 8 crashes recorded 8 older than 1 days old: 2022-05-20T08:30:14.533316Z_4ea88673-8db6-4959-a8c6-0eea22d305c2 2022-05-20T08:30:14.590789Z_30a8bb92-2147-4e0f-a58b-a12c2c73d4f5 2022-05-20T08:34:42.278648Z_6a91a778-bce6-4ef3-a3fb-84c4276c8297 2022-05-20T08:34:42.801268Z_e5f25c74-c381-46b1-bee3-63d891f9fc2d 2022-05-20T08:34:42.803141Z_96adfc59-be3a-4a38-9981-e71ad3d55e47 2022-05-20T08:34:42.830416Z_e45ed474-550c-44b3-b9bb-283e3f4cc1fe 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d 2022-05-24T19:58:44.315282Z_1847afbc-f8a9-45da-94e8-5aef0738954e",
"ceph crash info CRASH_ID",
"ceph crash info 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d { \"assert_condition\": \"session_map.sessions.empty()\", \"assert_file\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc\", \"assert_func\": \"virtual Monitor::~Monitor()\", \"assert_line\": 287, \"assert_msg\": \"/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: In function 'virtual Monitor::~Monitor()' thread 7f67a1aeb700 time 2022-05-24T19:58:42.545485+0000\\n/builddir/build/BUILD/ceph-16.1.0-486-g324d7073/src/mon/Monitor.cc: 287: FAILED ceph_assert(session_map.sessions.empty())\\n\", \"assert_thread_name\": \"ceph-mon\", \"backtrace\": [ \"/lib64/libpthread.so.0(+0x12b30) [0x7f679678bb30]\", \"gsignal()\", \"abort()\", \"(ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x1a9) [0x7f6798c8d37b]\", \"/usr/lib64/ceph/libceph-common.so.2(+0x276544) [0x7f6798c8d544]\", \"(Monitor::~Monitor()+0xe30) [0x561152ed3c80]\", \"(Monitor::~Monitor()+0xd) [0x561152ed3cdd]\", \"main()\", \"__libc_start_main()\", \"_start()\" ], \"ceph_version\": \"16.2.8-65.el8cp\", \"crash_id\": \"2022-07-06T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d\", \"entity_name\": \"mon.ceph-adm4\", \"os_id\": \"rhel\", \"os_name\": \"Red Hat Enterprise Linux\", \"os_version\": \"8.5 (Ootpa)\", \"os_version_id\": \"8.5\", \"process_name\": \"ceph-mon\", \"stack_sig\": \"957c21d558d0cba4cee9e8aaf9227b3b1b09738b8a4d2c9f4dc26d9233b0d511\", \"timestamp\": \"2022-07-06T19:58:42.549073Z\", \"utsname_hostname\": \"host02\", \"utsname_machine\": \"x86_64\", \"utsname_release\": \"4.18.0-240.15.1.el8_3.x86_64\", \"utsname_sysname\": \"Linux\", \"utsname_version\": \"#1 SMP Wed Jul 06 03:12:15 EDT 2022\" }",
"ceph crash prune KEEP",
"ceph crash prune 60",
"ceph crash archive CRASH_ID",
"ceph crash archive 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph crash archive-all",
"ceph crash rm CRASH_ID",
"ceph crash rm 2022-05-24T19:58:42.549073Z_b2382865-ea89-4be2-b46f-9a59af7b7a2d",
"ceph telemetry on",
"ceph telemetry enable channel basic ceph telemetry enable channel crash ceph telemetry enable channel device ceph telemetry enable channel ident ceph telemetry enable channel perf ceph telemetry disable channel basic ceph telemetry disable channel crash ceph telemetry disable channel device ceph telemetry disable channel ident ceph telemetry disable channel perf",
"ceph telemetry enable channel basic crash device ident perf ceph telemetry disable channel basic crash device ident perf",
"ceph telemetry enable channel all ceph telemetry disable channel all",
"ceph telemetry show",
"ceph telemetry preview",
"ceph telemetry show-device",
"ceph telemetry preview-device",
"ceph telemetry show-all",
"ceph telemetry preview-all",
"ceph telemetry show CHANNEL_NAME",
"ceph telemetry preview CHANNEL_NAME",
"ceph telemetry collection ls",
"ceph telemetry diff",
"ceph telemetry on ceph telemetry enable channel CHANNEL_NAME",
"ceph config set mgr mgr/telemetry/interval INTERVAL",
"ceph config set mgr mgr/telemetry/interval 72",
"ceph telemetry status",
"ceph telemetry send",
"ceph config set mgr mgr/telemetry/proxy PROXY_URL",
"ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080",
"ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080",
"ceph config set mgr mgr/telemetry/contact '_CONTACT_NAME_' ceph config set mgr mgr/telemetry/description '_DESCRIPTION_' ceph config set mgr mgr/telemetry/channel_ident true",
"ceph config set mgr mgr/telemetry/contact 'John Doe <[email protected]>' ceph config set mgr mgr/telemetry/description 'My first Ceph cluster' ceph config set mgr mgr/telemetry/channel_ident true",
"ceph config set mgr mgr/telemetry/leaderboard true",
"ceph telemetry off"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/operations_guide/management-of-managers-using-the-ceph-orchestrator
|
Chapter 4. Gathering Information About the Environment
|
Chapter 4. Gathering Information About the Environment 4.1. Monitoring and observability This chapter provides a number of ways to monitor and obtain metrics and logs from your Red Hat Virtualization system. These methods include: Using Data Warehouse and Grafana to monitor RHV Sending metrics to a remote instance of Elasticsearch Deploying Insights in Red Hat Virtualization Manager 4.1.1. Using Data Warehouse and Grafana to monitor RHV 4.1.1.1. Grafana overview Grafana is a web-based UI tool used to display reports based on data collected from the oVirt Data Warehouse PostgreSQL database under the database name ovirt_engine_history . For details of the available report dashboards, see Grafana dashboards and Grafana website - dashboards . Data from the Manager is collected every minute and aggregated in hourly and daily aggregations. The data is retained according to the scale setting defined in the Data Warehouse configuration during engine-setup (Basic or Full scale): Basic (default) - samples data saved for 24 hours, hourly data saved for 1 month, daily data - no daily aggregations saved. Full (recommended)- samples data saved for 24 hours, hourly data saved for 2 months, daily aggregations saved for 5 years. Full sample scaling may require migrating the Data Warehouse to a separate virtual machine. For Data Warehouse scaling instructions, see Changing the Data Warehouse Sampling Scale . For instructions on migrating the Data Warehouse to or installing on a separate machine, see Migrating Data Warehouse to a Separate Machine and Installing and Configuring Data Warehouse on a Separate Machine . Note Red Hat only supports installing the Data Warehouse database, the Data Warehouse service and Grafana all on the same machine as each other, even though you can install each of these components on separate machines from each other. 4.1.1.2. Installation Grafana integration is enabled and installed by default when you run Red Hat Virtualization Manager engine-setup in a Stand Alone Manager installation, and in the Self-Hosted engine installation. Note Grafana is not installed by default and you may need to install it manually under some scenarios such as performing an upgrade from an earlier version of RHV, restoring a backup, or when the Data Warehouse is migrated to a separate machine. To enable Grafana integration manually: Put the environment in global maintenance mode: # hosted-engine --set-maintenance --mode=global Log in to the machine where you want to install Grafana. This should be the same machine where the Data Warehouse is configured; usually the Manager machine. Run the engine-setup command as follows: Answer Yes to install Grafana on this machine: Disable global maintenance mode: # hosted-engine --set-maintenance --mode=none To access the Grafana dashboards: Go to https://<engine FQDN or IP address>/ovirt-engine-grafana or Click Monitoring Portal in the web administration welcome page for the Administration Portal . 4.1.1.2.1. Configuring Grafana for Single Sign-on The Manager engine-setup automatically configures Grafana to allow existing users on the Manager to log in with SSO from the Administration Portal, but does not automatically create users. You need to create new users ( Invite in the Grafana UI), confirm the new user, and then they can log in. Set an email address for the user in the Manager, if it is not already defined. Log in to Grafana with an existing admin user (the initially configured admin). Go to Configuration Users and select Invite . 
Input the email address and name, and select a Role. Send the invitation using one of these options: Select Send invite mail and click Submit . For this option, you need an operational local mail server configured on the Grafana machine. or Select Pending Invites Locate the entry you want Select Copy invite Copy and use this link to create the account by pasting it directly into a browser address bar, or by sending it to another user. If you use the Pending Invites option, no email is sent, and the email address does not really need to exist - any valid looking address will work, as long as it's configured as the email address of a Manager user. To log in with this account: Log in to the Red Hat Virtualization web administration welcome page using the account that has this email address. Select Monitoring Portal to open the Grafana dashboard. Select Sign in with oVirt Engine Auth . 4.1.1.3. Built-in Grafana dashboards The following dashboards are available in the initial Grafana setup to report Data Center, Cluster, Host, and Virtual Machine data: Table 4.1. Built-in Grafana dashboards Dashboard type Content Executive dashboards System dashboard - resource usage and up-time for hosts and storage domains in the system, according to the latest configurations. Data Center dashboard - resource usage, peaks, and up-time for clusters, hosts, and storage domains in a selected data center, according to the latest configurations. Cluster dashboard - resource usage, peaks, over-commit, and up-time for hosts and virtual machines in a selected cluster, according to the latest configurations. Host dashboard - latest and historical configuration details and resource usage metrics of a selected host over a selected period. Virtual Machine dashboard - latest and historical configuration details and resource usage metrics of a selected virtual machine over a selected period. Executive dashboard - user resource usage and number of operating systems for hosts and virtual machines in selected clusters over a selected period. Inventory dashboards Inventory dashboard - number of hosts, virtual machines, and running virtual machines, resources usage and over-commit rates for selected data centers, according to the latest configurations. Hosts Inventory dashboard - FQDN, VDSM version, operating system, CPU model, CPU cores, memory size, create date, delete date, and hardware details for selected hosts, according to the latest configurations. Storage Domains Inventory dashboard - domain type, storage type, available disk size, used disk size, total disk size, creation date, and delete date for selected storage domains over a selected period. Virtual Machines Inventory dashboard - template name, operating system, CPU cores, memory size, create date, and delete date for selected virtual machines, according to the latest configurations. Service Level dashboards Uptime dashboard - planned downtime, unplanned downtime, and total time for the hosts, high availability virtual machines, and all virtual machines in selected clusters in a selected period. Hosts Uptime dashboard - the uptime, planned downtime, and unplanned downtime for selected hosts in a selected period. Virtual Machines Uptime dashboard - the uptime, planned downtime, and unplanned downtime for selected virtual machines in a selected period. Cluster Quality of Service Hosts dashboard - the time selected hosts have performed above and below the CPU and memory threshold in a selected period. 
Virtual Machines dashboard - the time selected virtual machines have performed above and below the CPU and memory threshold in a selected period. Trend dashboards Trend dashboard - usage rates for the 5 most and least utilized virtual machines and hosts by memory and by CPU in selected clusters over a selected period. Hosts Trend dashboard - resource usage (number of virtual machines, CPU, memory, and network Tx/Rx) for selected hosts over a selected period. Virtual Machines Trend dashboard -resource usage (CPU, memory, network Tx/Rx, disk I/O) for selected virtual machines over a selected period. Hosts Resource Usage dashboard - daily and hourly resource usage (number of virtual machines, CPU, memory, network Tx/Rx) for selected hosts in a selected period. Virtual Machines Resource Usage dashboard - daily and hourly resource usage (CPU, memory, network Tx/Rx, disk I/O) for selected virtual machines in a selected period. Note The Grafana dashboards includes direct links to the Red Hat Virtualization Administration Portal, allowing you to quickly view additional details for your clusters, hosts, and virtual machines. 4.1.1.4. Customized Grafana dashboards You can create customized dashboards or copy and modify existing dashboards according to your reporting needs. Note Built-in dashboards cannot be customized. 4.1.2. Sending metrics and logs to a remote instance of Elasticsearch Note Red Hat does not own or maintain Elasticsearch. You need to have a working familiarity with Elasticsearch setup and maintenance to deploy this option. You can configure the Red Hat Virtualization Manager and hosts to send metrics data and logs to your existing Elasticsearch instance. To do this, run the Ansible role that configures collectd and rsyslog on the Manager and all hosts to collect engine.log , vdsm.log , and collectd metrics, and send them to your Elasticsearch instance. For more information, including a full list with explanations of available Metrics Schema, see Sending RHV monitoring data to a remote Elasticsearch instance . 4.1.2.1. Installing collectd and rsyslog Deploy collectd and rsyslog on the hosts to collect logs and metrics. Note You do not need to repeat this procedure for new hosts. Every new host that is added is automatically configured by the Manager to send the data to Elasticsearch during host-deploy. Procedure Log in to the Manager machine using SSH. Copy /etc/ovirt-engine-metrics/config.yml.example to create /etc/ovirt-engine-metrics/config.yml.d/config.yml : # cp /etc/ovirt-engine-metrics/config.yml.example /etc/ovirt-engine-metrics/config.yml.d/config.yml Edit the ovirt_env_name and elasticsearch_host parameters in config.yml and save the file. The following additional parameters can be added to the file: When using certificates, set use_omelasticsearch_cert to true . To disable logs or metrics, use the rsyslog_elasticsearch_usehttps_metrics and/or rsyslog_elasticsearch_usehttps_logs parameters. Deploy collectd and rsyslog on the hosts: # /usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh The configure_ovirt_machines_for_metrics.sh script runs an Ansible role that includes linux-system-roles (see Administration and configuration tasks using System Roles in RHEL ) and uses it to deploy and configure rsyslog on the host. rsyslog collects metrics from collectd and sends them to Elasticsearch. 4.1.2.2. Logging schema and analyzing logs Use the Discover page to interactively explore data collected from RHV. 
Each set of results that is collected is referred to as a document . Documents are collected from the following log files: engine.log - contains all oVirt Engine UI crashes, Active Directory lookups, database issues, and other events. vdsm.log - the log file for the VDSM, the Manager's agent on the virtualization hosts, and contains host-related events. The following fields are available: parameter description _id The unique ID of the document _index The ID of the index to which the document belongs. The index with the project.ovirt-logs prefix is the only relevant index in the Discover page. hostname For the engine.log this is the hostname of the Manager. For the vdsm.log this is the hostname of the host. level The log record severity: TRACE, DEBUG, INFO, WARN, ERROR, FATAL. message The body of the document message. ovirt.class The name of a Java class that produced this log. ovirt.correlationid For the engine.log only. This ID is used to correlate the multiple parts of a single task performed by the Manager. ovirt.thread The name of a Java thread inside which the log record was produced. tag Predefined sets of metadata that can be used to filter the data. @timestamp The time that the record was issued. _score N/A _type N/A ipaddr4 The machine's IP address. ovirt.cluster_name For the vdsm.log only. The name of the cluster to which the host belongs. ovirt.engine_fqdn The Manager's FQDN. ovirt.module_lineno The file and line number within the file that ran the command defined in ovirt.class . 4.1.3. Deploying Insights To deploy Red Hat Insights on an existing Red Hat Enterprise Linux (RHEL) system with Red Hat Virtualization Manager installed, complete these tasks: Register the system to the Red Hat Insights application. Enable data collection from the Red Hat Virtualization environment. 4.1.3.1. Register the system to Red Hat Insights Register the system to communicate with the Red Hat Insights service and to view results displayed in the Red Hat Insights console. 4.1.3.2. Enable data collection from the Red Hat Virtualization environment Modify the /etc/ovirt-engine/rhv-log-collector-analyzer/rhv-log-collector-analyzer.conf file to include the following line: 4.1.3.3. View your Insights results in the Insights Console System and infrastructure results can be viewed in the Insights console . The Overview tab provides a dashboard view of current risks to your infrastructure. From this starting point, you can investigate how a specific rule is affecting your system, or take a system-based approach to view all the rule matches that pose a risk to the system. Procedure Select Rule hits by severity to view rules by the Total Risk they pose to your infrastructure ( Critical , Important , Moderate , or Low ). Or Select Rule hits by category to see the type of risk they pose to your infrastructure ( Availability , Stability , Performance , or Security ). Search for a specific rule by name, or scroll through the list of rules to see high-level information about risk, systems exposed, and availability of an Ansible Playbook to automate remediation. Click a rule to see a description of the rule, learn more from relevant knowledge base articles, and view a list of systems that are affected. Click a system to see specific information about detected issues and steps to resolve the issue.
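For reference, the two Insights tasks above can be combined into a short shell sketch run on the Manager machine; the echo line assumes the upload-json setting is not already present in the configuration file.
# Register the system with Red Hat Insights.
insights-client --register
# Enable data collection from the Red Hat Virtualization environment.
echo 'upload-json=True' >> /etc/ovirt-engine/rhv-log-collector-analyzer/rhv-log-collector-analyzer.conf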
|
[
"hosted-engine --set-maintenance --mode=global",
"engine-setup --reconfigure-optional-components",
"Configure Grafana on this host (Yes, No) [Yes]:",
"hosted-engine --set-maintenance --mode=none",
"cp /etc/ovirt-engine-metrics/config.yml.example /etc/ovirt-engine-metrics/config.yml.d/config.yml",
"use_omelasticsearch_cert: false rsyslog_elasticsearch_usehttps_metrics: !!str off rsyslog_elasticsearch_usehttps_logs: !!str off",
"/usr/share/ovirt-engine-metrics/setup/ansible/configure_ovirt_machines_for_metrics.sh",
"insights-client --register",
"upload-json=True"
] |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/administration_guide/part-gathering_information_about_the_environment
|
15.6. Displaying Guest Details
|
15.6. Displaying Guest Details You can use the Virtual Machine Monitor to view activity information for any virtual machines on your system. To view a virtual system's details: In the Virtual Machine Manager main window, highlight the virtual machine that you want to view. Figure 15.10. Selecting a virtual machine to display From the Virtual Machine Manager Edit menu, select Virtual Machine Details . Figure 15.11. Displaying the virtual machine details When the Virtual Machine details window opens, there may be a console displayed. Should this happen, click View and then select Details . The Overview window opens first by default. To go back to this window, select Overview from the navigation pane on the left hand side. The Overview view shows a summary of configuration details for the guest. Figure 15.12. Displaying guest details overview Select Performance from the navigation pane on the left hand side. The Performance view shows a summary of guest performance, including CPU and Memory usage. Figure 15.13. Displaying guest performance details Select Processor from the navigation pane on the left hand side. The Processor view allows you to view the current processor allocation, as well as to change it. It is also possible to change the number of virtual CPUs (vCPUs) while the virtual machine is running, which is referred to as hot plugging and hot unplugging . Important The hot unplugging feature is only available as a Technology Preview. Therefore, it is not supported and not recommended for use in high-value deployments. Figure 15.14. Processor allocation panel Select Memory from the navigation pane on the left hand side. The Memory view allows you to view or change the current memory allocation. Figure 15.15. Displaying memory allocation Each virtual disk attached to the virtual machine is displayed in the navigation pane. Click on a virtual disk to modify or remove it. Figure 15.16. Displaying disk configuration Each virtual network interface attached to the virtual machine is displayed in the navigation pane. Click on a virtual network interface to modify or remove it. Figure 15.17. Displaying network configuration
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-managing_guests_with_the_virtual_machine_manager_virt_manager-displaying_guest_details
|
3.9. Other XFS File System Utilities
|
3.9. Other XFS File System Utilities Red Hat Enterprise Linux 7 also features other utilities for managing XFS file systems: xfs_fsr Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr defragments all regular files in all mounted XFS file systems. This utility also allows users to suspend a defragmentation at a specified time and resume from where it left off later. In addition, xfs_fsr also allows the defragmentation of only one file, as in xfs_fsr /path/to/file . Red Hat advises not to periodically defrag an entire file system because XFS avoids fragmentation by default. System wide defragmentation could cause the side effect of fragmentation in free space. xfs_bmap Prints the map of disk blocks used by files in an XFS filesystem. This map lists each extent used by a specified file, as well as regions in the file with no corresponding blocks (that is, holes). xfs_info Prints XFS file system information. xfs_admin Changes the parameters of an XFS file system. The xfs_admin utility can only modify parameters of unmounted devices or file systems. xfs_copy Copies the contents of an entire XFS file system to one or more targets in parallel. The following utilities are also useful in debugging and analyzing XFS file systems: xfs_metadump Copies XFS file system metadata to a file. Red Hat only supports using the xfs_metadump utility to copy unmounted file systems or read-only mounted file systems; otherwise, generated dumps could be corrupted or inconsistent. xfs_mdrestore Restores an XFS metadump image (generated using xfs_metadump ) to a file system image. xfs_db Debugs an XFS file system. For more information about these utilities, see their respective man pages.
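The following sketch shows typical invocations of the utilities described above; /dev/vdb1 and /mnt/data are placeholder names, and xfs_metadump is run only against the unmounted device, in line with the support statement above.
xfs_info /mnt/data                               # print the geometry of a mounted XFS file system
xfs_bmap -v /mnt/data/large_file                 # list the extents (and holes) used by one file
xfs_fsr -v /mnt/data/large_file                  # defragment a single file rather than the whole file system
umount /mnt/data
xfs_admin -L backup_vol /dev/vdb1                # change parameters (here the label) of the unmounted file system
xfs_metadump /dev/vdb1 /tmp/vdb1.metadump        # copy metadata only, no file contents
xfs_mdrestore /tmp/vdb1.metadump /tmp/vdb1.img   # rebuild an image for offline debugging with xfs_db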
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/xfsothers
|
Chapter 26. Process Automation Manager controller Java client API for KIE Server templates and instances
|
Chapter 26. Process Automation Manager controller Java client API for KIE Server templates and instances Red Hat Decision Manager provides a Process Automation Manager controller Java client API that enables you to connect to the Process Automation Manager controller using REST or WebSocket protocol from your Java client application. You can use the Process Automation Manager controller Java client API as an alternative to the Process Automation Manager controller REST API to interact with your KIE Server templates (configurations), KIE Server instances (remote servers), and associated KIE containers (deployment units) in Red Hat Decision Manager without using the Business Central user interface. This API support enables you to maintain your Red Hat Decision Manager servers and resources more efficiently and optimize your integration and development with Red Hat Decision Manager. With the Process Automation Manager controller Java client API, you can perform the following actions also supported by the Process Automation Manager controller REST API: Retrieve information about KIE Server templates, instances, and associated KIE containers Update, start, or stop KIE containers associated with KIE Server templates and instances Create, update, or delete KIE Server templates Create, update, or delete KIE Server instances Process Automation Manager controller Java client API requests require the following components: Authentication The Process Automation Manager controller Java client API requires HTTP Basic authentication for the following user roles, depending on controller type: rest-all user role if you installed Business Central and you want to use the built-in Process Automation Manager controller kie-server user role if you installed the headless Process Automation Manager controller separately from Business Central To view configured user roles for your Red Hat Decision Manager distribution, navigate to ~/USDSERVER_HOME/standalone/configuration/application-roles.properties and ~/application-users.properties . 
To add a user with the kie-server role or the rest-all role or both (assuming a Keystore is already set), navigate to ~/USDSERVER_HOME/bin and run the following command with the role or roles specified: USD ./bin/jboss-cli.sh --commands="embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['rest-all','kie-server'])" In case the Keystore is not set, then execute the following command to create a Keystore: USD keytool -importpassword -keystore USDSERVER_HOME/standalone/configuration/kie_keystore.jceks -keypass <SECRETKEYPASSWORD> -alias kieserver -storepass <SECRETSTOREPASSWORD> -storetype JCEKS Also, add the following properties to ~/USDSERVER_HOME/standalone/configuration/standalone-full.xml : <property name="kie.keystore.keyStoreURL" value="file:///data/jboss/rhpam780/standalone/configuration/kie_keystore.jceks"/> <property name="kie.keystore.keyStorePwd" value="<SECRETSTOREPASSWORD>"/> <property name="kie.keystore.key.server.alias" value="kieserver"/> <property name="kie.keystore.key.server.pwd" value="<SECRETKEYPASSWORD>"/> <property name="kie.keystore.key.ctrl.alias" value="kieserver"/> <property name="kie.keystore.key.ctrl.pwd" value="<SECRETKEYPASSWORD>"/> To configure the kie-server or rest-all user with Process Automation Manager controller access, navigate to ~/USDSERVER_HOME/standalone/configuration/standalone-full.xml , uncomment the org.kie.server properties (if applicable), and add the controller user login credentials and controller location (if needed): <property name="org.kie.server.location" value="http://localhost:8080/kie-server/services/rest/server"/> <property name="org.kie.server.controller" value="http://localhost:8080/business-central/rest/controller"/> <property name="org.kie.server.controller.user" value="<USERNAME>"/> <property name="org.kie.server.id" value="default-kieserver"/> For more information about user roles and Red Hat Decision Manager installation options, see Planning a Red Hat Decision Manager installation . Project dependencies The Process Automation Manager controller Java client API requires the following dependencies on the relevant classpath of your Java project: <!-- For remote execution on controller --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For REST client --> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-client</artifactId> <version>USD{resteasy.version}</version> </dependency> <!-- For WebSocket client --> <dependency> <groupId>io.undertow</groupId> <artifactId>undertow-websockets-jsr</artifactId> <version>USD{undertow.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> The <version> for Red Hat Decision Manager dependencies is the Maven artifact version for Red Hat Decision Manager currently used in your project (for example, 7.67.0.Final-redhat-00024). 
Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between RHDM product and maven library version? . Client request configuration All Java client requests with the Process Automation Manager controller Java client API must define at least the following controller communication components: Credentials of the rest-all user if you installed Business Central, or the kie-server user if you installed the headless Process Automation Manager controller separately from Business Central Process Automation Manager controller location for REST or WebSocket protocol: Example REST URL: http://localhost:8080/business-central/rest/controller Example WebSocket URL: ws://localhost:8080/headless-controller/websocket/controller Marshalling format for API requests and responses (JSON or JAXB) A KieServerControllerClient object, which serves as the entry point for starting the server communication using the Java client API A KieServerControllerClientFactory defining REST or WebSocket protocol and user access The Process Automation Manager controller client service or services used, such as listServerTemplates , getServerTemplate , or getServerInstances The following are examples of REST and WebSocket client configurations with these components: Client configuration example with REST import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class ListServerTemplatesExample { private static final String URL = "http://localhost:8080/business-central/rest/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD); final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format("Found %s server template(s) at controller url: %s", serverTemplateList.getServerTemplates().length, URL)); } } Client configuration example with WebSocket import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class ListServerTemplatesExample { private static final String URL = "ws://localhost:8080/my-controller/websocket/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; public static 
void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newWebSocketClient(URL, USER, PASSWORD); final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format("Found %s server template(s) at controller url: %s", serverTemplateList.getServerTemplates().length, URL)); } } 26.1. Sending requests with the Process Automation Manager controller Java client API The Process Automation Manager controller Java client API enables you to connect to the Process Automation Manager controller using REST or WebSocket protocols from your Java client application. You can use the Process Automation Manager controller Java client API as an alternative to the Process Automation Manager controller REST API to interact with your KIE Server templates (configurations), KIE Server instances (remote servers), and associated KIE containers (deployment units) in Red Hat Decision Manager without using the Business Central user interface. Prerequisites KIE Server is installed and running. The Process Automation Manager controller or headless Process Automation Manager controller is installed and running. You have rest-all user role access to the Process Automation Manager controller if you installed Business Central, or kie-server user role access to the headless Process Automation Manager controller installed separately from Business Central. You have a Java project with Red Hat Decision Manager resources. Procedure In your client application, ensure that the following dependencies have been added to the relevant classpath of your Java project: <!-- For remote execution on controller --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For REST client --> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-client</artifactId> <version>USD{resteasy.version}</version> </dependency> <!-- For WebSocket client --> <dependency> <groupId>io.undertow</groupId> <artifactId>undertow-websockets-jsr</artifactId> <version>USD{undertow.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency> Download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-controller/kie-server-controller-client/src/main/java/org/kie/server/controller/client to access the Process Automation Manager controller Java clients. In the ~/kie/server/controller/client folder , identify the relevant Java client implementation for the request you want to send, such as the RestKieServerControllerClient implementation to access client services for KIE Server templates and KIE containers in REST protocol. In your client application, create a .java class for the API request. The class must contain the necessary imports, the Process Automation Manager controller location and user credentials, a KieServerControllerClient object, and the client method to execute, such as createServerTemplate and createContainer from the RestKieServerControllerClient implementation. Adjust any configuration details according to your use case. 
Creating and interacting with a KIE Server template and KIE containers import java.util.Arrays; import java.util.HashMap; import java.util.Map; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.KieContainerStatus; import org.kie.server.api.model.KieScannerStatus; import org.kie.server.api.model.ReleaseId; import org.kie.server.controller.api.model.spec.*; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RestTemplateContainerExample { private static final String URL = "http://localhost:8080/business-central/rest/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static KieServerControllerClient client; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON); // Create server template and KIE container, start and stop KIE container, and delete server template ServerTemplate serverTemplate = createServerTemplate(); ContainerSpec container = createContainer(serverTemplate); client.startContainer(container); client.stopContainer(container); client.deleteServerTemplate(serverTemplate.getId()); } // Re-create and configure server template protected static ServerTemplate createServerTemplate() { ServerTemplate serverTemplate = new ServerTemplate(); serverTemplate.setId("example-client-id"); serverTemplate.setName("example-client-name"); serverTemplate.setCapabilities(Arrays.asList(Capability.PROCESS.name(), Capability.RULE.name(), Capability.PLANNING.name())); client.saveServerTemplate(serverTemplate); return serverTemplate; } // Re-create and configure KIE containers protected static ContainerSpec createContainer(ServerTemplate serverTemplate){ Map<Capability, ContainerConfig> containerConfigMap = new HashMap(); ProcessConfig processConfig = new ProcessConfig("PER_PROCESS_INSTANCE", "kieBase", "kieSession", "MERGE_COLLECTION"); containerConfigMap.put(Capability.PROCESS, processConfig); RuleConfig ruleConfig = new RuleConfig(500l, KieScannerStatus.SCANNING); containerConfigMap.put(Capability.RULE, ruleConfig); ReleaseId releaseId = new ReleaseId("org.kie.server.testing", "stateless-session-kjar", "1.0.0-SNAPSHOT"); ContainerSpec containerSpec = new ContainerSpec("example-container-id", "example-client-name", serverTemplate, releaseId, KieContainerStatus.STOPPED, containerConfigMap); client.saveContainerSpec(serverTemplate.getId(), containerSpec); return containerSpec; } } Run the configured .java class from your project directory to execute the request, and review the Process Automation Manager controller response. If you enabled debug logging, KIE Server responds with a detailed response according to your configured marshalling format, such as JSON. If you encounter request errors, review the returned error code messages and adjust your Java configurations accordingly. 26.2. Supported Process Automation Manager controller Java clients The following are some of the Java client services available in the org.kie.server.controller.client package of your Red Hat Decision Manager distribution. You can use these services to interact with related resources in the Process Automation Manager controller similarly to the Process Automation Manager controller REST API. 
KieServerControllerClient : Used as the entry point for communicating with the Process Automation Manager controller RestKieServerControllerClient : Implementation used to interact with KIE Server templates and KIE containers in REST protocol (found in ~/org/kie/server/controller/client/rest ) WebSocketKieServerControllerClient : Implementation used to interact with KIE Server templates and KIE containers in WebSocket protocol (found in ~/org/kie/server/controller/client/websocket ) For the full list of available Process Automation Manager controller Java clients, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-controller/kie-server-controller-client/src/main/java/org/kie/server/controller/client . 26.3. Example requests with the Process Automation Manager controller Java client API The following are examples of Process Automation Manager controller Java client API requests for basic interactions with the Process Automation Manager controller. For the full list of available Process Automation Manager controller Java clients, download the Red Hat Process Automation Manager 7.13.5 Source Distribution from the Red Hat Customer Portal and navigate to ~/rhpam-7.13.5-sources/src/droolsjbpm-integration-USDVERSION/kie-server-parent/kie-server-controller/kie-server-controller-client/src/main/java/org/kie/server/controller/client . Creating and interacting with KIE Server templates and KIE containers You can use the ServerTemplate and ContainerSpec services in the REST or WebSocket Process Automation Manager controller clients to create, dispose, and update KIE Server templates and KIE containers, and to start and stop KIE containers, as illustrated in this example. 
Example request to create and interact with a KIE Server template and KIE containers import java.util.Arrays; import java.util.HashMap; import java.util.Map; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.KieContainerStatus; import org.kie.server.api.model.KieScannerStatus; import org.kie.server.api.model.ReleaseId; import org.kie.server.controller.api.model.spec.*; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RestTemplateContainerExample { private static final String URL = "http://localhost:8080/business-central/rest/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; private static KieServerControllerClient client; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON); // Create server template and KIE container, start and stop KIE container, and delete server template ServerTemplate serverTemplate = createServerTemplate(); ContainerSpec container = createContainer(serverTemplate); client.startContainer(container); client.stopContainer(container); client.deleteServerTemplate(serverTemplate.getId()); } // Re-create and configure server template protected static ServerTemplate createServerTemplate() { ServerTemplate serverTemplate = new ServerTemplate(); serverTemplate.setId("example-client-id"); serverTemplate.setName("example-client-name"); serverTemplate.setCapabilities(Arrays.asList(Capability.PROCESS.name(), Capability.RULE.name(), Capability.PLANNING.name())); client.saveServerTemplate(serverTemplate); return serverTemplate; } // Re-create and configure KIE containers protected static ContainerSpec createContainer(ServerTemplate serverTemplate){ Map<Capability, ContainerConfig> containerConfigMap = new HashMap(); ProcessConfig processConfig = new ProcessConfig("PER_PROCESS_INSTANCE", "kieBase", "kieSession", "MERGE_COLLECTION"); containerConfigMap.put(Capability.PROCESS, processConfig); RuleConfig ruleConfig = new RuleConfig(500l, KieScannerStatus.SCANNING); containerConfigMap.put(Capability.RULE, ruleConfig); ReleaseId releaseId = new ReleaseId("org.kie.server.testing", "stateless-session-kjar", "1.0.0-SNAPSHOT"); ContainerSpec containerSpec = new ContainerSpec("example-container-id", "example-client-name", serverTemplate, releaseId, KieContainerStatus.STOPPED, containerConfigMap); client.saveContainerSpec(serverTemplate.getId(), containerSpec); return containerSpec; } } Listing KIE Server templates and specifying connection timeout (REST) When you use REST protocol for Process Automation Manager controller Java client API requests, you can provide your own javax.ws.rs.core.Configuration specification to modify the underlying REST client API, such as connection timeout. 
Example REST request to return server templates and specify connection timeout import java.util.concurrent.TimeUnit; import javax.ws.rs.core.Configuration; import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RESTTimeoutExample { private static final String URL = "http://localhost:8080/business-central/rest/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; public static void main(String[] args) { // Specify connection timeout final Configuration configuration = new ResteasyClientBuilder() .establishConnectionTimeout(10, TimeUnit.SECONDS) .socketTimeout(60, TimeUnit.SECONDS) .getConfiguration(); KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON, configuration); // Retrieve list of server templates final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format("Found %s server template(s) at controller url: %s", serverTemplateList.getServerTemplates().length, URL)); } } Listing KIE Server templates and specifying event notifications (WebSocket) When you use WebSocket protocol for Process Automation Manager controller Java client API requests, you can enable event notifications based on changes that happen in the particular Process Automation Manager controller to which the client API is connected. For example, you can receive notifications when KIE Server templates or instances are connected to or updated in the Process Automation Manager controller. 
Example WebSocket request to return server templates and specify event notifications import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.events.*; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; import org.kie.server.controller.client.event.EventHandler; public class WebSocketEventsExample { private static final String URL = "ws://localhost:8080/my-controller/websocket/controller"; private static final String USER = "baAdmin"; private static final String PASSWORD = "password@1"; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newWebSocketClient(URL, USER, PASSWORD, MarshallingFormat.JSON, new TestEventHandler()); // Retrieve list of server templates final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format("Found %s server template(s) at controller url: %s", serverTemplateList.getServerTemplates().length, URL)); try { Thread.sleep(60 * 1000); } catch (Exception e) { e.printStackTrace(); } } // Set up event notifications static class TestEventHandler implements EventHandler { @Override public void onServerInstanceConnected(ServerInstanceConnected serverInstanceConnected) { System.out.println("serverInstanceConnected = " + serverInstanceConnected); } @Override public void onServerInstanceDeleted(ServerInstanceDeleted serverInstanceDeleted) { System.out.println("serverInstanceDeleted = " + serverInstanceDeleted); } @Override public void onServerInstanceDisconnected(ServerInstanceDisconnected serverInstanceDisconnected) { System.out.println("serverInstanceDisconnected = " + serverInstanceDisconnected); } @Override public void onServerTemplateDeleted(ServerTemplateDeleted serverTemplateDeleted) { System.out.println("serverTemplateDeleted = " + serverTemplateDeleted); } @Override public void onServerTemplateUpdated(ServerTemplateUpdated serverTemplateUpdated) { System.out.println("serverTemplateUpdated = " + serverTemplateUpdated); } @Override public void onServerInstanceUpdated(ServerInstanceUpdated serverInstanceUpdated) { System.out.println("serverInstanceUpdated = " + serverInstanceUpdated); } @Override public void onContainerSpecUpdated(ContainerSpecUpdated containerSpecUpdated) { System.out.println("onContainerSpecUpdated = " + containerSpecUpdated); } } }
|
[
"./bin/jboss-cli.sh --commands=\"embed-server --std-out=echo,/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity(identity=<USERNAME>),/subsystem=elytron/filesystem-realm=ApplicationRealm:set-password(identity=<USERNAME>, clear={password='<PASSWORD>'}),/subsystem=elytron/filesystem-realm=ApplicationRealm:add-identity-attribute(identity=<USERNAME>, name=role, value=['rest-all','kie-server'])\"",
"keytool -importpassword -keystore USDSERVER_HOME/standalone/configuration/kie_keystore.jceks -keypass <SECRETKEYPASSWORD> -alias kieserver -storepass <SECRETSTOREPASSWORD> -storetype JCEKS",
"<property name=\"kie.keystore.keyStoreURL\" value=\"file:///data/jboss/rhpam780/standalone/configuration/kie_keystore.jceks\"/> <property name=\"kie.keystore.keyStorePwd\" value=\"<SECRETSTOREPASSWORD>\"/> <property name=\"kie.keystore.key.server.alias\" value=\"kieserver\"/> <property name=\"kie.keystore.key.server.pwd\" value=\"<SECRETKEYPASSWORD>\"/> <property name=\"kie.keystore.key.ctrl.alias\" value=\"kieserver\"/> <property name=\"kie.keystore.key.ctrl.pwd\" value=\"<SECRETKEYPASSWORD>\"/>",
"<property name=\"org.kie.server.location\" value=\"http://localhost:8080/kie-server/services/rest/server\"/> <property name=\"org.kie.server.controller\" value=\"http://localhost:8080/business-central/rest/controller\"/> <property name=\"org.kie.server.controller.user\" value=\"<USERNAME>\"/> <property name=\"org.kie.server.id\" value=\"default-kieserver\"/>",
"<!-- For remote execution on controller --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For REST client --> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-client</artifactId> <version>USD{resteasy.version}</version> </dependency> <!-- For WebSocket client --> <dependency> <groupId>io.undertow</groupId> <artifactId>undertow-websockets-jsr</artifactId> <version>USD{undertow.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class ListServerTemplatesExample { private static final String URL = \"http://localhost:8080/business-central/rest/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD); final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format(\"Found %s server template(s) at controller url: %s\", serverTemplateList.getServerTemplates().length, URL)); } }",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class ListServerTemplatesExample { private static final String URL = \"ws://localhost:8080/my-controller/websocket/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static final MarshallingFormat FORMAT = MarshallingFormat.JSON; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newWebSocketClient(URL, USER, PASSWORD); final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format(\"Found %s server template(s) at controller url: %s\", serverTemplateList.getServerTemplates().length, URL)); } }",
"<!-- For remote execution on controller --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-controller-client</artifactId> <version>USD{rhpam.version}</version> </dependency> <!-- For REST client --> <dependency> <groupId>org.jboss.resteasy</groupId> <artifactId>resteasy-client</artifactId> <version>USD{resteasy.version}</version> </dependency> <!-- For WebSocket client --> <dependency> <groupId>io.undertow</groupId> <artifactId>undertow-websockets-jsr</artifactId> <version>USD{undertow.version}</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>ch.qos.logback</groupId> <artifactId>logback-classic</artifactId> <version>USD{logback.version}</version> </dependency>",
"import java.util.Arrays; import java.util.HashMap; import java.util.Map; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.KieContainerStatus; import org.kie.server.api.model.KieScannerStatus; import org.kie.server.api.model.ReleaseId; import org.kie.server.controller.api.model.spec.*; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RestTemplateContainerExample { private static final String URL = \"http://localhost:8080/business-central/rest/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static KieServerControllerClient client; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON); // Create server template and KIE container, start and stop KIE container, and delete server template ServerTemplate serverTemplate = createServerTemplate(); ContainerSpec container = createContainer(serverTemplate); client.startContainer(container); client.stopContainer(container); client.deleteServerTemplate(serverTemplate.getId()); } // Re-create and configure server template protected static ServerTemplate createServerTemplate() { ServerTemplate serverTemplate = new ServerTemplate(); serverTemplate.setId(\"example-client-id\"); serverTemplate.setName(\"example-client-name\"); serverTemplate.setCapabilities(Arrays.asList(Capability.PROCESS.name(), Capability.RULE.name(), Capability.PLANNING.name())); client.saveServerTemplate(serverTemplate); return serverTemplate; } // Re-create and configure KIE containers protected static ContainerSpec createContainer(ServerTemplate serverTemplate){ Map<Capability, ContainerConfig> containerConfigMap = new HashMap(); ProcessConfig processConfig = new ProcessConfig(\"PER_PROCESS_INSTANCE\", \"kieBase\", \"kieSession\", \"MERGE_COLLECTION\"); containerConfigMap.put(Capability.PROCESS, processConfig); RuleConfig ruleConfig = new RuleConfig(500l, KieScannerStatus.SCANNING); containerConfigMap.put(Capability.RULE, ruleConfig); ReleaseId releaseId = new ReleaseId(\"org.kie.server.testing\", \"stateless-session-kjar\", \"1.0.0-SNAPSHOT\"); ContainerSpec containerSpec = new ContainerSpec(\"example-container-id\", \"example-client-name\", serverTemplate, releaseId, KieContainerStatus.STOPPED, containerConfigMap); client.saveContainerSpec(serverTemplate.getId(), containerSpec); return containerSpec; } }",
"import java.util.Arrays; import java.util.HashMap; import java.util.Map; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.KieContainerStatus; import org.kie.server.api.model.KieScannerStatus; import org.kie.server.api.model.ReleaseId; import org.kie.server.controller.api.model.spec.*; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RestTemplateContainerExample { private static final String URL = \"http://localhost:8080/business-central/rest/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; private static KieServerControllerClient client; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON); // Create server template and KIE container, start and stop KIE container, and delete server template ServerTemplate serverTemplate = createServerTemplate(); ContainerSpec container = createContainer(serverTemplate); client.startContainer(container); client.stopContainer(container); client.deleteServerTemplate(serverTemplate.getId()); } // Re-create and configure server template protected static ServerTemplate createServerTemplate() { ServerTemplate serverTemplate = new ServerTemplate(); serverTemplate.setId(\"example-client-id\"); serverTemplate.setName(\"example-client-name\"); serverTemplate.setCapabilities(Arrays.asList(Capability.PROCESS.name(), Capability.RULE.name(), Capability.PLANNING.name())); client.saveServerTemplate(serverTemplate); return serverTemplate; } // Re-create and configure KIE containers protected static ContainerSpec createContainer(ServerTemplate serverTemplate){ Map<Capability, ContainerConfig> containerConfigMap = new HashMap(); ProcessConfig processConfig = new ProcessConfig(\"PER_PROCESS_INSTANCE\", \"kieBase\", \"kieSession\", \"MERGE_COLLECTION\"); containerConfigMap.put(Capability.PROCESS, processConfig); RuleConfig ruleConfig = new RuleConfig(500l, KieScannerStatus.SCANNING); containerConfigMap.put(Capability.RULE, ruleConfig); ReleaseId releaseId = new ReleaseId(\"org.kie.server.testing\", \"stateless-session-kjar\", \"1.0.0-SNAPSHOT\"); ContainerSpec containerSpec = new ContainerSpec(\"example-container-id\", \"example-client-name\", serverTemplate, releaseId, KieContainerStatus.STOPPED, containerConfigMap); client.saveContainerSpec(serverTemplate.getId(), containerSpec); return containerSpec; } }",
"import java.util.concurrent.TimeUnit; import javax.ws.rs.core.Configuration; import org.jboss.resteasy.client.jaxrs.ResteasyClientBuilder; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; public class RESTTimeoutExample { private static final String URL = \"http://localhost:8080/business-central/rest/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; public static void main(String[] args) { // Specify connection timeout final Configuration configuration = new ResteasyClientBuilder() .establishConnectionTimeout(10, TimeUnit.SECONDS) .socketTimeout(60, TimeUnit.SECONDS) .getConfiguration(); KieServerControllerClient client = KieServerControllerClientFactory.newRestClient(URL, USER, PASSWORD, MarshallingFormat.JSON, configuration); // Retrieve list of server templates final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format(\"Found %s server template(s) at controller url: %s\", serverTemplateList.getServerTemplates().length, URL)); } }",
"import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.controller.api.model.events.*; import org.kie.server.controller.api.model.spec.ServerTemplateList; import org.kie.server.controller.client.KieServerControllerClient; import org.kie.server.controller.client.KieServerControllerClientFactory; import org.kie.server.controller.client.event.EventHandler; public class WebSocketEventsExample { private static final String URL = \"ws://localhost:8080/my-controller/websocket/controller\"; private static final String USER = \"baAdmin\"; private static final String PASSWORD = \"password@1\"; public static void main(String[] args) { KieServerControllerClient client = KieServerControllerClientFactory.newWebSocketClient(URL, USER, PASSWORD, MarshallingFormat.JSON, new TestEventHandler()); // Retrieve list of server templates final ServerTemplateList serverTemplateList = client.listServerTemplates(); System.out.println(String.format(\"Found %s server template(s) at controller url: %s\", serverTemplateList.getServerTemplates().length, URL)); try { Thread.sleep(60 * 1000); } catch (Exception e) { e.printStackTrace(); } } // Set up event notifications static class TestEventHandler implements EventHandler { @Override public void onServerInstanceConnected(ServerInstanceConnected serverInstanceConnected) { System.out.println(\"serverInstanceConnected = \" + serverInstanceConnected); } @Override public void onServerInstanceDeleted(ServerInstanceDeleted serverInstanceDeleted) { System.out.println(\"serverInstanceDeleted = \" + serverInstanceDeleted); } @Override public void onServerInstanceDisconnected(ServerInstanceDisconnected serverInstanceDisconnected) { System.out.println(\"serverInstanceDisconnected = \" + serverInstanceDisconnected); } @Override public void onServerTemplateDeleted(ServerTemplateDeleted serverTemplateDeleted) { System.out.println(\"serverTemplateDeleted = \" + serverTemplateDeleted); } @Override public void onServerTemplateUpdated(ServerTemplateUpdated serverTemplateUpdated) { System.out.println(\"serverTemplateUpdated = \" + serverTemplateUpdated); } @Override public void onServerInstanceUpdated(ServerInstanceUpdated serverInstanceUpdated) { System.out.println(\"serverInstanceUpdated = \" + serverInstanceUpdated); } @Override public void onContainerSpecUpdated(ContainerSpecUpdated containerSpecUpdated) { System.out.println(\"onContainerSpecUpdated = \" + containerSpecUpdated); } } }"
] |
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/controller-java-api-con_kie-apis
|
probe::vm.mmap
|
probe::vm.mmap Name probe::vm.mmap - Fires when an mmap is requested. Synopsis Values length The length of the memory segment name Name of the probe point address The requested address Context The process calling mmap.
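As a quick usage illustration (not part of the reference entry itself), the probe can be exercised with a one-line script. This sketch assumes the systemtap package and the kernel debuginfo matching the running kernel are installed; stop it with Ctrl+C.
stap -e 'probe vm.mmap { printf("%s (%d) %s: address=0x%x, length=%d\n", execname(), pid(), name, address, length) }'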
|
[
"vm.mmap"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-vm-mmap
|
15.9. Changing the FQDN of the Manager in a Self-Hosted Engine
|
15.9. Changing the FQDN of the Manager in a Self-Hosted Engine You can use the ovirt-engine-rename command to update the records of the Manager's fully qualified domain name (FQDN). For details, see Section 22.1.3, "Renaming the Manager with the oVirt Engine Rename Tool".
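As a rough outline of the workflow, the rename tool is run on the Manager virtual machine and prompts for the new FQDN. The exact installation path and the need for global maintenance can vary by release, so treat the following as a sketch rather than the authoritative procedure, which is in the referenced section.
hosted-engine --set-maintenance --mode=global         # on a self-hosted engine host, if your release requires global maintenance
/usr/share/ovirt-engine/setup/bin/ovirt-engine-rename  # on the Manager machine; prompts for the new FQDN
hosted-engine --set-maintenance --mode=none            # leave global maintenance afterwards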
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/changing_fqdn_of_rhvm_self-hosted_engine
|
10.5. Setting up Geo-replication using gdeploy
|
10.5. Setting up Geo-replication using gdeploy This section describes how to use gdeploy to configure geo-replication, control and verify geo-replication sessions in your storage environment. The gdeploy tool automates the following processes related to geo-replication: Section 10.5.1, "Setting up geo-replication as root user using gdeploy" Section 10.5.2, "Setting up a secure geo-replication session using gdeploy" Section 10.5.3, "Controlling geo-replication sessions using gdeploy" 10.5.1. Setting up geo-replication as root user using gdeploy Setting up a geo-replication session as a root user involves: Creating a common pem pub file Creating a geo-replication session Configuring the meta-volume Starting the geo-replication session gdeploy helps in automating these tasks by creating a single configuration file. When gdeploy is installed, a sample configuration file is created in the following location: Procedure 10.1. Setting up geo-replication as root user using gdeploy Important Ensure that the prerequisites listed in Section 10.3.3, "Prerequisites" are complete. Create a copy of the sample gdeploy configuration file present in the following location: Add the required details in the geo-replication section of the configuration file using the following template: After modifying the configuration file, invoke the configuration using the command: Following is an example of the modifications to the configuration file in order to set up geo-replication as a root user: For more information on other available values, see Section 5.1.7, "Configuration File" 10.5.2. Setting up a secure geo-replication session using gdeploy Setting up a secure geo-replication session involves: Creating a new group with a unprivileged account for all slave nodes Setting up the mountbroker Creating a common pem pub file Creating a geo-replication session Configuring the meta-volume Starting the geo-replication session gdeploy helps in automating these tasks by creating a single configuration file. When gdeploy is installed, a sample configuration file is created in the following location: Procedure 10.2. Setting up a secure geo-replication session using gdeploy Important Ensure that the prerequisites listed in Section 10.3.3, "Prerequisites" are complete. Create a copy of the sample gdeploy configuration file present in the following location: Add the required details in the geo-replication section of the configuration file using the following template: After modifying the configuration file, invoke the configuration using the command: The following is an example of the modifications to the configuration file in order to set up a secure geo-replication session: For more information on other available values, see Section 5.1.7, "Configuration File" 10.5.3. Controlling geo-replication sessions using gdeploy gdeploy version 2.0.2-35 supports controlling geo-replication sessions on Red Hat Gluster Storage 3.5. Using gdeploy, the following actions can be performed for controlling a geo-replication session: Starting a geo-replication session Stopping a geo-replication session Pausing a geo-replication session Resuming a geo-replication session Deleting a geo-replication session When gdeploy is installed, sample configuration files are created in /usr/share/doc/gdeploy/examples . The sample configuration file names for each action are as follows: Table 10.1. 
gdeploy for Geo-replication Configuration File Names Geo-replication Session Control Configuration File Name Starting a session georep-start.conf Stopping a session georep-stop.conf Pausing a session georep-pause.conf Resuming a session georep-resume.conf Deleting a session georep-delete.conf Procedure 10.3. Controlling geo-replication sessions using gdeploy Warning You must create a geo-replication session before controlling it. For more information, see any one of the following: Section 10.3.4.1, "Setting Up your Environment for Geo-replication Session" Section 10.5.1, "Setting up geo-replication as root user using gdeploy" Section 10.5.2, "Setting up a secure geo-replication session using gdeploy" Important Ensure that the prerequisites listed in Section 10.3.3, "Prerequisites" are complete. Create a copy of the required gdeploy sample configuration file present in the following location: Add the required information in the geo-replication section of the configuration file using the following template: Important If georepuser variable is omitted, the user is assumed to be root user. After modifying the configuration file, invoke the configuration using the command: Following are the examples of the modifications to the configuration file in order to control a geo-replication session: Starting a geo-replication session Stopping a geo-replication session Pausing a geo-replication session Resuming a geo-replication session Deleting a geo-replication session For more information on available values, see Section 5.1.7, "Configuration File"
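Each of the workflows above follows the same copy, edit, and run pattern. The following sketch uses the root-user sample file and a hypothetical working copy named georep.conf; the values to edit match the templates shown in the accompanying configuration listings.
cp /usr/share/doc/gdeploy/examples/geo-replication.conf georep.conf
# edit the [geo-replication] section: action, mastervol, slavevol, slavenodes, force, start
gdeploy -c georep.conf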
|
[
"/usr/share/doc/gdeploy/examples/geo-replication.conf",
"/usr/share/doc/gdeploy/examples/geo-replication.conf",
"[geo-replication] action=create mastervol= Master_IP : Master_Volname slavevol= Slave_IP : Slave_Volname slavenodes= Slave_IP_1 , Slave_IP_2 [Add all slave IP addresses. Each address followed by a comma (,)] force=yes [yes or no] start=yes [yes or no]",
"gdeploy -c txt.conf",
"[geo-replication] action=create mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavesvolume slavenodes=10.1.1.28,10.1.1.86 force=yes start=yes",
"/usr/share/doc/gdeploy/examples/georep-secure.conf",
"/usr/share/doc/gdeploy/examples/georep-secure.conf",
"[geo-replication] action=create georepuser= User_Name [If the user is not present, gdeploy creates the geo-replication user.] mastervol= Master_IP : Master_Volname slavevol= Slave_IP : Slave_Volname slavenodes= Slave_IP_1 , Slave_IP_2 [Add all slave IP addresses. Each address followed by a comma (,)] force=yes [yes or no] start=yes [yes or no]",
"gdeploy -c txt.conf",
"[geo-replication] action=create georepuser=testgeorep mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavesvolume slavenodes=10.1.1.28,10.1.1.86 force=yes start=yes",
"/usr/share/doc/gdeploy/examples",
"[geo-replication] action= Action_Name georepuser= User_Name If georepuser variable is omitted, the user is assumed to be root user. mastervol= Master_IP : Master_Volname slavevol= Slave_IP : Slave_Volname slavenodes= Slave_IP_1 , Slave_IP_2 [Add all slave IP addresses. Each address followed by a comma (,)] force=yes [yes or no] start=yes [yes or no]",
"gdeploy -c txt.conf",
"[geo-replication] action=start mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume",
"[geo-replication] action=stop mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume force=yes",
"[geo-replication] action=pause mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume force=yes",
"[geo-replication] action=resume mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume force=yes",
"[geo-replication] action=delete mastervol=10.1.1.29:mastervolume slavevol=10.1.1.25:slavevolume force=yes"
] |
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/sect-geo-repl_with_gdeploy
|
Customizing Anaconda
|
Customizing Anaconda Red Hat Enterprise Linux 8 Changing the installer appearance and creating custom add-ons on Red Hat Enterprise Linux Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/customizing_anaconda/index
|
Chapter 7. Observing the network traffic
|
Chapter 7. Observing the network traffic As an administrator, you can observe the network traffic in the OpenShift Container Platform console for detailed troubleshooting and analysis. This feature helps you get insights from different graphical representations of traffic flow. There are several available views to observe the network traffic. 7.1. Observing the network traffic from the Overview view The Overview view displays the overall aggregated metrics of the network traffic flow on the cluster. As an administrator, you can monitor the statistics with the available display options. 7.1.1. Working with the Overview view As an administrator, you can navigate to the Overview view to see the graphical representation of the flow rate statistics. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Overview tab. You can configure the scope of each flow rate data by clicking the menu icon. 7.1.2. Configuring advanced options for the Overview view You can customize the graphical view by using advanced options. To access the advanced options, click Show advanced options . You can configure the details in the graph by using the Display options drop-down menu. The options available are as follows: Scope : Select to view the components that network traffic flows between. You can set the scope to Node , Namespace , Owner , Zones , Cluster or Resource . Owner is an aggregation of resources. Resource can be a pod, service, node, in case of host-network traffic, or an unknown IP address. The default value is Namespace . Truncate labels : Select the required width of the label from the drop-down list. The default value is M . 7.1.2.1. Managing panels and display You can select the required panels to be displayed, reorder them, and focus on a specific panel. To add or remove panels, click Manage panels . The following panels are shown by default: Top X average bytes rates Top X bytes rates stacked with total Other panels can be added in Manage panels : Top X average packets rates Top X packets rates stacked with total Query options allows you to choose whether to show the Top 5 , Top 10 , or Top 15 rates. 7.1.3. Packet drop tracking You can configure graphical representation of network flow records with packet loss in the Overview view. By employing eBPF tracepoint hooks, you can gain valuable insights into packet drops for TCP, UDP, SCTP, ICMPv4, and ICMPv6 protocols, which can result in the following actions: Identification: Pinpoint the exact locations and network paths where packet drops are occurring. Determine whether specific devices, interfaces, or routes are more prone to drops. Root cause analysis: Examine the data collected by the eBPF program to understand the causes of packet drops. For example, are they a result of congestion, buffer issues, or specific network events? Performance optimization: With a clearer picture of packet drops, you can take steps to optimize network performance, such as adjust buffer sizes, reconfigure routing paths, or implement Quality of Service (QoS) measures. When packet drop tracking is enabled, you can see the following panels in the Overview by default: Top X packet dropped state stacked with total Top X packet dropped cause stacked with total Top X average dropped packets rates Top X dropped packets rates stacked with total Other packet drop panels are available to add in Manage panels : Top X average dropped bytes rates Top X dropped bytes rates stacked with total 7.1.3.1. 
Types of packet drops Two kinds of packet drops are detected by Network Observability: host drops and OVS drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . Dropped flows are shown in the side panel of the Traffic flows table along with a link to a description of each drop type. Examples of host drop reasons are as follows: SKB_DROP_REASON_NO_SOCKET : the packet dropped due to a missing socket. SKB_DROP_REASON_TCP_CSUM : the packet dropped due to a TCP checksum error. Examples of OVS drops reasons are as follows: OVS_DROP_LAST_ACTION : OVS packets dropped due to an implicit drop action, for example due to a configured network policy. OVS_DROP_IP_TTL : OVS packets dropped due to an expired IP TTL. See the Additional resources of this section for more information about enabling and working with packet drop tracking. Additional resources Working with packet drops Network Observability metrics 7.1.4. DNS tracking You can configure graphical representation of Domain Name System (DNS) tracking of network flows in the Overview view. Using DNS tracking with extended Berkeley Packet Filter (eBPF) tracepoint hooks can serve various purposes: Network Monitoring: Gain insights into DNS queries and responses, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Security Analysis: Detect suspicious DNS activities, such as domain name generation algorithms (DGA) used by malware, or identify unauthorized DNS resolutions that might indicate a security breach. Troubleshooting: Debug DNS-related issues by tracing DNS resolution steps, tracking latency, and identifying misconfigurations. By default, when DNS tracking is enabled, you can see the following non-empty metrics represented in a donut or line chart in the Overview : Top X DNS Response Code Top X average DNS latencies with overall Top X 90th percentile DNS latencies Other DNS tracking panels can be added in Manage panels : Bottom X minimum DNS latencies Top X maximum DNS latencies Top X 99th percentile DNS latencies This feature is supported for IPv4 and IPv6 UDP and TCP protocols. See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with DNS tracking Network Observability metrics 7.1.5. Round-Trip Time You can use TCP smoothed Round-Trip Time (sRTT) to analyze network flow latencies. You can use RTT captured from the fentry/tcp_rcv_established eBPF hookpoint to read sRTT from the TCP socket to help with the following: Network Monitoring: Gain insights into TCP latencies, helping network administrators identify unusual patterns, potential bottlenecks, or performance issues. Troubleshooting: Debug TCP-related issues by tracking latency and identifying misconfigurations. By default, when RTT is enabled, you can see the following TCP RTT metrics represented in the Overview : Top X 90th percentile TCP Round Trip Time with overall Top X average TCP Round Trip Time with overall Bottom X minimum TCP Round Trip Time with overall Other RTT panels can be added in Manage panels : Top X maximum TCP Round Trip Time with overall Top X 99th percentile TCP Round Trip Time with overall See the Additional resources in this section for more information about enabling and working with this view. Additional resources Working with RTT tracing 7.1.6. eBPF flow rule filter You can use rule-based filtering to control the volume of packets cached in the eBPF flow table. 
For example, a filter can specify that only packets coming from port 100 should be recorded. Then only the packets that match the filter are cached and the rest are not cached. 7.1.6.1. Ingress and egress traffic filtering CIDR notation efficiently represents IP address ranges by combining the base IP address with a prefix length. For both ingress and egress traffic, the source IP address is first used to match filter rules configured with CIDR notation. If there is a match, then the filtering proceeds. If there is no match, then the destination IP is used to match filter rules configured with CIDR notation. After matching either the source IP or the destination IP CIDR, you can pinpoint specific endpoints using the peerIP to differentiate the destination IP address of the packet. Based on the provisioned action, the flow data is either cached in the eBPF flow table or not cached. 7.1.6.2. Dashboard and metrics integrations When this option is enabled, the Netobserv/Health dashboard for eBPF agent statistics now has the Filtered flows rate view. Additionally, in Observe Metrics you can query netobserv_agent_filtered_flows_total to observe metrics with the reason in FlowFilterAcceptCounter , FlowFilterNoMatchCounter or FlowFilterRecjectCounter . 7.1.6.3. Flow filter configuration parameters The flow filter rules consist of required and optional parameters. Table 7.1. Required configuration parameters Parameter Description enable Set enable to true to enable the eBPF flow filtering feature. cidr Provides the IP address and CIDR mask for the flow filter rule. Supports both IPv4 and IPv6 address format. If you want to match against any IP, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. action Describes the action that is taken for the flow filter rule. The possible values are Accept or Reject . For the Accept action matching rule, the flow data is cached in the eBPF table and updated with the global metric, FlowFilterAcceptCounter . For the Reject action matching rule, the flow data is dropped and not cached in the eBPF table. The flow data is updated with the global metric, FlowFilterRejectCounter . If the rule is not matched, the flow is cached in the eBPF table and updated with the global metric, FlowFilterNoMatchCounter . Table 7.2. Optional configuration parameters Parameter Description direction Defines the direction of the flow filter rule. Possible values are Ingress or Egress . protocol Defines the protocol of the flow filter rule. Possible values are TCP , UDP , SCTP , ICMP , and ICMPv6 . tcpFlags Defines the TCP flags to filter flows. Possible values are SYN , SYN-ACK , ACK , FIN , RST , PSH , URG , ECE , CWR , FIN-ACK , and RST-ACK . ports Defines the ports to use for filtering flows. It can be used for either source or destination ports. To filter a single port, set a single port as an integer value. For example ports: 80 . To filter a range of ports, use a "start-end" range in string format. For example ports: "80-100" sourcePorts Defines the source port to use for filtering flows. To filter a single port, set a single port as an integer value, for example sourcePorts: 80 . To filter a range of ports, use a "start-end" range, string format, for example sourcePorts: "80-100" . destPorts DestPorts defines the destination ports to use for filtering flows. To filter a single port, set a single port as an integer value, for example destPorts: 80 . To filter a range of ports, use a "start-end" range in string format, for example destPorts: "80-100" . 
icmpType Defines the ICMP type to use for filtering flows. icmpCode Defines the ICMP code to use for filtering flows. peerIP Defines the IP address to use for filtering flows, for example: 10.10.10.10 . Additional resources Filtering eBPF flow data with rules Network Observability metrics Health dashboards 7.1.7. OVN Kubernetes networking events Important OVN-Kubernetes networking events tracking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You use network event tracking in Network Observability to gain insight into OVN-Kubernetes events, including network policies, admin network policies, and egress firewalls. You can use the insights from tracking network events to help with the following tasks: Network monitoring: Monitor allowed and blocked traffic, detecting whether packets are allowed or blocked based on network policies and admin network policies. Network security: You can track outbound traffic and see whether it adheres to egress firewall rules. Detect unauthorized outbound connections and flag outbound traffic that violates egress rules. See the Additional resources in this section for more information about enabling and working with this view. Additional resources Viewing network events 7.2. Observing the network traffic from the Traffic flows view The Traffic flows view displays the data of the network flows and the amount of traffic in a table. As an administrator, you can monitor the amount of traffic across the application by using the traffic flow table. 7.2.1. Working with the Traffic flows view As an administrator, you can navigate to Traffic flows table to see network flow information. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Traffic flows tab. You can click on each row to get the corresponding flow information. 7.2.2. Configuring advanced options for the Traffic flows view You can customize and export the view by using Show advanced options . You can set the row size by using the Display options drop-down menu. The default value is Normal . 7.2.2.1. Managing columns You can select the required columns to be displayed, and reorder them. To manage columns, click Manage columns . 7.2.2.2. Exporting the traffic flow data You can export data from the Traffic flows view. Procedure Click Export data . In the pop-up window, you can select the Export all data checkbox to export all the data, and clear the checkbox to select the required fields to be exported. Click Export . 7.2.3. Working with conversation tracking As an administrator, you can group network flows that are part of the same conversation. A conversation is defined as a grouping of peers that are identified by their IP addresses, ports, and protocols, resulting in an unique Conversation Id . You can query conversation events in the web console. 
These events are represented in the web console as follows: Conversation start : This event happens when a connection is starting or TCP flag intercepted Conversation tick : This event happens at each specified interval defined in the FlowCollector spec.processor.conversationHeartbeatInterval parameter while the connection is active. Conversation end : This event happens when the FlowCollector spec.processor.conversationEndTimeout parameter is reached or the TCP flag is intercepted. Flow : This is the network traffic flow that occurs within the specified interval. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that spec.processor.logTypes , conversationEndTimeout , and conversationHeartbeatInterval parameters are set according to your observation needs. A sample configuration is as follows: Configure FlowCollector for conversation tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3 1 When logTypes is set to Flows , only the Flow event is exported. If you set the value to All , both conversation and flow events are exported and visible in the Network Traffic page. To focus only on conversation events, you can specify Conversations which exports the Conversation start , Conversation tick and Conversation end events; or EndedConversations exports only the Conversation end events. Storage requirements are highest for All and lowest for EndedConversations . 2 The Conversation end event represents the point when the conversationEndTimeout is reached or the TCP flag is intercepted. 3 The Conversation tick event represents each specified interval defined in the FlowCollector conversationHeartbeatInterval parameter while the network connection is active. Note If you update the logType option, the flows from the selection do not clear from the console plugin. For example, if you initially set logType to Conversations for a span of time until 10 AM and then move to EndedConversations , the console plugin shows all conversation events before 10 AM and only ended conversations after 10 AM. Refresh the Network Traffic page on the Traffic flows tab. Notice there are two new columns, Event/Type and Conversation Id . All the Event/Type fields are Flow when Flow is the selected query option. Select Query Options and choose the Log Type , Conversation . Now the Event/Type shows all of the desired conversation events. you can filter on a specific conversation ID or switch between the Conversation and Flow log type options from the side panel. 7.2.4. Working with packet drops Packet loss occurs when one or more packets of network flow data fail to reach their destination. You can track these drops by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. 
Configure the FlowCollector custom resource for packet drops, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2 1 You can start reporting the packet drops of each network flow by listing the PacketDrop parameter in the spec.agent.ebpf.features specification list. 2 The spec.agent.ebpf.privileged specification value must be true for packet drop tracking. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about packet drops: Select new choices in Manage panels to choose which graphical visualizations of packet drops to display in the Overview . Select new choices in Manage columns to choose which packet drop information to display in the Traffic flows table. In the Traffic Flows view, you can also expand the side panel to view more information about packet drops. Host drops are prefixed with SKB_DROP and OVS drops are prefixed with OVS_DROP . In the Topology view, red lines are displayed where drops are present. 7.2.5. Working with DNS tracking Using DNS tracking, you can monitor your network, conduct security analysis, and troubleshoot DNS issues. You can track DNS by editing the FlowCollector to the specifications in the following YAML example. Important CPU and memory usage increases are observed in the eBPF agent when this feature is enabled. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource. A sample configuration is as follows: Configure FlowCollector for DNS tracking apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2 1 You can set the spec.agent.ebpf.features parameter list to enable DNS tracking of each network flow in the web console. 2 You can set sampling to a value of 1 for more accurate metrics and to capture DNS latency . For a sampling value greater than 1, you can observe flows with DNS Response Code and DNS Id , and it is unlikely that DNS Latency can be observed. When you refresh the Network Traffic page, there are new DNS representations you can choose to view in the Overview and Traffic Flow views and new filters you can apply. Select new DNS choices in Manage panels to display graphical visualizations and DNS metrics in the Overview . Select new choices in Manage columns to add DNS columns to the Traffic Flows view. Filter on specific DNS metrics, such as DNS Id , DNS Error DNS Latency and DNS Response Code , and see more information from the side panel. The DNS Latency and DNS Response Code columns are shown by default. Note TCP handshake packets do not have DNS headers. TCP protocol flows without DNS headers are shown in the traffic flow data with DNS Latency , ID , and Response code values of "n/a". You can filter out flow data to view only flows that have DNS headers using the Common filter "DNSError" equal to "0". 7.2.6. Working with RTT tracing You can track RTT by editing the FlowCollector to the specifications in the following YAML example. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . 
Select cluster , and then select the YAML tab. Configure the FlowCollector custom resource for RTT tracing, for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1 1 You can start tracing RTT network flows by listing the FlowRTT parameter in the spec.agent.ebpf.features specification list. Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about RTT: In the Overview , select new choices in Manage panels to choose which graphical visualizations of RTT to display. In the Traffic flows table, the Flow RTT column can be seen, and you can manage display in Manage columns . In the Traffic Flows view, you can also expand the side panel to view more information about RTT. Example filtering Click the Common filters Protocol . Filter the network flow data based on TCP , Ingress direction, and look for FlowRTT values greater than 10,000,000 nanoseconds (10ms). Remove the Protocol filter. Filter for Flow RTT values greater than 0 in the Common filters. In the Topology view, click the Display option dropdown. Then click RTT in the edge labels drop-down list. 7.2.6.1. Using the histogram You can click Show histogram to display a toolbar view for visualizing the history of flows as a bar chart. The histogram shows the number of logs over time. You can select a part of the histogram to filter the network flow data in the table that follows the toolbar. 7.2.7. Working with availability zones You can configure the FlowCollector to collect information about the cluster availability zones. This allows you to enrich network flow data with the topology.kubernetes.io/zone label value applied to the nodes. Procedure In the web console, go to Operators Installed Operators . Under the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster then select the YAML tab. Configure the FlowCollector custom resource so that the spec.processor.addZone parameter is set to true . A sample configuration is as follows: Configure FlowCollector for availability zones collection apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: # ... processor: addZone: true # ... Verification When you refresh the Network Traffic page, the Overview , Traffic Flow , and Topology views display new information about availability zones: In the Overview tab, you can see Zones as an available Scope . In Network Traffic Traffic flows , Zones are viewable under the SrcK8S_Zone and DstK8S_Zone fields. In the Topology view, you can set Zones as Scope or Group . 7.2.8. Filtering eBPF flow data using a global rule You can configure the FlowCollector to filter eBPF flows using a global rule to control the flow of packets cached in the eBPF flow table. Procedure In the web console, navigate to Operators Installed Operators . Under the Provided APIs heading for Network Observability , select Flow Collector . Select cluster , then select the YAML tab. Configure the FlowCollector custom resource, similar to the following sample configurations: Example 7.1. 
Filter Kubernetes service traffic to a specific Pod IP endpoint apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3 1 The required action parameter describes the action that is taken for the flow filter rule. Possible values are Accept or Reject . 2 The required cidr parameter provides the IP address and CIDR mask for the flow filter rule and supports IPv4 and IPv6 address formats. If you want to match against any IP address, you can use 0.0.0.0/0 for IPv4 or ::/0 for IPv6. 3 You must set spec.agent.ebpf.flowFilter.enable to true to enable this feature. Example 7.2. See flows to any addresses outside the cluster apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4 1 You can Accept flows based on the criteria in the flowFilter specification. 2 The cidr value of 0.0.0.0/0 matches against any IP address. 3 See flows after peerIP is configured with 192.168.127.12 . 4 You must set spec.agent.ebpf.flowFilter.enable to true to enable the feature. 7.2.9. Endpoint translation (xlat) You can gain visibility into the endpoints serving traffic in a consolidated view using Network Observability and extended Berkeley Packet Filter (eBPF). Typically, when traffic flows through a service, egressIP, or load balancer, the traffic flow information is abstracted as it is routed to one of the available pods. If you try to get information about the traffic, you can only view service related info, such as service IP and port, and not information about the specific pod that is serving the request. Often the information for both the service traffic and the virtual service endpoint is captured as two separate flows, which complicates troubleshooting. To solve this, endpoint xlat can help in the following ways: Capture the network flows at the kernel level, which has a minimal impact on performance. Enrich the network flows with translated endpoint information, showing not only the service but also the specific backend pod, so you can see which pod served a request. As network packets are processed, the eBPF hook enriches flow logs with metadata about the translated endpoint that includes the following pieces of information that you can view in the Network Traffic page in a single row: Source Pod IP Source Port Destination Pod IP Destination Port Conntrack Zone ID 7.2.10. Working with endpoint translation (xlat) You can use Network Observability and eBPF to enrich network flows from a Kubernetes service with translated endpoint information, gaining insight into the endpoints serving traffic. Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. 
Configure the FlowCollector custom resource for PacketTranslation , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1 1 You can start enriching network flows with translated packet information by listing the PacketTranslation parameter in the spec.agent.ebpf.features specification list. Example filtering When you refresh the Network Traffic page you can filter for information about translated packets: Filter the network flow data based on Destination kind: Service . You can see the xlat column, which distinguishes where translated information is displayed, and the following default columns: Xlat Zone ID Xlat Src Kubernetes Object Xlat Dst Kubernetes Object You can manage the display of additional xlat columns in Manage columns . 7.2.11. Viewing network events Important OVN-Kubernetes networking events tracking is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can edit the FlowCollector to view information about network traffic events, such as network flows that are dropped or allowed by the following resources: NetworkPolicy AdminNetworkPolicy BaselineNetworkPolicy EgressFirewall UserDefinedNetwork isolation Multicast ACLs Prerequisites You must have OVNObservability enabled by setting the TechPreviewNoUpgrade feature set in the FeatureGate custom resource (CR) named cluster . For more information, see "Enabling feature sets using the CLI" and "Checking OVN-Kubernetes network traffic with OVS sampling using the CLI". You have created at least one of the following network APIs: NetworkPolicy , AdminNetworkPolicy , BaselineNetworkPolicy , UserDefinedNetwork isolation, multicast, or EgressFirewall . Procedure In the web console, navigate to Operators Installed Operators . In the Provided APIs heading for the NetObserv Operator , select Flow Collector . Select cluster , and then select the YAML tab. Configure the FlowCollector CR to enable viewing NetworkEvents , for example: Example FlowCollector configuration apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - "NetworkEvents" 1 Optional: The sampling parameter is set to a value of 1 so that all network events are captured. If sampling 1 is too resource heavy, set sampling to something more appropriate for your needs. 2 The privileged parameter is set to true because the OVN observability library needs to access local Open vSwitch (OVS) socket and OpenShift Virtual Network (OVN) databases. Verification Navigate to the Network Traffic view and select the Traffic flows table. You should see the new column, Network Events , where you can view information about impacts of one of the following network APIs you have enabled: NetworkPolicy , AdminNetworkPolicy , BaselineNetworkPolicy , UserDefinedNetwork isolation, multicast, or egress firewalls. 
An example of the kind of events you could see in this column is as follows: Example of Network Events output <Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress> Additional resources Enabling feature sets using the CLI Checking OVN-Kubernetes network traffic with OVS sampling using the CLI 7.3. Observing the network traffic from the Topology view The Topology view provides a graphical representation of the network flows and the amount of traffic. As an administrator, you can monitor the traffic data across the application by using the Topology view. 7.3.1. Working with the Topology view As an administrator, you can navigate to the Topology view to see the details and metrics of the component. Procedure Navigate to Observe Network Traffic . In the Network Traffic page, click the Topology tab. You can click each component in the Topology to view the details and metrics of the component. 7.3.2. Configuring the advanced options for the Topology view You can customize and export the view by using Show advanced options . The advanced options view has the following features: Find in view : To search the required components in the view. Display options : To configure the following options: Edge labels : To show the specified measurements as edge labels. The default is to show the Average rate in Bytes . Scope : To select the scope of components between which the network traffic flows. The default value is Namespace . Groups : To enhance the understanding of ownership by grouping the components. The default value is None . Layout : To select the layout of the graphical representation. The default value is ColaNoForce . Show : To select the details that need to be displayed. All the options are checked by default. The options available are: Edges , Edges label , and Badges . Truncate labels : To select the required width of the label from the drop-down list. The default value is M . Collapse groups : To expand or collapse the groups. The groups are expanded by default. This option is disabled if Groups has the value of None . 7.3.2.1. Exporting the topology view To export the view, click Export topology view . The view is downloaded in PNG format. 7.4. Filtering the network traffic By default, the Network Traffic page displays the traffic flow data in the cluster based on the default filters configured in the FlowCollector instance. You can use the filter options to observe the required data by changing the preset filter. Query Options You can use Query Options to optimize the search results, as listed below: Log Type : The available options Conversation and Flows provide the ability to query flows by log type, such as flow log, new conversation, completed conversation, and a heartbeat, which is a periodic record with updates for long conversations. A conversation is an aggregation of flows between the same peers. Match filters : You can determine the relation between different filter parameters selected in the advanced filter. The available options are Match all and Match any . Match all provides results that match all the values, and Match any provides results that match any of the values entered. The default value is Match all . Datasource : You can choose the datasource to use for queries: Loki , Prometheus , or Auto . Notable performance improvements can be realized when using Prometheus as a datasource rather than Loki, but Prometheus supports a limited set of filters and aggregations.
The default datasource is Auto , which uses Prometheus on supported queries or uses Loki if the query does not support Prometheus. Drops filter : You can view different levels of dropped packets with the following query options: Fully dropped shows flow records with fully dropped packets. Containing drops shows flow records that contain some dropped packets as well as sent packets. Without drops shows records that contain only sent packets. All shows all the aforementioned records. Limit : The data limit for internal backend queries. Depending upon the matching and the filter settings, the amount of traffic flow data displayed stays within the specified limit. Quick filters The default values in the Quick filters drop-down menu are defined in the FlowCollector configuration. You can modify the options from the console; a sketch of how such quick filters can be defined in the FlowCollector resource is shown at the end of this section. Advanced filters You can set the advanced filters, Common , Source , or Destination , by selecting the parameter to be filtered from the dropdown list. The flow data is filtered based on the selection. To enable or disable the applied filter, you can click on the applied filter listed below the filter options. You can toggle between One way and Back and forth filtering. The One way filter shows only Source and Destination traffic according to your filter selections. You can use Swap to change the directional view of the Source and Destination traffic. The Back and forth filter includes return traffic with the Source and Destination filters. The directional flow of network traffic is shown in the Direction column in the Traffic flows table as Ingress or Egress for inter-node traffic and Inner for traffic inside a single node. You can click Reset defaults to remove the existing filters and apply the filters defined in the FlowCollector configuration. Note To understand the rules of specifying the text value, click Learn More . Alternatively, you can access the traffic flow data in the Network Traffic tab of the Namespaces , Services , Routes , Nodes , and Workloads pages, which provide the filtered data of the corresponding aggregations. Additional resources Configuring Quick Filters Flow Collector sample resource
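As a rough illustration of how the default quick filters mentioned above can be defined, the following sketch assumes the spec.consolePlugin.quickFilters field of the FlowCollector resource; the filter names and namespace prefixes shown here are illustrative placeholders, so check the Flow Collector sample resource for the exact defaults shipped with your version.

apiVersion: flows.netobserv.io/v1beta2
kind: FlowCollector
metadata:
  name: cluster
spec:
  consolePlugin:
    quickFilters:
    - name: Applications            # label shown in the Quick filters drop-down menu
      default: true                 # applied automatically and restored by Reset defaults
      filter:
        src_namespace!: 'openshift-,netobserv'   # exclude flows coming from these namespace prefixes
        dst_namespace!: 'openshift-,netobserv'   # exclude flows going to these namespace prefixes
    - name: Infrastructure
      default: false
      filter:
        src_namespace: 'openshift-,netobserv'    # only flows involving infrastructure namespaces
        dst_namespace: 'openshift-,netobserv'

Each entry becomes one option in the Quick filters drop-down menu; entries with default: true are the ones re-applied when you click Reset defaults.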
|
[
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: logTypes: Flows 1 advanced: conversationEndTimeout: 10s 2 conversationHeartbeatInterval: 30s 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketDrop 1 privileged: true 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - DNSTracking 1 sampling: 1 2",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - FlowRTT 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: processor: addZone: true",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 172.210.150.1/24 2 protocol: SCTP direction: Ingress destPortRange: 80-100 peerIP: 10.10.10.10 enable: true 3",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv deploymentModel: Direct agent: type: eBPF ebpf: flowFilter: action: Accept 1 cidr: 0.0.0.0/0 2 protocol: TCP direction: Egress sourcePort: 100 peerIP: 192.168.127.12 3 enable: true 4",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: namespace: netobserv agent: type: eBPF ebpf: features: - PacketTranslation 1",
"apiVersion: flows.netobserv.io/v1beta2 kind: FlowCollector metadata: name: cluster spec: agent: type: eBPF ebpf: # sampling: 1 1 privileged: true 2 features: - \"NetworkEvents\"",
"<Dropped_or_Allowed> by <network_event_and_event_name>, direction <Ingress_or_Egress>"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/network_observability/nw-observe-network-traffic
|
14.4. Automatic ID Range Extension After Deleting a Replica
|
14.4. Automatic ID Range Extension After Deleting a Replica When you delete a functioning replica, the ipa-replica-manage del command retrieves the ID ranges that were assigned to the replica and adds them as a range to other available IdM replicas. This ensures that ID ranges remain available to be used by other replicas. After you delete a replica, you can verify which ID ranges are configured for other servers by using the ipa-replica-manage dnarange-show and ipa-replica-manage dnanextrange-show commands, described in Section 14.3, "Displaying Currently Assigned ID Ranges" .
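For illustration, such a check might look like the following; the host names and range values are hypothetical and will differ in your deployment.

# ipa-replica-manage dnarange-show
masterA.example.com: 1001-1500
masterB.example.com: 1501-2000
masterC.example.com: No range set
# ipa-replica-manage dnanextrange-show
masterA.example.com: 2001-2500
masterB.example.com: No on-deck range set
masterC.example.com: No on-deck range set

A server that reports No range set or No on-deck range set simply has no additional range assigned; the ranges recovered from a deleted replica are added to the remaining servers.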
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/auto-extend-id-ranges
|
23.16. Write Changes to Disk
|
23.16. Write Changes to Disk The installer prompts you to confirm the partitioning options that you selected. Click Write changes to disk to allow the installer to partition your hard drive and install Red Hat Enterprise Linux. Figure 23.45. Writing storage configuration to disk If you are certain that you want to proceed, click Write changes to disk . Warning Up to this point in the installation process, the installer has made no lasting changes to your computer. When you click Write changes to disk , the installer will allocate space on your hard drive and start to transfer Red Hat Enterprise Linux into this space. Depending on the partitioning option that you chose, this process might include erasing data that already exists on your computer. To revise any of the choices that you made up to this point, click Go back . To cancel installation completely, switch off your computer. After you click Write changes to disk , allow the installation process to complete. If the process is interrupted (for example, by you switching off or resetting the computer, or by a power outage) you will probably not be able to use your computer until you restart and complete the Red Hat Enterprise Linux installation process, or install a different operating system.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/write_changes_to_disk-s390
|
Deploying RHEL 8 on Amazon Web Services
|
Deploying RHEL 8 on Amazon Web Services Red Hat Enterprise Linux 8 Obtaining RHEL system images and creating RHEL instances on AWS Red Hat Customer Content Services
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_rhel_8_on_amazon_web_services/index
|
Chapter 16. Changing Variable Values
|
Chapter 16. Changing Variable Values Overview When the Camel debugger hits a breakpoint, the Variables view displays the values of all variables available at that point in the routing context. Some variables are editable, allowing you to change their value. This enables you to see how the application handles changes in program state. Note Not all variables are editable. The context menu of editable variables displays the Change Value... option. Procedure To change the value of a variable: If necessary, start the debugger. See Chapter 14, Running the Camel Debugger . In the Variables view, select a variable whose value you want to change, and then click its Value field. The variable's value field turns a lighter shade of blue, indicating that it is in edit mode. Note Alternatively, you can right-click the variable to open its context menu, and select Change Value... to edit its value. Enter the new value and then click Enter . The Console view displays an INFO level log entry noting the change in the variable's value (for example, Breakpoint at node to1 is updating message header on exchangeId: ID-dhcp-97-16-bos-redhat-com-52574-1417298894070-0-2 with header: Destination and value: UNITED KINGDOM ). Continue stepping through the breakpoints and check whether the message is processed as expected. At each step, check the Debug view, the Variables view, and the Console output. Related topics Chapter 18, Disabling Breakpoints in a Running Context Chapter 17, Adding Variables to the Watch List
| null |
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/editVariables
|
Chapter 5. Testing the cluster configuration
|
Chapter 5. Testing the cluster configuration Before the HA cluster setup is put in production, it is recommended to perform the following tests to ensure that the HA cluster setup works as expected. These tests should also be repeated later on as part of regular HA/DR drills to ensure that the cluster still works as expected and that admins stay familiar with the procedures required to bring the setup back to a healthy state in case an issue occurs during normal operation, or if manual maintenance of the setup is required. 5.1. Manually moving ASCS instance using pcs command To verify that the pacemaker cluster is able to move the instances to the other HA cluster node on demand. Test Preconditions Both cluster nodes are up, with the resource groups for the ASCS and ERS running on different HA cluster nodes: * Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Run the following command from any node to initiate the move of the ASCS instance to the other HA cluster node: [root@node1]# pcs resource move S4H_ascs20 Monitoring Run the following command in a separate terminal during the test: [root@node2]# watch -n 1 pcs status Expected behavior The ASCS resource group is moved to the other node. The ERS resource group stops after that and moves to the node where the ASCS resource group was running before. Test Result ASCS resource group moves to other node, in this scenario node node2 and ERS resource group moves to node node1: * Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 Recovery Procedure: Remove the location constraints, if any: [root@node1]# pcs resource clear S4H_ascs20 5.2. Manually moving of the ASCS instance using sapcontrol (with SAP HA interface enabled) To verify that the sapcontrol command is able to move the instances to the other HA cluster node when the SAP HA interface is enabled for the instance. Test Preconditions The SAP HA interface is enabled for the SAP instance. Both cluster nodes are up with the resource groups for the ASCS and ERS running. [root@node2: ~]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure As the <sid>adm user, run the HAFailoverToNode function of sapcontrol to move the ASCS instance to the other node. 
Monitoring Run the following command in a separate terminal during the test: [root@node2]# watch -n 1 pcs status Expected behavior ASCS instances should move to the other HA cluster node, creating a temporary location constraint for the move to complete. Test [root@node2]# su - s4hadm node2:s4hadm 52> sapcontrol -nr 20 -function HAFailoverToNode "" 06.12.2023 12:57:04 HAFailoverToNode OK Test result ASCS and ERS both move to the other node: [root@node2]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 Constraints are created as shown below: [root@node1]# pcs constraint Location Constraints: Resource: S4H_ASCS20_group Constraint: cli-ban-S4H_ASCS20_group-on-node2 Rule: boolean-op=and score=-INFINITY Expression: #uname eq string node1 Expression: date lt xxxx-xx-xx xx:xx:xx +xx:xx Recovery Procedure The constraint shown above is cleared automatically when the date lt mentioned in the Expression is reached. Alternatively, the constraint can be removed with the following command: [root@node1]# pcs resource clear S4H_ascs20 5.3. Testing failure of the ASCS instance To verify that the pacemaker cluster takes necessary action when the enqueue server of the ASCS instance or the whole ASCS instance fails. Test Preconditions Both cluster nodes are up with the resource groups for the ASCS and ERS running: [root@node2]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Identify the PID of the enqueue server on the node where ASCS is running. Send a SIGKILL signal to the identified process. Monitoring Run the following command in a separate terminal during the test: [root@node2]# watch -n 1 pcs status Expected behavior Enqueue server process gets killed. The pacemaker cluster takes the required action as per configuration, in this case moving the ASCS to the other node. Test Switch to the <sid>adm user on the node where ASCS is running: [root@node1]# su - s4hadm Identify the PID of en.sap(NetWeaver) enq.sap(S/4HANA): node1:s4hadm 51> pgrep -af "(en|enq).sap" 31464 enq.sapS4H_ASCS20 pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4ascs Kill the identified process: node1:s4hadm 52> kill -9 31464 Notice the cluster Failed Resource Actions : [root@node2]# pcs status | grep "Failed Resource Actions" -A1 Failed Resource Actions: * S4H_ascs20 2m-interval monitor on node1 returned 'not running' at Wed Dec 6 15:37:24 2023 ASCS and ERS move to the other node: [root@node2]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ascs20 2m-interval monitor on node1 returned 'not running' at Wed Dec 6 15:37:24 2023 Recovery Procedure Clear the failed action: [root@node2]# pcs resource cleanup S4H_ascs20 ... Waiting for 1 reply from the controller ... got reply (done) 5.4. Testing failure of the ERS instance To verify that the pacemaker cluster takes necessary action when the enqueue replication server ( ERS ) of the ASCS instance fails. 
Test Preconditions Both cluster nodes are up with the resource groups for the ASCS and ERS running: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Identify the PID of the enqueue replication server process on the node where the ERS instance is running. Send a SIGKILL signal to the identified process. Monitoring Run the following command in a separate terminal during the test: [root@node2]# watch -n 1 pcs status Expected behavior Enqueue Replication server process gets killed. Pacemaker cluster takes the required action as per configuration, in this case, restarting the ERS instance on the same node. Test Switch to the <sid>adm user: [root@node1]# su - s4hadm Identify the PID of enqr.sap : node1:s4hadm 56> pgrep -af enqr.sap 532273 enqr.sapS4H_ERS29 pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4ers Kill the identified process: node1:s4hadm 58> kill -9 532273 Notice the cluster "Failed Resource Actions": [root@node1]# pcs status | grep "Failed Resource Actions" -A1 Failed Resource Actions: * S4H_ers29 2m-interval monitor on node1 returned 'not running' at Thu Dec 7 13:15:02 2023 ERS restarts on the same node without disturbing the ASCS already running on the other node: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 2m-interval monitor on node1 returned 'not running' at Thu Dec 7 13:15:02 2023 Recovery Procedure Clear the failed action: [root@node1]# pcs resource cleanup S4H_ers29 ... Waiting for 1 reply from the controller ... got reply (done) 5.5. Failover of ASCS instance due to node crash To verify that the ASCS instance moves correctly in case of a node crash. Test Preconditions Both cluster nodes are up with the resource groups for the ASCS and ERS running: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Crash the node where ASCS is running. Monitoring Run the following command in a separate terminal on the other node during the test: [root@node1]# watch -n 1 pcs status Expected behavior Node where ASCS is running gets crashed and shuts down or restarts as per configuration. Meanwhile ASCS moves to the other node. ERS starts on the previously crashed node, after it comes back online. 
Test Run the following command as the root user on the node where ASCS is running: [root@node2]# echo c > /proc/sysrq-trigger ASCS moves to the other node: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 ERS stops and moves to the previously crashed node once it comes back online: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Stopped [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 Recovery Procedure Clean up failed actions, if any: [root@node1]# pcs resource cleanup 5.6. Failure of ERS instance due to node crash To verify that the ERS instance restarts on the same node. Test Preconditions Both cluster nodes are up with the resource groups for the ASCS and ERS running: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Crash the node where ERS is running. Monitoring Run the following command in a separate terminal on the other node during the test: [root@node1]# watch -n 1 pcs status Expected behavior Node where ERS is running gets crashed and shuts down or restarts as per configuration. Meanwhile, ASCS continues to run on the other node. ERS restarts on the crashed node, after it comes back online. Test Run the following command as the root user on the node where ERS is running: [root@node2]# echo c > /proc/sysrq-trigger ERS restarts on the crashed node, after it comes back online, without disturbing the ASCS instance throughout the test: [root@node1]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 Recovery Procedure Clean up failed actions if any: [root@node2]# pcs resource cleanup 5.7. Failure of ASCS Instance due to node crash (ENSA2) In case of a 3 node ENSA 2 cluster environment, the third node is considered during failover events of any instance. Test Preconditions A 3 node SAP S/4HANA cluster with the resource groups for the ASCS and ERS running. The 3rd node has access to all the file systems and can provision the required instance-specific IP addresses the same way as the first 2 nodes.
In the example setup, the underlying shared NFS filesystems are as follows: Node List: * Online: [ node1 node2 node3 ] Active Resources: * s4r9g2_fence (stonith:fence_rhevm): Started node1 * Clone Set: s4h_fs_sapmnt-clone [fs_sapmnt]: * Started: [ node1 node2 node3 ] * Clone Set: s4h_fs_sap_trans-clone [fs_sap_trans]: * Started: [ node1 node2 node3 ] * Clone Set: s4h_fs_sap_SYS-clone [fs_sap_SYS]: * Started: [ node1 node2 node3 ] * Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 All failures for the resources and resource groups have been cleared and the failcounts have been reset. Test Procedure Crash the node where ASCS is running. Monitoring Run the following command in a separate terminal on one of the nodes where the ASCS group is currently not running during the test: [root@node2]# watch -n 1 pcs status Expected behavior ASCS moves to the 3rd node. ERS continues to run on the same node where it is already running. Test Crash the node where the ASCS group is currently running: [root@node1]# echo c > /proc/sysrq-trigger ASCS moves to the 3rd node without disturbing the already running ERS instance on 2nd node: [root@node2]# pcs status | egrep -e "S4H_ascs20|S4H_ers29" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node3 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2 Recovery Procedure Clean up failed actions if any: [root@node2]# pcs resource cleanup
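Since every test above assumes that all failcounts have been reset, it can be useful to confirm this before starting a test run. A minimal check, assuming the resource names used in these examples and that your pcs version provides the failcount show sub-command (the exact output wording varies between versions), is:

[root@node1]# pcs resource failcount show S4H_ascs20
No failcounts for resource 'S4H_ascs20'
[root@node1]# pcs resource failcount show S4H_ers29
No failcounts for resource 'S4H_ers29'

If any failcounts are reported, run pcs resource cleanup as shown in the recovery procedures to reset them before starting the next test.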
|
[
"* Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"pcs resource move S4H_ascs20",
"watch -n 1 pcs status",
"* Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1",
"pcs resource clear S4H_ascs20",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1",
"watch -n 1 pcs status",
"su - s4hadm node2:s4hadm 52> sapcontrol -nr 20 -function HAFailoverToNode \"\" 06.12.2023 12:57:04 HAFailoverToNode OK",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"pcs constraint Location Constraints: Resource: S4H_ASCS20_group Constraint: cli-ban-S4H_ASCS20_group-on-node2 Rule: boolean-op=and score=-INFINITY Expression: #uname eq string node1 Expression: date lt xxxx-xx-xx xx:xx:xx +xx:xx",
"pcs resource clear S4H_ascs20",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"watch -n 1 pcs status",
"su - s4hadm",
"node1:s4hadm 51> pgrep -af \"(en|enq).sap\" 31464 enq.sapS4H_ASCS20 pf=/usr/sap/S4H/SYS/profile/S4H_ASCS20_s4ascs",
"node1:s4hadm 52> kill -9 31464",
"pcs status | grep \"Failed Resource Actions\" -A1 Failed Resource Actions: * S4H_ascs20 2m-interval monitor on node1 returned 'not running' at Wed Dec 6 15:37:24 2023",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ascs20 2m-interval monitor on node1 returned 'not running' at Wed Dec 6 15:37:24 2023",
"pcs resource cleanup S4H_ascs20 ... Waiting for 1 reply from the controller ... got reply (done)",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1",
"watch -n 1 pcs status",
"su - s4hadm",
"node1:s4hadm 56> pgrep -af enqr.sap 532273 enqr.sapS4H_ERS29 pf=/usr/sap/S4H/SYS/profile/S4H_ERS29_s4ers",
"node1:s4hadm 58> kill -9 532273",
"pcs status | grep \"Failed Resource Actions\" -A1 Failed Resource Actions: * S4H_ers29 2m-interval monitor on node1 returned 'not running' at Thu Dec 7 13:15:02 2023",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 2m-interval monitor on node1 returned 'not running' at Thu Dec 7 13:15:02 2023",
"pcs resource cleanup S4H_ers29 ... Waiting for 1 reply from the controller ... got reply (done)",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1",
"watch -n 1 pcs status",
"echo c > /proc/sysrq-trigger",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node1",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Stopped pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"pcs resource cleanup",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"watch -n 1 pcs status",
"echo c > /proc/sysrq-trigger",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"pcs resource cleanup",
"Node List: * Online: [ node1 node2 node3 ] Active Resources: * s4r9g2_fence (stonith:fence_rhevm): Started node1 * Clone Set: s4h_fs_sapmnt-clone [fs_sapmnt]: * Started: [ node1 node2 node3 ] * Clone Set: s4h_fs_sap_trans-clone [fs_sap_trans]: * Started: [ node1 node2 node3 ] * Clone Set: s4h_fs_sap_SYS-clone [fs_sap_SYS]: * Started: [ node1 node2 node3 ] * Resource Group: S4H_ASCS20_group: * S4H_lvm_ascs20 (ocf:heartbeat:LVM-activate): Started node1 * S4H_fs_ascs20 (ocf:heartbeat:Filesystem): Started node1 * S4H_vip_ascs20 (ocf:heartbeat:IPaddr2): Started node1 * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node1 * Resource Group: S4H_ERS29_group: * S4H_lvm_ers29 (ocf:heartbeat:LVM-activate): Started node2 * S4H_fs_ers29 (ocf:heartbeat:Filesystem): Started node2 * S4H_vip_ers29 (ocf:heartbeat:IPaddr2): Started node2 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"watch -n 1 pcs status",
"echo c > /proc/sysrq-trigger",
"pcs status | egrep -e \"S4H_ascs20|S4H_ers29\" * S4H_ascs20 (ocf:heartbeat:SAPInstance): Started node3 * S4H_ers29 (ocf:heartbeat:SAPInstance): Started node2",
"pcs resource cleanup"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/asmb_test_cluster_config_configuring-clusters-to-manage
|
6.4. Recovering Physical Volume Metadata
|
6.4. Recovering Physical Volume Metadata If the volume group metadata area of a physical volume is accidentally overwritten or otherwise destroyed, you will get an error message indicating that the metadata area is incorrect, or that the system was unable to find a physical volume with a particular UUID. You may be able to recover the data on the physical volume by writing a new metadata area on the physical volume specifying the same UUID as the lost metadata. Warning You should not attempt this procedure with a working LVM logical volume. You will lose your data if you specify the incorrect UUID. The following example shows the sort of output you may see if the metadata area is missing or corrupted. You may be able to find the UUID for the physical volume that was overwritten by looking in the /etc/lvm/archive directory. Look in the file VolumeGroupName_xxxx .vg for the last known valid archived LVM metadata for that volume group. Alternatively, you may find that deactivating the volume and setting the partial ( -P ) argument will enable you to find the UUID of the missing corrupted physical volume. Use the --uuid and --restorefile arguments of the pvcreate command to restore the physical volume. The following example labels the /dev/sdh1 device as a physical volume with the UUID indicated above, FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk . This command restores the physical volume label with the metadata information contained in VG_00050.vg , the most recent good archived metadata for the volume group. The restorefile argument instructs the pvcreate command to make the new physical volume compatible with the old one on the volume group, ensuring that the new metadata will not be placed where the old physical volume contained data (which could happen, for example, if the original pvcreate command had used the command line arguments that control metadata placement, or if the physical volume was originally created using a different version of the software that used different defaults). The pvcreate command overwrites only the LVM metadata areas and does not affect the existing data areas. You can then use the vgcfgrestore command to restore the volume group's metadata. You can now display the logical volumes. The following commands activate the volumes and display the active volumes. If the on-disk LVM metadata takes at least as much space as what overrode it, this command can recover the physical volume. If what overrode the metadata went past the metadata area, the data on the volume may have been affected. You might be able to use the fsck command to recover that data, as sketched in the example that follows.
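As a rough sketch of that last step, assuming the stripe logical volume from the example above carries an ext-family filesystem and is currently unmounted, a read-only check followed by a repair could look like this; adjust the device path and filesystem tooling to your environment.

# fsck -n /dev/VG/stripe
# fsck /dev/VG/stripe

The first invocation with -n only reports problems without changing anything, which lets you judge the extent of the damage before letting fsck attempt an actual repair with the second invocation.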
|
[
"lvs -a -o +devices Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find all physical volumes for volume group VG.",
"vgchange -an --partial Partial mode. Incomplete volume groups will be activated read-only. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'. Couldn't find device with uuid 'FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk'.",
"pvcreate --uuid \"FmGRh3-zhok-iVI8-7qTD-S5BI-MAEN-NYM5Sk\" --restorefile /etc/lvm/archive/VG_00050.vg /dev/sdh1 Physical volume \"/dev/sdh1\" successfully created",
"vgcfgrestore VG Restored volume group VG",
"lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi--- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi--- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)",
"lvchange -ay /dev/VG/stripe lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe VG -wi-a- 300.00G /dev/sdh1 (0),/dev/sda1(0) stripe VG -wi-a- 300.00G /dev/sdh1 (34728),/dev/sdb1(0)"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/mdatarecover
|
Part I. Installing Local Storage Operator
|
Part I. Installing Local Storage Operator Use this procedure to install the Local Storage Operator from the Operator Hub before creating OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword... box to find the Local Storage Operator from the list of operators and select the same. Set the following options on the Install Operator page: Update channel as stable . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Approval Strategy as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator by using the Red Hat OpenShift Container Platform Operator Hub. For information about the hardware and software requirements, see Planning your deployment . Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and Operator installation permissions. You must have at least three worker nodes in the Red Hat OpenShift Container Platform cluster. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command in the command line interface to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see How to use dedicated worker nodes for Red Hat OpenShift Data Foundation chapter in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.15 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps Verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. After the operator is successfully installed, a pop-up with a message Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console, navigate to Storage and verify if Data Foundation is available. 2. Creating standalone Multicloud Object Gateway on IBM Z You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. 
Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. (For deploying using local storage devices only) Ensure that Local Storage Operator is installed. To identify storage devices on each node, see Finding available storage devices . Procedure Log in to the OpenShift Web Console. In the openshift-local-storage namespace, click Operators Installed Operators to view the installed operators. Click the Local Storage installed operator. On the Operator Details page, click the Local Volume link. Click Create Local Volume . Click on YAML view for configuring Local Volume. Define a LocalVolume custom resource for filesystem PVs using the following YAML. The above definition selects the sda local device from the worker-0 , worker-1 and worker-2 nodes. The localblock storage class is created and persistent volumes are provisioned from sda . Important Specify appropriate values of nodeSelector as per your environment. The device name should be the same on all the worker nodes. You can also specify more than one devicePaths entry. Click Create . In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option for Backing storage type . Select the Storage Class that you used while installing LocalVolume. Click Next . Optional: In the Security page, select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the next step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in the Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate , and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled.
For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click Next . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . Click the Storage Systems tab and then click on ocs-storagecluster-storagesystem . In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) noobaa-default-backing-store-noobaa-pod-* (1 pod on any storage node)
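The same pod-level verification can be run from the CLI. This is a minimal sketch added for convenience and is not part of the documented verification steps; pod names carry generated suffixes, so the patterns below are illustrative only:
oc get pods -n openshift-storage                 # all pods listed in the table above should be in Running state
oc get pods -n openshift-storage | grep noobaa   # Multicloud Object Gateway pods: operator, core, db-pg, endpoint, default backing store
oc get storagesystem -n openshift-storage        # the ocs-storagecluster-storagesystem resource should be listed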
|
[
"oc annotate namespace openshift-storage openshift.io/node-selector=",
"apiVersion: local.storage.openshift.io/v1 kind: LocalVolume metadata: name: localblock namespace: openshift-local-storage spec: logLevel: Normal managementState: Managed nodeSelector: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - worker-0 - worker-1 - worker-2 storageClassDevices: - devicePaths: - /dev/sda storageClassName: localblock volumeMode: Filesystem"
] |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_ibm_z/installing-local-storage-operator-ibm-z_ibmz
|
Chapter 4. Installing a cluster on GCP with customizations
|
Chapter 4. Installing a cluster on GCP with customizations In OpenShift Container Platform version 4.14, you can install a customized cluster on infrastructure that the installation program provisions on Google Cloud Platform (GCP). To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured a GCP project to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.14, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. 
Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download.
However, you must have an active subscription to access this page. 4.5. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Google Cloud Platform (GCP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. Configure a GCP account. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: USD rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select gcp as the platform to target. If you have not configured the service account key for your GCP account on your computer, you must obtain it from GCP and paste the contents of the file or enter the absolute path to the file. Select the project ID to provision the cluster in. The default value is specified by the service account that you configured. Select the region to deploy the cluster to. Select the base domain to deploy the cluster to. The base domain corresponds to the public DNS zone that you created for your cluster. Enter a descriptive name for your cluster. Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control planes are schedulable. For more information, see "Installing a three-node cluster on GCP". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for GCP 4.5.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 4.1. 
Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see RHEL Architectures . If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 4.5.2. Tested instance types for GCP The following Google Cloud Platform instance types have been tested with OpenShift Container Platform. Example 4.1. Machine series C2 C2D C3 E2 M1 N1 N2 N2D Tau T2D 4.5.3. Tested instance types for GCP on 64-bit ARM infrastructures The following Google Cloud Platform (GCP) 64-bit ARM instance types have been tested with OpenShift Container Platform. Example 4.2. Machine series for 64-bit ARM machines Tau T2A 4.5.4. Using custom machine types Using a custom machine type to install a OpenShift Container Platform cluster is supported. Consider the following when using a custom machine type: Similar to predefined instance types, custom machine types must meet the minimum resource requirements for control plane and compute machines. For more information, see "Minimum resource requirements for cluster installation". The name of the custom machine type must adhere to the following syntax: custom-<number_of_cpus>-<amount_of_memory_in_mb> For example, custom-6-20480 . As part of the installation process, you specify the custom machine type in the install-config.yaml file. Sample install-config.yaml file with a custom machine type compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3 4.5.5. Enabling Shielded VMs You can use Shielded VMs when installing your cluster. Shielded VMs have extra security features including secure boot, firmware and integrity monitoring, and rootkit detection. 
For more information, see Google's documentation on Shielded VMs . Note Shielded VMs are currently not supported on clusters with 64-bit ARM infrastructures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use shielded VMs for only control plane machines: controlPlane: platform: gcp: secureBoot: Enabled To use shielded VMs for only compute machines: compute: - platform: gcp: secureBoot: Enabled To use shielded VMs for all machines: platform: gcp: defaultMachinePlatform: secureBoot: Enabled 4.5.6. Enabling Confidential VMs You can use Confidential VMs when installing your cluster. Confidential VMs encrypt data while it is being processed. For more information, see Google's documentation on Confidential Computing . You can enable Confidential VMs and Shielded VMs at the same time, although they are not dependent on each other. Note Confidential VMs are currently not supported on 64-bit ARM architectures. Procedure Use a text editor to edit the install-config.yaml file prior to deploying your cluster and add one of the following stanzas: To use confidential VMs for only control plane machines: controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3 1 Enable confidential VMs. 2 Specify a machine type that supports Confidential VMs. Confidential VMs require the N2D or C2D series of machine types. For more information on supported machine types, see Supported operating systems and machine types . 3 Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate , which stops the VM. Confidential VMs do not support live VM migration. To use confidential VMs for only compute machines: compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate To use confidential VMs for all machines: platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate 4.5.7. Sample customized install-config.yaml file for GCP You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. 
apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{"auths": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 1 15 17 18 21 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode. By default, the CCO uses the root credentials in the kube-system namespace to dynamically try to determine the capabilities of the credentials. For details about CCO modes, see the "About the Cloud Credential Operator" section in the Authentication and authorization guide. 3 9 If you do not provide these parameters and values, the installation program provides the default value. 4 10 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 11 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger machine types, such as n1-standard-8 , for your machines if you disable simultaneous multithreading. 6 12 Optional: The custom encryption key section to encrypt both virtual machines and persistent volumes. Your default compute service account must have the permissions granted to use your KMS key and have the correct IAM role assigned. The default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern. For more information about granting the correct permissions for your service account, see "Machine management" "Creating compute machine sets" "Creating a compute machine set on GCP". 7 13 19 Optional: A set of network tags to apply to the control plane or compute machine sets. 
The platform.gcp.defaultMachinePlatform.tags parameter will apply to both control plane and compute machines. If the compute.platform.gcp.tags or controlPlane.platform.gcp.tags parameters are set, they override the platform.gcp.defaultMachinePlatform.tags parameter. 8 14 20 Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) that should be used to boot control plane and compute machines. The project and name parameters under platform.gcp.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the project and name parameters under controlPlane.platform.gcp.osImage or compute.platform.gcp.osImage are set, they override the platform.gcp.defaultMachinePlatform.osImage parameters. 16 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Additional resources Enabling customer-managed encryption keys for a compute machine set 4.5.8. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 4.6. Managing user-defined labels and tags for GCP Important Support for user-defined labels and tags for GCP is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Google Cloud Platform (GCP) provides labels and tags that help to identify and organize the resources created for a specific OpenShift Container Platform cluster, making them easier to manage. You can define labels and tags for each GCP resource only during OpenShift Container Platform cluster installation. Important User-defined labels and tags are not supported for OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.14 version. 
User-defined labels User-defined labels and OpenShift Container Platform specific labels are applied only to resources created by OpenShift Container Platform installation program and its core components such as: GCP filestore CSI Driver Operator GCP PD CSI Driver Operator Image Registry Operator Machine API provider for GCP User-defined labels and OpenShift Container Platform specific labels are not applied on the resources created by any other operators or the Kubernetes in-tree components that create resources, for example, the Ingress load balancers. User-defined labels and OpenShift Container Platform labels are available on the following GCP resources: Compute disk Compute instance Compute image Compute forwarding rule DNS managed zone Filestore instance Storage bucket Limitations to user-defined labels Labels for ComputeAddress are supported in the GCP beta version. OpenShift Container Platform does not add labels to the resource. User-defined tags User-defined tags are attached to resources created by the OpenShift Container Platform Image Registry Operator and not on the resources created by any other Operators or the Kubernetes in-tree components. User-defined tags are available on the following GCP resources: * Storage bucket Limitations to the user-defined tags Tags will not be attached to the following items: Control plane instances and storage buckets created by the installation program Compute instances created by the Machine API provider for GCP Filestore instance resources created by the GCP filestore CSI driver Operator Compute disk and compute image resources created by the GCP PD CSI driver Operator Tags are not supported for buckets located in the following regions: us-east2 us-east3 Image Registry Operator does not throw any error but skips processing tags when the buckets are created in the tags unsupported region. Tags must not be restricted to particular service accounts, because Operators create and use service accounts with minimal roles. OpenShift Container Platform does not create any key and value resources of the tag. OpenShift Container Platform specific tags are not added to any resource. Additional resources For more information about identifying the OrganizationID , see: OrganizationID For more information about identifying the ProjectID , see: ProjectID For more information about labels, see Labels Overview . For more information about tags, see Tags Overview . 4.6.1. Configuring user-defined labels and tags for GCP Prerequisites The installation program requires that a service account includes a TagUser role, so that the program can create the OpenShift Container Platform cluster with defined tags at both organization and project levels. Procedure Update the install-config.yaml file to define the list of desired labels and tags. Note Labels and tags are defined during the install-config.yaml creation phase, and cannot be modified or updated with new labels and tags after cluster creation. Sample install-config.yaml file apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name> 1 Adds keys and values as labels to the resources created on GCP. 2 Defines the label name. 3 Defines the label content. 4 Adds keys and values as tags to the resources created on GCP. 5 The ID of the hierarchical resource where the tags are defined, at the organization or the project level. 
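If you need to look up the parentID value described in the last callout, the gcloud CLI can list the available organization and project IDs. This is a hedged sketch rather than part of the documented procedure; it assumes the gcloud CLI is installed and authenticated with access to the target organization and project:
gcloud organizations list                          # lists organization display names and their numeric organization IDs
gcloud projects list --format="value(projectId)"   # lists project IDs that can be used as the parentID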
The following are the requirements for user-defined labels: A label key and value must have a minimum of 1 character and can have a maximum of 63 characters. A label key and value must contain only lowercase letters, numeric characters, underscore ( _ ), and dash ( - ). A label key must start with a lowercase letter. You can configure a maximum of 32 labels per resource. Each resource can have a maximum of 64 labels, and 32 labels are reserved for internal use by OpenShift Container Platform. The following are the requirements for user-defined tags: Tag key and tag value must already exist. OpenShift Container Platform does not create the key and the value. A tag parentID can be either OrganizationID or ProjectID : OrganizationID must consist of decimal numbers without leading zeros. ProjectID must be 6 to 30 characters in length, that includes only lowercase letters, numbers, and hyphens. ProjectID must start with a letter, and cannot end with a hyphen. A tag key must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), and period ( . ). A tag value must contain only uppercase and lowercase alphanumeric characters, hyphen ( - ), underscore ( _ ), period ( . ), at sign ( @ ), percent sign ( % ), equals sign ( = ), plus ( + ), colon ( : ), comma ( , ), asterisk ( * ), pound sign ( USD ), ampersand ( & ), parentheses ( () ), square braces ( [] ), curly braces ( {} ), and space. A tag key and value must begin and end with an alphanumeric character. Tag value must be one of the pre-defined values for the key. You can configure a maximum of 50 tags. There should be no tag key defined with the same value as any of the existing tag keys that will be inherited from the parent resource. 4.6.2. Querying user-defined labels and tags for GCP After creating the OpenShift Container Platform cluster, you can access the list of the labels and tags defined for the GCP resources in the infrastructures.config.openshift.io/cluster object as shown in the following sample infrastructure.yaml file. Sample infrastructure.yaml file apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP 1 The cluster ID that is generated during cluster installation. Along with the user-defined labels, resources have a label defined by the OpenShift Container Platform. The format of the OpenShift Container Platform labels is kubernetes-io-cluster-<cluster_id>:owned . 4.7. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.14. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.14 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.14 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.14 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.8. Alternatives to storing administrator-level secrets in the kube-system project By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual , you must use one of the following alternatives: To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials . To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring a GCP cluster to use short-term credentials . 4.8.1. Manually creating long-term credentials The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. 
Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. 4.8.2. Configuring a GCP cluster to use short-term credentials To install a cluster that is configured to use GCP Workload Identity, you must configure the CCO utility and create the required GCP resources for your cluster. 4.8.2.1. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). 
Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 4.8.2.2. Creating GCP resources with the Cloud Credential Operator utility You can use the ccoctl gcp create-all command to automate the creation of GCP resources. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. Prerequisites You must have: Extracted and prepared the ccoctl binary. Procedure Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. Note This command might take a few moments to run. Use the ccoctl tool to process all CredentialsRequest objects by running the following command: USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 
4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. Verification To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory: USD ls <path_to_ccoctl_output_dir>/manifests Example output cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml You can verify that the IAM service accounts are created by querying GCP. For more information, refer to GCP documentation on listing IAM service accounts. 4.8.2.3. Incorporating the Cloud Credential Operator utility manifests To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility ( ccoctl ) created to the correct directories for the installation program. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have configured the Cloud Credential Operator utility ( ccoctl ). You have created the cloud provider resources that are required for your cluster with the ccoctl utility. Procedure If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual , modify the value as shown: Sample configuration file snippet apiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command: USD cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ Copy the tls directory that contains the private key to the installation directory: USD cp -a /<path_to_ccoctl_output_dir>/tls . 4.9. Using the GCP Marketplace offering Using the GCP Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on pay-per-use basis (hourly, per core) through GCP, while still being supported directly by Red Hat. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to deploy compute machines. To deploy an OpenShift Container Platform cluster using an RHCOS image from the GCP Marketplace, override the default behavior by modifying the install-config.yaml file to reference the location of GCP Marketplace offer. Prerequisites You have an existing install-config.yaml file. 
Procedure Edit the compute.platform.gcp.osImage parameters to specify the location of the GCP Marketplace image: Set the project parameter to redhat-marketplace-public Set the name parameter to one of the following offers: OpenShift Container Platform redhat-coreos-ocp-413-x86-64-202305021736 OpenShift Platform Plus redhat-coreos-opp-413-x86-64-202305021736 OpenShift Kubernetes Engine redhat-coreos-oke-413-x86-64-202305021736 Save the file and reference it when deploying the cluster. Sample install-config.yaml file that specifies a GCP Marketplace image for compute machines apiVersion: v1 baseDomain: example.com controlPlane: # ... compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736 # ... 4.10. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Remove any existing GCP credentials that do not use the service account key for the GCP account that you configured for your cluster and that are stored in the following locations: The GOOGLE_CREDENTIALS , GOOGLE_CLOUD_KEYFILE_JSON , or GCLOUD_KEYFILE_JSON environment variables The ~/.gcp/osServiceAccount.json file The gcloud cli default credentials Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Optional: You can reduce the number of permissions for the service account that you used to install the cluster. If you assigned the Owner role to your service account, you can remove that role and replace it with the Viewer role. If you included the Service Account Key Admin role, you can remove it. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.11. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 4.12. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.14, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 4.13. Next steps Customize your cluster . If necessary, you can opt out of remote health reporting .
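To complement the login verification in section 4.11, a few read-only commands give a quick view of overall cluster health after installation. This is a minimal sketch added for convenience, not part of the documented procedure:
oc get clusterversion      # shows the installed OpenShift Container Platform version and update status
oc get nodes               # all control plane and compute nodes should report Ready
oc get clusteroperators    # cluster Operators should report Available=True and not Degraded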
|
[
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"tar -xvf openshift-install-linux.tar.gz",
"./openshift-install create install-config --dir <installation_directory> 1",
"rm -rf ~/.powervs",
"compute: - architecture: amd64 hyperthreading: Enabled name: worker platform: gcp: type: custom-6-20480 replicas: 2 controlPlane: architecture: amd64 hyperthreading: Enabled name: master platform: gcp: type: custom-6-20480 replicas: 3",
"controlPlane: platform: gcp: secureBoot: Enabled",
"compute: - platform: gcp: secureBoot: Enabled",
"platform: gcp: defaultMachinePlatform: secureBoot: Enabled",
"controlPlane: platform: gcp: confidentialCompute: Enabled 1 type: n2d-standard-8 2 onHostMaintenance: Terminate 3",
"compute: - platform: gcp: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"platform: gcp: defaultMachinePlatform: confidentialCompute: Enabled type: n2d-standard-8 onHostMaintenance: Terminate",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-ssd diskSizeGB: 1024 encryptionKey: 6 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 7 - control-plane-tag1 - control-plane-tag2 osImage: 8 project: example-project-name name: example-image-name replicas: 3 compute: 9 10 - hyperthreading: Enabled 11 name: worker platform: gcp: type: n2-standard-4 zones: - us-central1-a - us-central1-c osDisk: diskType: pd-standard diskSizeGB: 128 encryptionKey: 12 kmsKey: name: worker-key keyRing: test-machine-keys location: global projectID: project-id tags: 13 - compute-tag1 - compute-tag2 osImage: 14 project: example-project-name name: example-image-name replicas: 3 metadata: name: test-cluster 15 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 16 serviceNetwork: - 172.30.0.0/16 platform: gcp: projectID: openshift-production 17 region: us-central1 18 defaultMachinePlatform: tags: 19 - global-tag1 - global-tag2 osImage: 20 project: example-project-name name: example-image-name pullSecret: '{\"auths\": ...}' 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"apiVersion: v1 featureSet: TechPreviewNoUpgrade platform: gcp: userLabels: 1 - key: <label_key> 2 value: <label_value> 3 userTags: 4 - parentID: <OrganizationID/ProjectID> 5 key: <tag_key_short_name> value: <tag_value_short_name>",
"apiVersion: config.openshift.io/v1 kind: Infrastructure metadata: name: cluster spec: platformSpec: type: GCP status: infrastructureName: <cluster_id> 1 platform: GCP platformStatus: gcp: resourceLabels: - key: <label_key> value: <label_value> resourceTags: - key: <tag_key_short_name> parentID: <OrganizationID/ProjectID> value: <tag_value_short_name> type: GCP",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/storage.admin - roles/iam.serviceAccountUser skipServiceCheck: true",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)",
"oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret",
"chmod 775 ccoctl",
"./ccoctl.rhel9",
"OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> 4",
"ls <path_to_ccoctl_output_dir>/manifests",
"cluster-authentication-02-config.yaml openshift-cloud-controller-manager-gcp-ccm-cloud-credentials-credentials.yaml openshift-cloud-credential-operator-cloud-credential-operator-gcp-ro-creds-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capg-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-gcp-pd-cloud-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-gcp-cloud-credentials-credentials.yaml",
"apiVersion: v1 baseDomain: example.com credentialsMode: Manual",
"openshift-install create manifests --dir <installation_directory>",
"cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/",
"cp -a /<path_to_ccoctl_output_dir>/tls .",
"apiVersion: v1 baseDomain: example.com controlPlane: compute: platform: gcp: osImage: project: redhat-marketplace-public name: redhat-coreos-ocp-413-x86-64-202305021736",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_gcp/installing-gcp-customizations
|
5.279. Red Hat Enterprise Linux Release Notes
|
5.279. Red Hat Enterprise Linux Release Notes 5.279.1. RHEA-2012:0979 - Red Hat Enterprise Linux 6.3 Release Notes Updated packages containing the Release Notes for Red Hat Enterprise Linux 6.3 are now available. Red Hat Enterprise Linux minor releases are an aggregation of individual enhancement, security and bug fix errata. The Red Hat Enterprise Linux 6.3 Release Notes documents the major changes made to the Red Hat Enterprise Linux 6 operating system and its accompanying applications for this minor release. Detailed notes on all changes in this minor release are available in the Technical Notes. Refer to the Online Release Notes for the most up-to-date version of the Red Hat Enterprise Linux 6.3 Release Notes: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/6.3_Release_Notes/index.html
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/release-notes
|
Node APIs
|
Node APIs OpenShift Container Platform 4.16 Reference guide for node APIs Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/node_apis/index
|
Chapter 1. Operators overview
|
Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. With Operators, you can create applications to monitor the running services in the cluster. Operators are designed specifically for your applications. Operators implement and automate the common Day 1 operations such as installation and configuration as well as Day 2 operations such as autoscaling up and down and creating backups. All these activities are in a piece of software running inside your cluster. 1.1. For developers As a developer, you can perform the following Operator tasks: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , Java-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Install and subscribe an Operator to your namespace . Create an application from an installed Operator through the web console . Additional resources Machine deletion lifecycle hook examples for Operator developers 1.2. For administrators As a cluster administrator, you can perform the following Operator tasks: Manage custom catalogs Allow non-cluster administrators to install Operators Install an Operator from OperatorHub View Operator status . Manage Operator conditions Upgrade installed Operators Delete installed Operators Configure proxy support Use Operator Lifecycle Manager on restricted networks To know all about the cluster Operators that Red Hat provides, see Cluster Operators reference . 1.3. Next steps To understand more about Operators, see What are Operators?
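As a concrete illustration of the administrator tasks above, installing an add-on Operator through OLM typically comes down to creating a Subscription object. The following manifest is a sketch only; the Operator name, channel, and namespace are placeholders that you would replace with values from the Operator's catalog entry before applying it with oc apply -f:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <operator_name>          # placeholder: the package name shown in OperatorHub
  namespace: openshift-operators
spec:
  channel: <channel>             # placeholder: an update channel that the package provides
  name: <operator_name>
  source: redhat-operators
  sourceNamespace: openshift-marketplace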
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/operators/operators-overview
|
Chapter 8. Virtual Machine Snapshots
|
Chapter 8. Virtual Machine Snapshots 8.1. Snapshots Snapshots are a storage function that allows an administrator to create a restore point of a virtual machine's operating system, applications, and data at a certain point in time. Snapshots save the data currently present in a virtual machine hard disk image as a COW volume and allow for recovery of the data as it existed at the time the snapshot was taken. A snapshot causes a new COW layer to be created over the current layer. All write actions performed after a snapshot is taken are written to the new COW layer. It is important to understand that a virtual machine hard disk image is a chain of one or more volumes. From the perspective of a virtual machine, these volumes appear as a single disk image. A virtual machine is oblivious to the fact that its disk is composed of multiple volumes. The terms COW volume and COW layer are used interchangeably; however, layer more clearly conveys the temporal nature of snapshots. Each snapshot is created to allow an administrator to discard unsatisfactory changes made to data after the snapshot is taken. Snapshots provide similar functionality to the Undo function present in many word processors. Note Snapshots of virtual machine hard disks marked shareable and those that are based on Direct LUN connections are not supported, live or otherwise. The three primary snapshot operations are: Creation, which involves the first snapshot created for a virtual machine. Previews, which involve previewing a snapshot to determine whether or not to restore the system data to the point in time that the snapshot was taken. Deletion, which involves deleting a restoration point that is no longer required. For task-based information about snapshot operations, see Snapshots in the Red Hat Virtualization Virtual Machine Management Guide .
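For orientation only, snapshot creation can also be driven through the REST API. The following sketch creates a snapshot of a virtual machine; the Manager host name, credentials, and VM ID are placeholders, and the Virtual Machine Management Guide remains the authoritative reference for snapshot tasks:

$ curl -k -u admin@internal:<password> \
    -H "Content-Type: application/xml" \
    -d "<snapshot><description>Before maintenance</description></snapshot>" \
    https://<manager_fqdn>/ovirt-engine/api/vms/<vm_id>/snapshots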
| null |
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/technical_reference/chap-virtual_machine_snapshots
|
Chapter 1. Configuring persistent storage
|
Chapter 1. Configuring persistent storage When you deploy Red Hat OpenStack Services on OpenShift (RHOSO), you can configure your deployment to use Red Hat Ceph Storage as the back end for storage and you can configure RHOSO storage services for block, image, object, and file storage. You can integrate an external Red Hat Ceph Storage cluster with the Compute service (nova) and a combination of one or more RHOSO storage services, and you can create a hyperconverged infrastructure (HCI) environment. RHOSO supports Red Hat Ceph Storage 7.1 or later. For information about creating a hyperconverged infrastructure (HCI) environment, see Deploying a hyperconverged infrastructure environment . Note Red Hat OpenShift Data Foundation (ODF) can be used in external mode to integrate with Red Hat Ceph Storage. The use of ODF in internal mode is not supported. For more information on deploying ODF in external mode, see Deploying OpenShift Data Foundation in external mode . RHOSO recognizes two types of storage - ephemeral and persistent: Ephemeral storage is associated with a specific Compute instance. When that instance is terminated, so is the associated ephemeral storage. This type of storage is useful for runtime requirements, such as storing the operating system of an instance. Persistent storage is designed to survive (persist) independent of any running instance. This storage is used for any data that needs to be reused, either by different instances or beyond the life of a specific instance. RHOSO storage services correspond with the following persistent storage types: Block Storage service (cinder): Volumes Image service (glance): Images Object Storage service (swift): Objects Shared File Systems service (manila): Shares All persistent storage services store data in a storage back end. Red Hat Ceph Storage can serve as a back end for all four services, and the features and functionality of OpenStack services are optimized when you use Red Hat Ceph Storage. Storage solutions RHOSO supports the following storage solutions: Configure the Block Storage service with a Ceph RBD back end, iSCSI, FC, or NVMe-TCP storage protocols, or a generic NFS back end. Configure the Image service with a Ceph RBD, Block Storage, Object Storage, or NFS back end. Configure the Object Storage service to use PersistentVolumes (PVs) on OpenShift nodes or disks on external data plane nodes. Configure the Shared File Systems service with a native CephFS, Ceph-NFS, or alternative back end, such as NetApp or Pure Storage. For information about planning the storage solution and related requirements for your RHOSO deployment, for example, networking and security, see Planning storage and shared file systems in Planning your deployment . To promote the use of best practices, Red Hat has a certification process for OpenStack back ends. For improved supportability and interoperability, ensure that your storage back end is certified for RHOSO. You can check certification status in the Red Hat Ecosystem Catalog . Ceph RBD is certified as a back end in all RHOSO releases.
| null |
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_persistent_storage/assembly_introduction-to-configuring-storage_introduction
|
B.61. openssh
|
B.61. openssh B.61.1. RHBA-2010:0943 - openssh bug fix update Updated openssh packages that fix two bugs are now available for Red Hat Enterprise Linux 6. OpenSSH is OpenBSD's SSH (Secure Shell) protocol implementation. These packages include the core files necessary for both the OpenSSH client and server. Bug Fixes BZ# 651820 When the ~/.bashrc startup file contained a command that produced an output to standard error (STDERR), the sftp utility was unable to log in to that account. This bug has been fixed, and the output to STDERR no longer prevents sftp from establishing the connection. BZ# 655043 Prior to this update, the authentication based on a GSS key exchange did not work, rendering users unable to authenticate using this method. With this update, the underlying source code has been modified to target this issue, and the GSSKEX-based authentication now works as expected. All OpenSSH users are advised to upgrade to these updated packages, which resolve these issues.
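For example, on a registered Red Hat Enterprise Linux 6 system you might apply the update with yum; this is a sketch only, and the exact package set depends on what is installed and on your subscribed channels:

# yum update openssh openssh-clients openssh-server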
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/openssh
|
Providing feedback on Red Hat documentation
|
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html/deploying_openshift_data_foundation_in_external_mode/providing-feedback-on-red-hat-documentation_rhodf
|
Preface
|
Preface As a developer, you can use Red Hat Developer Hub to experience a streamlined development environment. Red Hat Developer Hub is driven by a centralized software catalog, providing efficiency to your microservices and infrastructure. It enables your product team to deliver quality code without any compromises.
| null |
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/getting_started_with_red_hat_developer_hub/pr01
|
Chapter 1. Overview
|
Chapter 1. Overview AMQ Spring Boot Starter is an adapter for creating Spring-based applications that use AMQ messaging. It provides a Spring Boot starter module that enables you to build standalone Spring applications. The starter uses the Red Hat build of Apache Qpid JMS client to communicate using the AMQP 1.0 protocol. This release supports jakarta.jms and requires Java version 17 or higher. For javax.jms support, see the 2.x release of AMQ Spring Boot Starter. AMQ Spring Boot Starter is based on the AMQP 1.0 JMS Spring Boot project. 1.1. Key features Quickly build standalone Spring applications with built-in messaging Automatic configuration of JMS resources Configurable pooling of JMS connections and sessions 1.2. Supported standards and protocols Version 3.1 of the Spring Boot API Version 2.0 of the Java Message Service API Version 1.0 of the Advanced Message Queueing Protocol (AMQP) 1.3. Supported configurations Refer to Red Hat AMQ Supported Configurations on the Red Hat Customer Portal for current information regarding AMQ Spring Boot Starter supported configurations. 1.4. Document conventions The sudo command In this document, sudo is used for any command that requires root privileges. Exercise caution when using sudo because any changes can affect the entire system. For more information about sudo , see Using the sudo command . File paths In this document, all file paths are valid for Linux, UNIX, and similar operating systems (for example, /home/andrea ). On Microsoft Windows, you must use the equivalent Windows paths (for example, C:\Users\andrea ). Variable text This document contains code blocks with variables that you must replace with values specific to your environment. Variable text is enclosed in arrow braces and styled as italic monospace. For example, in the following command, replace <project-dir> with the value for your environment: USD cd <project-dir>
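As a small illustration of the built-in messaging support, the starter is normally pointed at a broker through Spring Boot configuration properties. The snippet below is a sketch only: the property names follow the upstream AMQP 1.0 JMS Spring Boot project that this starter is based on, and the broker URL and credentials are placeholders; check the configuration reference for your release before relying on them.

# src/main/resources/application.properties
amqphub.amqp10jms.remote-url=amqp://<broker_host>:5672
amqphub.amqp10jms.username=<username>
amqphub.amqp10jms.password=<password>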
|
[
"cd <project-dir>"
] |
https://docs.redhat.com/en/documentation/amq_spring_boot_starter/3.0/html/using_the_amq_spring_boot_starter/overview
|
Chapter 10. Configuring Persistence
|
Chapter 10. Configuring Persistence 10.1. About Persistence in JBoss EAP 7 Messaging JBoss EAP ships with two persistence options for storing binding data and messages: You can use the default file-based journal , which is highly optimized for messaging use cases and provides great performance. This option is provided by default and is used if you do not do any additional configuration. You can store the data in a JDBC data store , which uses JDBC to connect to a database of your choice. This option requires configuration of the datasources and messaging-activemq subsystems in the server configuration file. 10.2. Messaging Journal Persistence Using the Default File Journal JBoss EAP messaging ships with a high-performance, file-based journal that is optimized for messaging. The JBoss EAP messaging journal has a configurable file size and is append-only, which improves performance by enabling single write operations. It consists of a set of files on disk, which are initially pre-created to a fixed size and filled with padding. As server operations, such as add message, delete message, and update message, are performed, records of the operations are appended to the journal until the journal file is full, at which point the next journal file is used. A sophisticated garbage collection algorithm determines whether journal files can be reclaimed and re-used when all of their data has been deleted. A compaction algorithm removes dead space from journal files and compresses the data. The journal also fully supports both local and XA transactions. 10.2.1. Messaging Journal File System Implementations The majority of the journal is written in Java, but interaction with the file system has been abstracted to allow different pluggable implementations. The two implementations shipped with JBoss EAP messaging are: Java New I/O (NIO) This implementation uses standard Java NIO to interface with the file system. It provides extremely good performance and runs on any platform with a Java 6 or later runtime. Note that JBoss EAP 7 requires Java 8. Using NIO is supported on any operating system that JBoss EAP supports. Linux Asynchronous IO (ASYNCIO) This implementation uses a native code wrapper to talk to the Linux asynchronous IO library (ASYNCIO). This implementation removes the need for explicit synchronization. ASYNCIO typically provides better performance than Java NIO. To check which journal type is in use, issue the following CLI request: The system returns one of the following values: Table 10.1. Journal Type Return Values Return Value Description NONE Persistence is disabled NIO Java NIO is in use ASYNCIO AsyncIO with libaio is in use DATABASE JDBC persistence is in use The following file systems have been tested and are supported only on Red Hat Enterprise Linux 6, Red Hat Enterprise Linux 7, and Red Hat Enterprise Linux 8 when using the libaio natives. They are not tested and are not supported on other operating systems. EXT4 XFS NFSv4 GFS2 The following table lists the HA shared store file systems that have been tested, both with and without the libaio natives, and whether they are supported. Operating System File System Supported Using libaio Natives? (journal-type="ASYNCIO") Supported Without Using libaio Natives? (journal-type="NIO") Red Hat Enterprise Linux 6 NFSv4 Yes Yes Red Hat Enterprise Linux 7 and later NFSv4 Yes Yes Red Hat Enterprise Linux 6 GFS2 Yes No Red Hat Enterprise Linux 7 and later GFS2 Yes No 10.2.2.
Standard Messaging Journal File System Instances The standard JBoss EAP messaging core server uses the following journal instances: Bindings Journal This journal is used to store bindings related data, including the set of queues that are deployed on the server and their attributes. It also stores data such as id sequence counters. The bindings journal is always a NIO journal as it is typically low throughput compared to the message journal. The files on this journal are prefixed as activemq-bindings. Each file has a bindings extension. File size is 1048576, and it is located at the bindings folder. Jakarta Messaging Journal This journal instance stores all Jakarta Messaging related data, such as any Jakarta Messaging queues,topics, connection factories and any JNDI bindings for these resources. Any Jakarta Messaging Resource created via the management API will be persisted to this journal. Any resource configured via configuration files will not. The Jakarta Messaging Journal will only be created if Jakarta Messaging is being used. The files on this journal are prefixed as activemq-jms. Each file has a jms extension. File size is 1048576, and it is located at the bindings folder. Message Journal This journal instance stores all message related data, including the message themselves and also duplicate-id caches. By default JBoss EAP messaging will try to use an ASYNCIO journal. If ASYNCIO is not available, for example the platform is not Linux with the correct kernel version or ASYNCIO has not been installed then it will automatically fall back to using Java NIO which is available on any Java platform. The files on this journal are prefixed as activemq-data. Each file has an amq extension. File size is by default 10485760 (configurable), and it is located at the journal folder. For large messages, JBoss EAP messaging persists them outside the message journal. This is discussed in the section on Large Messages . JBoss EAP messaging can also be configured to page messages to disk in low memory situations. This is discussed in the Paging section . If no persistence is required at all, JBoss EAP messaging can also be configured not to persist any data at all to storage as discussed in the Configuring JBoss EAP Messaging for Zero Persistence section. 10.2.3. Configuring the Bindings and Jakarta Messaging Journals Because the bindings journal shares its configuration with the Jakarta Messaging journal, you can read the current configuration for both by using the single management CLI command below. The output is also included to highlight default configuration. Note that by default the path to the journal is activemq/bindings . You can change the location for path by using the following management CLI command. Also note the relative-to attribute in the output above. When relative-to is used, the value of the path attribute is treated as relative to the file path specified by relative-to . By default this value is the JBoss EAP property jboss.server.data.dir . For standalone servers, jboss.server.data.dir is located at EAP_HOME /standalone/data . For domains, each server will have its own serverX/data/activemq directory located under EAP_HOME /domain/servers . You can change the value of relative-to using the following management CLI command. By default, JBoss EAP is configured to automatically create the bindings directory if it does not exist. Use the following management CLI command to toggle this behavior. Setting value to true will enable automatic directory creation. 
Setting value to false will disable it. 10.2.4. Configuring the Message Journal Location You can read the location information for the message journal by using the management CLI command below. The output is also included to highlight default configuration. Note that by default the path to the journal is activemq/journal . You can change the location for path by using the following management CLI command. Note For the best performance, Red Hat recommends that the journal be located on its own physical volume in order to minimize disk head movement. If the journal is on a volume which is shared with other processes which might be writing other files, such as a bindings journal, database, or transaction coordinator, then the disk head may well be moving rapidly between these files as it writes them, thus drastically reducing performance. Also note the relative-to attribute in the output above. When relative-to is used, the value of the path attribute is treated as relative to the file path specified by relative-to . By default this value is the JBoss EAP property jboss.server.data.dir . For standalone servers, jboss.server.data.dir is located at EAP_HOME /standalone/data . For domains, each server will have its own serverX/data/activemq directory located under EAP_HOME /domain/servers . You can change the value of relative-to using the following management CLI command. By default, JBoss EAP is configured to automatically create the journal directory if it does not exist. Use the following management CLI command to toggle this behavior. Setting value to true will enable automatic directory creation. Setting value to false will disable it. 10.2.5. Configuring Message Journal Attributes The attributes listed below are all child properties of the messaging server. Therefore, the command syntax for getting and setting their values using the management CLI is the same for each. To read the current value of a given attribute, the syntax is as follows: The syntax for writing an attribute's value follows a corresponding pattern. create-journal-dir If this is set to true , the journal directory will be automatically created at the location specified in journal-directory if it does not already exist. The default value is true . journal-file-open-timeout This attribute modifies the timeout value for opening a journal file. The default value is 5 seconds. journal-buffer-timeout Instead of flushing on every write that requires a flush, we maintain an internal buffer, and flush the entire buffer either when it is full, or when a timeout expires, whichever is sooner. This is used for both NIO and ASYNCIO and allows the system to scale better with many concurrent writes that require flushing. This parameter controls the timeout at which the buffer will be flushed if it has not filled already. ASYNCIO can typically cope with a higher flush rate than NIO, so the system maintains different defaults for both NIO and ASYNCIO. The default for NIO is 3333333 nanoseconds, or 300 times per second. The default for ASYNCIO is 500000 nanoseconds, or 2000 times per second. Note By increasing the timeout, you may be able to increase system throughput at the expense of latency, the default parameters are chosen to give a reasonable balance between throughput and latency. journal-buffer-size The size, in bytes, of the timed buffer on ASYNCIO. Both journal-buffer-size and journal-file-size must be set larger than min-large-message-size . Otherwise, messages will not be written to the journal. 
See Configuring Large Messages for more information. journal-compact-min-files The minimal number of files before we can consider compacting the journal. The compacting algorithm won't start until you have at least journal-compact-min-files . Setting this to 0 will disable the feature to compact completely. This could be dangerous though as the journal could grow indefinitely. Use it wisely! The default for this parameter is 10 journal-compact-percentage The threshold to start compacting. When less than this percentage is considered live data, we start compacting. Note also that compacting will not kick in until you have at least journal-compact-min-files data files on the journal The default for this parameter is 30 . journal-file-size The size of each journal file, in bytes. The default value for this is 10485760 bytes, or 10MB. Both journal-file-size and journal-buffer-size must be set larger than min-large-message-size . Otherwise, messages will not be written to the journal. See Configuring Large Messages for more information. journal-max-io Write requests are queued up before being submitted to the system for execution. This parameter controls the maximum number of write requests that can be in the IO queue at any one time. If the queue becomes full then writes will block until space is freed up. The system maintains different defaults for this parameter depending on whether it's NIO or ASYNCIO. The default for NIO is 1 , and the default for ASYNCIO is 500 . There is a limit and the total max ASYNCIO cannot be higher than what is configured at the OS level, found at /proc/sys/fs/aio-max-nr, usually 65536 . journal-min-files The minimum number of files the journal will maintain. When JBoss EAP starts and there is no initial message data, JBoss EAP will pre-create journal-min-files number of files. The default is 2 . Creating journal files and filling them with padding is a fairly expensive operation and we want to minimize doing this at run-time as files get filled. By pre-creating files, as one is filled the journal can immediately resume with the one without pausing to create it. Depending on how much data you expect your queues to contain at steady state you should tune this number of files to match that total amount of data. journal-pool-files The number of journal files that can be reused. ActiveMQ will create as many files as needed however when reclaiming files it will shrink back to the value. The default is -1 , which means no limit. journal-sync-transactional If this is set to true then JBoss EAP will make sure all transaction data is flushed to disk on transaction boundaries, such as a commit, prepare, or rollback. The default value is true . journal-sync-non-transactional If this is set to true then JBoss EAP will make sure non transactional message data, such as sends and acknowledgements, are flushed to disk each time. The default value is true . journal-type Valid values are NIO or ASYNCIO . Choosing NIO tells JBoss EAP to use a Java NIO journal. ASYNCIO tells it to use a Linux asynchronous IO journal. If you choose ASYNCIO but are not running Linux, or you do not have libaio installed, JBoss EAP will use a Java NIO journal. 10.2.6. Note on Disabling Disk Write Cache This happens irrespective of whether you have executed a fsync() from the operating system or correctly synced data from inside a Java program! By default many systems ship with disk write cache enabled. 
This means that even after syncing from the operating system there is no guarantee the data has actually made it to disk, so if a failure occurs, critical data can be lost. Some more expensive disks have non volatile or battery backed write caches which will not necessarily lose data on event of failure, but you need to test them! If your disk does not have an expensive non volatile or battery backed cache and it's not part of some kind of redundant array, for example RAID, and you value your data integrity you need to make sure disk write cache is disabled. Be aware that disabling disk write cache can give you a nasty shock performance wise. If you've been used to using disks with write cache enabled in their default setting, unaware that your data integrity could be compromised, then disabling it will give you an idea of how fast your disk can perform when acting really reliably. On Linux you can inspect or change your disk's write cache settings using the tools hdparm for IDE disks, or sdparm or sginfo for SDSI/SATA disks. On Windows, you can check and change the setting by right clicking on the disk and then clicking properties . 10.2.7. Installing libaio The Java NIO journal is highly performant, but if you are running JBoss EAP messaging using Linux Kernel 2.6 or later, Red Hat highly recommends that you use the ASYNCIO journal for the very best persistence performance. Note JBoss EAP supports ASYNCIO only when installed on versions 6, 7 or 8 of Red Hat Enterprise Linux and only when using the ext4, xfs, gfs2 or nfs4 file systems. It is not possible to use the ASYNCIO journal under other operating systems or earlier versions of the Linux kernel. You will need libaio installed to use the ASYNCIO journal. To install, use the following command: For Red Hat Enterprise Linux 6 and 7: For Red Hat Enterprise Linux 8: Warning Do not place your messaging journals on a tmpfs file system, which is used for the /tmp directory for example. JBoss EAP will fail to start if the ASYNCIO journal is using tmpfs. 10.2.8. Configuring the NFS Shared Store for Messaging When using dedicated, shared store, high availability for data replication, you must configure both the live server and the backup server to use a shared directory on the NFS client. If you configure one server to use a shared directory on the NFS server and the other server to use a shared directory on the NFS client, the backup server cannot recognize when the live server starts or is running. So to work properly, both servers must specify a shared directory on the NFS client. You must also configure the following options for the NFS client mount: sync : This option specifies that all changes are immediately flushed to disk. intr : This option allows NFS requests to be interrupted if the server goes down or cannot be reached. noac : This option disables attribute caching and is needed to achieve attribute cache coherence among multiple clients. soft : This option specifies that if the host serving the exported file system is unavailable, the error should be reported rather than waiting for the server to come back online. lookupcache=none : This option disables lookup caching. timeo= n : The time in deciseconds (tenths of a second) the NFS client waits for a response before it retries an NFS request. For NFS over TCP, the default timeo value is 600 (60 seconds). For NFS over UDP, the client uses an adaptive algorithm to estimate an appropriate timeout value for frequently used request types, such as read and write requests. 
retrans= n : The number of times the NFS client retries a request before it attempts further recovery action. If the retrans option is not specified, the NFS client tries each request three times. Important It is important to use reasonable values when you configure the timeo and retrans options. A default timeo wait time of 600 deciseconds (60 seconds) combined with a retrans value of 5 retries can result in a five minute wait for ActiveMQ Artemis to detect an NFS disconnection. See the Shared Store section in this guide for more information about how to use a shared file system for high availability. 10.3. Messaging journal persistence using a JDBC database To use JDBC to persist messages and binding data to a database instead of using the default file-based journal, you must configure JBoss EAP 7 messaging. To do this, you must first configure the datasource element in the datasources subsystem, and then define a journal-datasource attribute on the server element in the messaging-activemq subsystem to use that datasource. The presence of the journal-datasource attribute notifies the messaging subsystem to persist the journal entries to the database instead of the file-based journal. The journal-database attribute on the server resource in the messaging-activemq subsystem defines the SQL dialect that is used to communicate with the database. This attribute is configured automatically using the datasource metadata. When persisting messages to a file-based journal, the large message size is limited only by the size of the disk. However, when persisting messages to a database, the large message size is limited to the maximum size of the BLOB data type for that database. Important JBoss EAP 7.4 currently supports only the Oracle 12c and IBM DB2 Enterprise databases. 10.3.1. Considerations to configure a database persistent store For improved reliability, JBoss EAP makes messaging calls through a connection pool, which provides a set of open connections to a specified database that can be shared among multiple applications. This means if JBoss EAP drops a connection, another connection in the pool replaces that failed connection to avoid failure. Note versions of JBoss EAP support only one connection from a pool. When you configure a database persistent store or pool in the datasources subsystem, consider the following points: Set the value of the min-pool-size attribute to at least 4 to have a connection dedicated to each of the following usage: One for the binding One for the messages journal One for the lease lock, if using High Availability (HA) One for the node manager shared state, if using HA Set the value of the max-pool-size attribute based on the number of concurrent threads that perform paging or large message streaming operations. No rules are defined for configuring the max-pool-size attribute because the relation between the number of threads and the number of connections is not one-to-one. The number of connections depends on the number of threads that process paging and large messages operations and the attribute blocking-timeout-wait-millis that defines the time involved in waiting to get a connection. New large messages or paging operations occur in a dedicated thread and need a connection. Those dedicated threads are enqueued until a connection is ready or the time to obtain the connection runs out, which results in a failure. You can customize the pool configuration according to your needs and test the configured pool in your environment. 10.3.2. 
Configuring a messaging journal JDBC persistence store Follow these steps to configure JBoss EAP 7 messaging to use JDBC to persist messages and binding data to a database: Configure a datasource in the datasources subsystem for use by the messaging-activemq subsystem. For information about how to create and configure a datasource, see Datasource Management in the JBoss EAP Configuration Guide . Configure the messaging-activemq subsystem to use the new datasource. This creates the following configuration in the messaging-activemq subsystem of the server configuration file: <server name="default"> <journal datasource="MessagingOracle12cDS"/> ... </server> JBoss EAP messaging is now configured to use the database to store messaging data. 10.3.3. Configuring messaging journal table names JBoss EAP 7 messaging uses a separate JDBC table to store binding information, messages, large messages, and paging information. The names of these tables can be configured using the journal-bindings-table , journal-jms-bindings-table , journal-messages-table , journal-large-messages-table , and journal-page-store-table attributes on the server resource in the messaging-activemq subsystem of the server configuration file. The following is a list of table name restrictions: JBoss EAP 7 messaging generates identifiers for paging tables using pattern TABLE_NAME + GENERATED_ID , where the GENERATED_ID can be up to 20 characters long. Because the maximum table name length in Oracle Database 12c is 30 characters, you must limit the table name to 10 characters. Otherwise, you might see the error ORA-00972: identifier is too long and paging will no longer work. Table names that do not follow Schema Object Naming Rules for Oracle Database 12c must be enclosed within double quotes. Quoted identifiers can begin with any character and can contain any characters and punctuation marks as well as spaces. However, neither quoted nor nonquoted identifiers can contain double quotation marks or the null character (\0). It is important to note that quoted identifiers are case sensitive. If multiple JBoss EAP server instances use the same database to persist messages and binding data, the table names must be unique for each server instance. Multiple JBoss EAP servers cannot access the same tables. The following is an example of the management CLI command that configures the journal-page-store-table name using a quoted identifier: This creates the following configuration in the messaging-activemq subsystem of the server configuration file: <server name="default"> <journal datasource="MessagingOracle12cDS" journal-page-store-table=""PAGED_DATA""/> ... </server> 10.3.4. Configuring messaging journals in a managed domain As mentioned in Configuring messaging journal table names , multiple JBoss EAP servers cannot access the same database tables when using JDBC to persist messages and binding data to a database. In a managed domain, all JBoss EAP server instances in a server group share the same profile configuration, so you must use expressions to configure the messaging journal names or datasources. If all servers are configured to use the same database to store messaging data, the table names must be unique for each server instance. The following is an example of a management CLI command that creates a unique journal-page-store-table table name for each server in a server group by using an expression that includes the unique node identifier in the name. 
If each server instance accesses a different database, you can use expressions to allow the messaging configuration for each server to connect to a different datasource. The following management CLI command uses the DB_CONNECTION_URL environment variable in the connection-url to connect to a different datasource. 10.3.5. Configuring the messaging journal network timeout You can configure the maximum amount of time, in milliseconds, that the JDBC connection will wait for the database to reply a request. This is useful in the event that the network goes down or a connection between JBoss EAP messaging and the database is closed for any reason. When this occurs, clients are blocked until the timeout occurs. You configure the timeout by updating the journal-jdbc-network-timeout attribute. The default value is 20000 milliseconds, or 20 seconds. The following is an example of the management CLI command that sets the journal-jdbc-network-timeout attribute value to 10000 milliseconds, or 10 seconds: 10.3.6. Configuring HA for Messaging JDBC Persistence Store The JBoss EAP messaging-activemq subsystem activates the JDBC HA shared store functionality when the broker is configured with a database store type. The broker then uses a shared database table to ensure that the live and backup servers coordinate actions over a shared JDBC journal store. You can configure HA for JDBC persistence store using the following attributes: journal-node-manager-store-table : Name of the JDBC database table to store the node manager. journal-jdbc-lock-expiration : The time a JDBC lock is considered valid without keeping it alive. You specify this attribute value in seconds. The default value is 20 seconds. journal-jdbc-lock-renew-period : The period of the keep alive service of a JDBC lock. You specify this attribute value in seconds. The default value is 2 seconds. The default values are taken into account based on the value of the server's ha-policy and journal-datasource attributes. For backward compatibility, you can also specify their values using the respective Artemis-specific system properties: brokerconfig.storeConfiguration.nodeManagerStoreTableName brokerconfig.storeConfiguration.jdbcLockExpirationMillis brokerconfig.storeConfiguration.jdbcLockRenewPeriodMillis When configured, these Artemis-specific system properties have precedence over the corresponding attribute's default value. 10.4. Managing Messaging Journal Prepared Transactions You can manage messaging journal prepared transactions using the following management CLI commands. Commit a prepared transaction: Roll back a prepared transaction: Show the details of all prepared transactions: Note You can also show the prepared transaction details in HTML format using the list-prepared-transaction-details-as-html operation, or in JSON format using the list-prepared-transaction-details-as-json operation. 10.5. Configuring JBoss EAP Messaging for Zero Persistence In some situations, zero persistence is required for a messaging system. Zero persistence means that no bindings data, message data, large message data, duplicate id caches, or paging data should be persisted. To configure the messaging-activemq subsystem to perform zero persistence, set the persistence-enabled parameter to false . Important Be aware that if persistence is disabled, but paging is enabled, page files continue to be stored in the location specified by the paging-directory element. Paging is enabled when the address-full-policy attribute is set to PAGE . 
If full zero persistence is required, be sure to configure the address-full-policy attribute of the address-setting element to use BLOCK , DROP or FAIL . 10.6. Importing and Exporting Journal Data See the JBoss EAP 7 Migration Guide for information on importing and exporting journal data.
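As a concrete illustration of the NFS client options listed in the shared store section earlier in this chapter, a mount command for the shared journal directory might look like the following sketch; the server host, export path, mount point, and the timeo and retrans values are placeholders that you adapt to your environment:

# mount -t nfs4 \
    -o sync,intr,noac,soft,lookupcache=none,timeo=600,retrans=5 \
    <nfs_server>:/export/eap-shared-store /opt/shared-store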
|
[
"/subsystem=messaging-activemq/server=default:read-attribute(name=runtime-journal-type)",
"/subsystem=messaging-activemq/server=default/path=bindings-directory:read-resource { \"outcome\" => \"success\", \"result\" => { \"path\" => \"activemq/bindings\", \"relative-to\" => \"jboss.server.data.dir\" } }",
"/subsystem=messaging-activemq/server=default/path=bindings-directory:write-attribute(name=path,value= PATH_LOCATION )",
"/subsystem=messaging-activemq/server=default/path=bindings-directory:write-attribute(name=relative-to,value= RELATIVE_LOCATION )",
"/subsystem=messaging-activemq/server=default:write-attribute(name=create-bindings-dir,value= TRUE/FALSE )",
"/subsystem=messaging-activemq/server=default/path=journal-directory:read-resource { \"outcome\" => \"success\", \"result\" => { \"path\" => \"activemq/journal\", \"relative-to\" => \"jboss.server.data.dir\" } }",
"/subsystem=messaging-activemq/server=default/path=journal-directory:write-attribute(name=path,value= PATH_LOCATION )",
"/subsystem=messaging-activemq/server=default/path=journal-directory:write-attribute(name=relative-to,value= RELATIVE_LOCATION )",
"/subsystem=messaging-activemq/server=default:write-attribute(name=create-journal-dir,value= TRUE/FALSE )",
"/subsystem=messaging-activemq/server=default:read-attribute(name= ATTRIBUTE_NAME )",
"/subsystem=messaging-activemq/server=default:write-attribute(name= ATTRIBUTE_NAME ,value= NEW_VALUE )",
"install libaio",
"dnf install libaio",
"/subsystem=messaging-activemq/server=default:write-attribute(name=journal-datasource,value=\"MessagingOracle12cDS\")",
"<server name=\"default\"> <journal datasource=\"MessagingOracle12cDS\"/> </server>",
"/subsystem=messaging-activemq/server=default:write-attribute(name=journal-page-store-table,value=\"\\\"PAGE_DATA\\\"\")",
"<server name=\"default\"> <journal datasource=\"MessagingOracle12cDS\" journal-page-store-table=\""PAGED_DATA"\"/> </server>",
"/subsystem=messaging-activemq/server=default:write-attribute(name=journal-page-store-table,value=\"USD{env.NODE_ID}_page_store\")",
"data-source add --name=messaging-journal --jndi-name=java:jboss/datasources/messaging-journal --driver-name=oracle12c --connection-url=USD{env.DB_CONNECTION_URL}",
"/subsystem=messaging-activemq/server=default:write-attribute(name=journal-jdbc-network-timeout,value=10000)",
"/subsystem=messaging-activemq/server=default:commit-prepared-transaction(transaction-as-base-64= XID )",
"/subsystem=messaging-activemq/server=default:rollback-prepared-transaction(transaction-as-base-64= XID )",
"/subsystem=messaging-activemq/server=default:list-prepared-transactions",
"/subsystem=messaging-activemq/server=default:write-attribute(name=persistence-enabled,value=false)"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuring_messaging/configuring_persistence
|
Chapter 2. Distributed tracing architecture
|
Chapter 2. Distributed tracing architecture 2.1. Distributed tracing architecture Every time a user takes an action in an application, a request is executed by the architecture that may require dozens of different services to participate to produce a response. Red Hat OpenShift distributed tracing platform lets you perform distributed tracing, which records the path of a request through various microservices that make up an application. Distributed tracing is a technique that is used to tie the information about different units of work together - usually executed in different processes or hosts - to understand a whole chain of events in a distributed transaction. Developers can visualize call flows in large microservice architectures with distributed tracing. It is valuable for understanding serialization, parallelism, and sources of latency. Red Hat OpenShift distributed tracing platform records the execution of individual requests across the whole stack of microservices, and presents them as traces. A trace is a data/execution path through the system. An end-to-end trace is comprised of one or more spans. A span represents a logical unit of work in Red Hat OpenShift distributed tracing platform that has an operation name, the start time of the operation, and the duration, as well as potentially tags and logs. Spans may be nested and ordered to model causal relationships. 2.1.1. Distributed tracing overview As a service owner, you can use distributed tracing to instrument your services to gather insights into your service architecture. You can use the Red Hat OpenShift distributed tracing platform for monitoring, network profiling, and troubleshooting the interaction between components in modern, cloud-native, microservices-based applications. With the distributed tracing platform, you can perform the following functions: Monitor distributed transactions Optimize performance and latency Perform root cause analysis 2.1.2. Red Hat OpenShift distributed tracing platform features Red Hat OpenShift distributed tracing platform provides the following capabilities: Integration with Kiali - When properly configured, you can view distributed tracing platform data from the Kiali console. High scalability - The distributed tracing platform back end is designed to have no single points of failure and to scale with the business needs. Distributed Context Propagation - Enables you to connect data from different components together to create a complete end-to-end trace. Backwards compatibility with Zipkin - Red Hat OpenShift distributed tracing platform has APIs that enable it to be used as a drop-in replacement for Zipkin, but Red Hat is not supporting Zipkin compatibility in this release. 2.1.3. Red Hat OpenShift distributed tracing platform architecture Red Hat OpenShift distributed tracing platform is made up of several components that work together to collect, store, and display tracing data. Red Hat OpenShift distributed tracing platform (Tempo) - This component is based on the open source Grafana Tempo project . Gateway - The Gateway handles authentication, authorization, and forwarding requests to the Distributor or Query front-end service. Distributor - The Distributor accepts spans in multiple formats including Jaeger, OpenTelemetry, and Zipkin. It routes spans to Ingesters by hashing the traceID and using a distributed consistent hash ring. Ingester - The Ingester batches a trace into blocks, creates bloom filters and indexes, and then flushes it all to the back end. 
Query Frontend - The Query Frontend is responsible for sharding the search space for an incoming query. The search query is then sent to the Queriers. The Query Frontend deployment exposes the Jaeger UI through the Tempo Query sidecar. Querier - The Querier is responsible for finding the requested trace ID in either the Ingesters or the back-end storage. Depending on parameters, it can query the Ingesters and pull Bloom indexes from the back end to search blocks in object storage. Compactor - The Compactors stream blocks to and from the back-end storage to reduce the total number of blocks. Red Hat build of OpenTelemetry - This component is based on the open source OpenTelemetry project . OpenTelemetry Collector - The OpenTelemetry Collector is a vendor-agnostic way to receive, process, and export telemetry data. The OpenTelemetry Collector supports open-source observability data formats, for example, Jaeger and Prometheus, sending to one or more open-source or commercial back-ends. The Collector is the default location instrumentation libraries export their telemetry data. Red Hat OpenShift distributed tracing platform (Jaeger) - This component is based on the open source Jaeger project . Important The Red Hat OpenShift distributed tracing platform (Jaeger) is a deprecated feature. Deprecated functionality is still included in OpenShift Container Platform and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog in a future release. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . Users must migrate to the Tempo Operator and the Red Hat build of OpenTelemetry for distributed tracing collection and storage. For the most recent list of major functionality that has been deprecated or removed within OpenShift Container Platform, refer to the Deprecated and removed features section of the OpenShift Container Platform release notes. Client (Jaeger client, Tracer, Reporter, instrumented application, client libraries)- The distributed tracing platform (Jaeger) clients are language-specific implementations of the OpenTracing API. They might be used to instrument applications for distributed tracing either manually or with a variety of existing open source frameworks, such as Camel (Fuse), Spring Boot (RHOAR), MicroProfile (RHOAR/Thorntail), Wildfly (EAP), and many more, that are already integrated with OpenTracing. Agent (Jaeger agent, Server Queue, Processor Workers) - The distributed tracing platform (Jaeger) agent is a network daemon that listens for spans sent over User Datagram Protocol (UDP), which it batches and sends to the Collector. The agent is meant to be placed on the same host as the instrumented application. This is typically accomplished by having a sidecar in container environments such as Kubernetes. Jaeger Collector (Collector, Queue, Workers) - Similar to the Jaeger agent, the Jaeger Collector receives spans and places them in an internal queue for processing. This allows the Jaeger Collector to return immediately to the client/agent instead of waiting for the span to make its way to the storage. Storage (Data Store) - Collectors require a persistent storage backend. Red Hat OpenShift distributed tracing platform (Jaeger) has a pluggable mechanism for span storage. 
Red Hat OpenShift distributed tracing platform (Jaeger) supports the Elasticsearch storage. Query (Query Service) - Query is a service that retrieves traces from storage. Ingester (Ingester Service) - Red Hat OpenShift distributed tracing platform can use Apache Kafka as a buffer between the Collector and the actual Elasticsearch backing storage. Ingester is a service that reads data from Kafka and writes to the Elasticsearch storage backend. Jaeger Console - With the Red Hat OpenShift distributed tracing platform (Jaeger) user interface, you can visualize your distributed tracing data. On the Search page, you can find traces and explore details of the spans that make up an individual trace. 2.1.4. Additional resources Red Hat build of OpenTelemetry
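To make the ingestion path concrete, the following is a minimal sketch of submitting a single span over OTLP/HTTP with curl. The otel-collector.example.com hostname is a placeholder for an OpenTelemetry Collector (or any endpoint with an OTLP receiver), and the trace ID, span ID, and timestamps are arbitrary example values; only the conventional OTLP/HTTP port 4318 and the /v1/traces path are standard:
USD curl -X POST http://otel-collector.example.com:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {"attributes": [{"key": "service.name", "value": {"stringValue": "example-service"}}]},
      "scopeSpans": [{
        "spans": [{
          "traceId": "5b8efff798038103d269b633813fc60c",
          "spanId": "eee19b7ec3c1b174",
          "name": "example-operation",
          "kind": 1,
          "startTimeUnixNano": "1700000000000000000",
          "endTimeUnixNano": "1700000001000000000"
        }]
      }]
    }]
  }'
If the request is accepted, the span travels through the Distributor and Ingester and eventually becomes searchable through the Query Frontend.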
| null |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/distributed_tracing/distributed-tracing-architecture
|
Chapter 2. Deploying Red Hat build of OpenJDK application in containers
|
Chapter 2. Deploying Red Hat build of OpenJDK application in containers You can deploy Red Hat build of OpenJDK applications in containers and have them run when the container is loaded. Procedure Copy the application JAR file to the /deployments directory in the image. For example, the following shows a brief Dockerfile that adds an application called testubi.jar to the Red Hat build of OpenJDK 17 UBI8 image:
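After you build an image from such a Dockerfile, you can verify that the application starts by running the container locally. The following is a minimal sketch using Podman; the testubi image tag is only an illustrative name, and the JAR placed in /deployments is expected to be launched automatically when the container starts:
USD podman build -t testubi .    # build the image from the Dockerfile in the current directory
USD podman run --rm testubi      # start a container and run the application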
|
[
"FROM registry.access.redhat.com/ubi8/openjdk-17 COPY target/testubi.jar /deployments/testubi.jar"
] |
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/packaging_red_hat_build_of_openjdk_17_applications_in_containers/deploying-openjdk-apps-in-containers
|
Chapter 9. Troubleshooting high availability resources
|
Chapter 9. Troubleshooting high availability resources In case of resource failure, you must investigate the cause and location of the problem, fix the failed resource, and optionally clean up the resource. There are many possible causes of resource failures depending on your deployment, and you must investigate the resource to determine how to fix the problem. For example, you can check the resource constraints to ensure that the resources are not interrupting each other, and that the resources can connect to each other. You can also examine a Controller node that is fenced more often than other Controller nodes to identify possible communication problems. Depending on the location of the resource problem, you choose one of the following options: Controller node problems If health checks to a Controller node are failing, this can indicate a communication problem between Controller nodes. To investigate, log in to the Controller node and check if the services can start correctly. Individual resource problems If most services on a Controller are running correctly, you can run the pcs status command and check the output for information about a specific Pacemaker resource failure or run the systemctl command to investigate a non-Pacemaker resource failure. 9.1. Viewing resource constraints in a high availability cluster Before you investigate resource problems, you can view constraints on how services are launched, including constraints related to where each resource is located, the order in which the resource starts, and whether the resource must be colocated with another resource. Procedure Use one of the following options: To view all resource constraints, log in to any Controller node and run the pcs constraint show command: The following example shows a truncated output from the pcs constraint show command on a Controller node: This output displays the following main constraint types: Location Constraints Lists the locations to which resources can be assigned: The first constraint defines a rule that sets the galera-bundle resource to run on nodes with the galera-role attribute set to true . The second location constraint specifies that the IP resource ip-192.168.24.15 runs only on nodes with the haproxy-role attribute set to true . This means that the cluster associates the IP address with the haproxy service, which is necessary to make the services reachable. The third location constraint shows that the ipmilan resource is disabled on each of the Controller nodes. Ordering Constraints Lists the order in which resources can launch. This example shows a constraint that sets the virtual IP address resources IPaddr2 to start before the HAProxy service. Note Ordering constraints only apply to IP address resources and to HAProxy. Systemd manages all other resources, because services such as Compute are expected to withstand an interruption of a dependent service, such as Galera. Colocation Constraints Lists which resources must be located together. All virtual IP addresses are linked to the haproxy-bundle resource. To view constraints for a specific resource, log in to any Controller node and run the pcs property show command: Example output: In this output, you can verify that the resource constraints are set correctly. For example, the galera-role attribute is true for all Controller nodes, which means that the galera-bundle resource runs only on these nodes. 9.2.
Investigating Pacemaker resource problems To investigate failed resources that Pacemaker manages, log in to the Controller node on which the resource is failing and check the status and log events for the resource. For example, investigate the status and log events for the openstack-cinder-volume resource. Prerequisites A Controller node with Pacemaker services Root user permissions to view log events Procedure Log in to the Controller node on which the resource is failing. Run the pcs status command with the grep option to get the status of the service: View the log events for the resource: Correct the failed resource based on the information from the output and from the logs. Run the pcs resource cleanup command to reset the status and the fail count of the resource. 9.3. Investigating systemd resource problems To investigate failed resources that systemd manages, log in to the Controller node on which the resource is failing and check the status and log events for the resource. For example, investigate the status and log events for the tripleo_nova_conductor resource. Prerequisites A Controller node with systemd services Root user permissions to view log events Procedure Run the systemctl status command to show the resource status and recent log events: View the log events for the resource: Correct the failed resource based on the information from the output and from the logs. Restart the resource and check the status of the service:
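In addition to the container log files, you can also review the systemd journal for a failing systemd-managed resource. The following is a minimal sketch; the tripleo_nova_conductor unit name matches the example above and the time window is arbitrary:
USD sudo journalctl -u tripleo_nova_conductor --since "1 hour ago"    # recent messages for the unit
USD sudo journalctl -u tripleo_nova_conductor -b                      # all messages for the unit since the last boot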
|
[
"sudo pcs constraint show",
"Location Constraints: Resource: galera-bundle Constraint: location-galera-bundle (resource-discovery=exclusive) Rule: score=0 Expression: galera-role eq true [...] Resource: ip-192.168.24.15 Constraint: location-ip-192.168.24.15 (resource-discovery=exclusive) Rule: score=0 Expression: haproxy-role eq true [...] Resource: my-ipmilan-for-controller-0 Disabled on: overcloud-controller-0 (score:-INFINITY) Resource: my-ipmilan-for-controller-1 Disabled on: overcloud-controller-1 (score:-INFINITY) Resource: my-ipmilan-for-controller-2 Disabled on: overcloud-controller-2 (score:-INFINITY) Ordering Constraints: start ip-172.16.0.10 then start haproxy-bundle (kind:Optional) start ip-10.200.0.6 then start haproxy-bundle (kind:Optional) start ip-172.19.0.10 then start haproxy-bundle (kind:Optional) start ip-192.168.1.150 then start haproxy-bundle (kind:Optional) start ip-172.16.0.11 then start haproxy-bundle (kind:Optional) start ip-172.18.0.10 then start haproxy-bundle (kind:Optional) Colocation Constraints: ip-172.16.0.10 with haproxy-bundle (score:INFINITY) ip-172.18.0.10 with haproxy-bundle (score:INFINITY) ip-10.200.0.6 with haproxy-bundle (score:INFINITY) ip-172.19.0.10 with haproxy-bundle (score:INFINITY) ip-172.16.0.11 with haproxy-bundle (score:INFINITY) ip-192.168.1.150 with haproxy-bundle (score:INFINITY)",
"sudo pcs property show",
"Cluster Properties: cluster-infrastructure: corosync cluster-name: tripleo_cluster dc-version: 2.0.1-4.el8-0eb7991564 have-watchdog: false redis_REPL_INFO: overcloud-controller-0 stonith-enabled: false Node Attributes: overcloud-controller-0: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-0 overcloud-controller-1: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-1 overcloud-controller-2: cinder-volume-role=true galera-role=true haproxy-role=true rabbitmq-role=true redis-role=true rmq-node-attr-last-known-rabbitmq=rabbit@overcloud-controller-2",
"sudo pcs status | grep cinder Podman container: openstack-cinder-volume [192.168.24.1:8787/rh-osbs/rhosp161-openstack-cinder-volume:pcmklatest] openstack-cinder-volume-podman-0 (ocf::heartbeat:podman): Started controller-1",
"sudo less /var/log/containers/stdouts/openstack-cinder-volume.log [...] 2021-04-12T12:32:17.607179705+00:00 stderr F ++ cat /run_command 2021-04-12T12:32:17.609648533+00:00 stderr F + CMD='/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf' 2021-04-12T12:32:17.609648533+00:00 stderr F + ARGS= 2021-04-12T12:32:17.609648533+00:00 stderr F + [[ ! -n '' ]] 2021-04-12T12:32:17.609648533+00:00 stderr F + . kolla_extend_start 2021-04-12T12:32:17.611214130+00:00 stderr F +++ stat -c %U:%G /var/lib/cinder 2021-04-12T12:32:17.616637578+00:00 stderr F ++ [[ cinder:kolla != \\c\\i\\n\\d\\e\\r\\:\\k\\o\\l\\l\\a ]] 2021-04-12T12:32:17.616722778+00:00 stderr F + echo 'Running command: '\\''/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf'\\''' 2021-04-12T12:32:17.616751172+00:00 stdout F Running command: '/usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf' 2021-04-12T12:32:17.616775368+00:00 stderr F + exec /usr/bin/cinder-volume --config-file /usr/share/cinder/cinder-dist.conf --config-file /etc/cinder/cinder.conf",
"sudo pcs resource cleanup openstack-cinder-volume Resource: openstack-cinder-volume successfully cleaned up",
"[tripleo-admin@controller-0 ~]USD sudo systemctl status tripleo_nova_conductor ● tripleo_nova_conductor.service - nova_conductor container Loaded: loaded (/etc/systemd/system/tripleo_nova_conductor.service; enabled; vendor preset: disabled) Active: active (running) since Mon 2021-04-12 10:54:46 UTC; 1h 38min ago Main PID: 5125 (conmon) Tasks: 2 (limit: 126564) Memory: 1.2M CGroup: /system.slice/tripleo_nova_conductor.service └─5125 /usr/bin/conmon --api-version 1 -c cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e020c76e7887d4 -u cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e020c76e7887d4 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/cc3c63b54e0864c94ac54a5789be96aea1dd60b2f3216b37c3e02> Apr 12 10:54:42 controller-0.redhat.local systemd[1]: Starting nova_conductor container Apr 12 10:54:46 controller-0.redhat.local podman[2855]: nova_conductor Apr 12 10:54:46 controller-0.redhat.local systemd[1]: Started nova_conductor container.",
"sudo less /var/log/containers/tripleo_nova_conductor.log",
"systemctl restart tripleo_nova_conductor systemctl status tripleo_nova_conductor ● tripleo_nova_conductor.service - nova_conductor container Loaded: loaded (/etc/systemd/system/tripleo_nova_conductor.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2021-04-22 14:28:35 UTC; 7s ago Process: 518937 ExecStopPost=/usr/bin/podman stop -t 10 nova_conductor (code=exited, status=0/SUCCESS) Process: 518653 ExecStop=/usr/bin/podman stop -t 10 nova_conductor (code=exited, status=0/SUCCESS) Process: 519063 ExecStart=/usr/bin/podman start nova_conductor (code=exited, status=0/SUCCESS) Main PID: 519198 (conmon) Tasks: 2 (limit: 126564) Memory: 1.1M CGroup: /system.slice/tripleo_nova_conductor.service └─519198 /usr/bin/conmon --api-version 1 -c 0d6583beb20508e6bacccd5fea169a2fe949471207cb7d4650fec5f3638c2ce6 -u 0d6583beb20508e6bacccd5fea169a2fe949471207cb7d4650fec5f3638c2ce6 -r /usr/bin/runc -b /var/lib/containe> Apr 22 14:28:34 controller-0.redhat.local systemd[1]: Starting nova_conductor container Apr 22 14:28:35 controller-0.redhat.local podman[519063]: nova_conductor Apr 22 14:28:35 controller-0.redhat.local systemd[1]: Started nova_conductor container."
] |
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/managing_high_availability_services/assembly_troubleshooting-ha-resources_ext-lb-example
|
Providing feedback on JBoss EAP documentation
|
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Please include the Document URL , the section number and describe the issue . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
| null |
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/using_jboss_eap_in_microsoft_azure/proc_providing-feedback-on-red-hat-documentation_default
|
E.2.4. /proc/crypto
|
E.2.4. /proc/crypto This file lists all installed cryptographic ciphers used by the Linux kernel, including additional details for each. A sample /proc/crypto file looks like the following:
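In addition to viewing the full listing, you can filter the file for a specific algorithm with grep. This is only a quick illustrative check; the sha256 name is an example and the number of context lines shown may need adjusting:
USD grep 'name' /proc/crypto                  # list only the algorithm names
USD grep -A 6 'name.*sha256' /proc/crypto     # show the entry for one algorithm with its detail lines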
|
[
"name : sha1 module : kernel type : digest blocksize : 64 digestsize : 20 name : md5 module : md5 type : digest blocksize : 64 digestsize : 16"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-crypto
|
Chapter 18. Configuring artifact types
|
Chapter 18. Configuring artifact types As a Red Hat Quay administrator, you can configure Open Container Initiative (OCI) artifact types and other experimental artifact types through the FEATURE_GENERAL_OCI_SUPPORT , ALLOWED_OCI_ARTIFACT_TYPES , and IGNORE_UNKNOWN_MEDIATYPES configuration fields. The following Open Container Initiative (OCI) artifact types are built into Red Hat Quay by default and are enabled through the FEATURE_GENERAL_OCI_SUPPORT configuration field: Field Media Type Supported content types Helm application/vnd.cncf.helm.config.v1+json application/tar+gzip , application/vnd.cncf.helm.chart.content.v1.tar+gzip Cosign application/vnd.oci.image.config.v1+json application/vnd.dev.cosign.simplesigning.v1+json , application/vnd.dsse.envelope.v1+json SPDX application/vnd.oci.image.config.v1+json text/spdx , text/spdx+xml , text/spdx+json Syft application/vnd.oci.image.config.v1+json application/vnd.syft+json CycloneDX application/vnd.oci.image.config.v1+json application/vnd.cyclonedx , application/vnd.cyclonedx+xml , application/vnd.cyclonedx+json In-toto application/vnd.oci.image.config.v1+json application/vnd.in-toto+json Unknown application/vnd.cncf.openpolicyagent.policy.layer.v1+rego application/vnd.cncf.openpolicyagent.policy.layer.v1+rego , application/vnd.cncf.openpolicyagent.data.layer.v1+json Additionally, Red Hat Quay uses the ZStandard ( zstd ) compression algorithm to reduce the size of container images or other related artifacts. Zstd helps optimize storage and improve transfer speeds when working with container images. Use the following procedures to configure support for the default and experimental OCI media types. 18.1. Configuring OCI artifact types Use the following procedure to configure artifact types that are embedded in Red Hat Quay by default. Prerequisites You have Red Hat Quay administrator privileges. Procedure In your Red Hat Quay config.yaml file, enable general OCI support by setting the FEATURE_GENERAL_OCI_SUPPORT field to true . For example: FEATURE_GENERAL_OCI_SUPPORT: true With FEATURE_GENERAL_OCI_SUPPORT set to true, Red Hat Quay users can now push and pull artifacts of the default artifact types to their Red Hat Quay deployment. 18.2. Configuring additional artifact types Use the following procedure to configure additional, and specific, artifact types for your Red Hat Quay deployment. Note Using the ALLOWED_OCI_ARTIFACT_TYPES configuration field, you can restrict which artifact types are accepted by your Red Hat Quay registry. If you want your Red Hat Quay deployment to accept all artifact types, see "Configuring unknown media types". Prerequisites You have Red Hat Quay administrator privileges. Procedure Add the ALLOWED_OCI_ARTIFACT_TYPES configuration field, along with the configuration and layer types: FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4> For example, you can add Singularity Image Format (SIF) support by adding the following to your config.yaml file: ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar Note When adding OCI artifact types that are not configured by default, Red Hat Quay administrators will also need to manually add support for Cosign and Helm if desired. 
Now, users can tag SIF images for their Red Hat Quay registry. 18.3. Configuring unknown media types Use the following procedure to enable all artifact types for your Red Hat Quay deployment. Note With this field enabled, your Red Hat Quay deployment accepts all artifact types. Prerequisites You have Red Hat Quay administrator privileges. Procedure Add the IGNORE_UNKNOWN_MEDIATYPES configuration field to your Red Hat Quay config.yaml file: IGNORE_UNKNOWN_MEDIATYPES: true With this field enabled, your Red Hat Quay deployment accepts unknown and unrecognized artifact types.
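For example, with Helm support enabled, you can push a chart to the registry as an OCI artifact by using the Helm CLI (version 3.8 or later). This is only a sketch; the quay.example.com hostname, the myorg namespace, and the mychart-0.1.0.tgz archive name are placeholders:
USD helm registry login quay.example.com
USD helm push mychart-0.1.0.tgz oci://quay.example.com/myorg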
|
[
"FEATURE_GENERAL_OCI_SUPPORT: true",
"FEATURE_GENERAL_OCI_SUPPORT: true ALLOWED_OCI_ARTIFACT_TYPES: <oci config type 1>: - <oci layer type 1> - <oci layer type 2> <oci config type 2>: - <oci layer type 3> - <oci layer type 4>",
"ALLOWED_OCI_ARTIFACT_TYPES: application/vnd.oci.image.config.v1+json: - application/vnd.dev.cosign.simplesigning.v1+json application/vnd.cncf.helm.config.v1+json: - application/tar+gzip application/vnd.sylabs.sif.config.v1+json: - application/vnd.sylabs.sif.layer.v1+tar",
"IGNORE_UNKNOWN_MEDIATYPES: true"
] |
https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/manage_red_hat_quay/supported-oci-media-types
|
8.132. openhpi
|
8.132. openhpi 8.132.1. RHBA-2013:1532 - openhpi bug fix update Updated openhpi packages that fix several bugs are now available for Red Hat Enterprise Linux 6. OpenHPI provides an open source implementation of the Service Availability Forum (SAF) Hardware Platform Interface (HPI). HPI is an abstracted interface for managing computer hardware, typically chassis- and rack-based servers. HPI includes resource modeling; access to and control over sensor, control, watchdog, and inventory data associated with resources; abstracted System Event Log interfaces; hardware events and alarms; and a managed hot swap interface. Bug Fixes BZ# 891626 Due to a bug in the power_supply() parsing routines, some returned strings could contain incorrectly displayed characters. Consequently, retrieving a serial or part number of a power supply unit (PSU) via the OpenHPI API resulted in strings containing these characters. This update ensures that proper serial and part numbers are returned for PSUs and the returned strings now only contain valid characters. BZ# 924852 Previously, code supporting certain RDR (Request Data with Reply) sensors was missing in OpenHPI. Consequently, after the extraction and reinsertion of an enclosure monitored via the Onboard Administrator (OA) SOAP plug-in, the following error messages were returned to the log file: openhpid: ERROR: (oa_soap_sensor.c, 2005, RDR not present) openhpid: ERROR: (oa_soap_fan_event.c, 279, processing the sensor event for sensor 24 has failed) This bug has been fixed and no error messages are now logged after a component is extracted and reinserted. BZ# 948386 Under certain conditions, when using OpenHPI with the Onboard Administrator (OA) SOAP plug-in when an OA switch-over took place, HPI clients became unresponsive or the openhpi daemon failed to connect to the new active OA. Consequently, clients were unable to retrieve events and data. A series of patches has been provided to better account for OA failover situations, thus fixing this bug. BZ# 953515 Prior to this update, support for certain blade servers was missing in OpenHPI. Consequently, the OpenHPI daemon terminated unexpectedly with a segmentation fault at startup on these servers. A patch has been provided to add the missing support and the OpenHPI daemon no longer crashes in the described scenario. BZ# 953525 Due to missing support for certain thermal sensors, the getBladeInfo() function could terminate unexpectedly, causing the whole discovery process to fail. This update adds the support for these sensors and OpenHPI discovery now works as expected. Users of openhpi are advised to upgrade to these updated packages, which fix these bugs.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/openhpi
|
Securing Red Hat Quay
|
Securing Red Hat Quay Red Hat Quay 3.13 Securing Red Hat Quay Red Hat OpenShift Documentation Team
| null |
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/securing_red_hat_quay/index
|
4.81. gpxe
|
4.81. gpxe 4.81.1. RHBA-2011:1765 - gpxe bug fix update Updated gpxe packages that fix one bug are now available for Red Hat Enterprise Linux 6. The gpxe packages provide an open source Preboot Execution Environment (PXE) implementation and bootloader. gPXE also supports additional protocols such as DNS, HTTP, iSCSI and ATA over Ethernet. Bug Fix BZ# 743893 Prior to this update, PXE failed to boot a virtual machine which used the virtio network interface card (NIC). An upstream patch, which incorporates the latest upstream gPXE paravirtualized network adapter (virtio-net) driver and removes the legacy Etherboot virtio-net driver, has been applied to fix this problem. Now, PXE can successfully boot virtual machines that use virtio NIC. All users of gpxe are advised to upgrade to these updated packages, which fix this bug.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/gpxe
|
5.69. firstboot
|
5.69. firstboot 5.69.1. RHEA-2012:0928 - firstboot enhancement update Updated firstboot packages that add two enhancements are now available for Red Hat Enterprise Linux 6. The firstboot utility runs after installation and guides the user through a series of steps that allows for easier configuration of the machine. Enhancements BZ# 704187 Prior to this update, the firstboot utility did not allow users to change the timezone. This update adds the timezone module to firstboot so that users can now change the timezone in the reconfiguration mode. BZ# 753658 Prior to this update, the firstboot service did not provide a status option. This update adds the "firstboot service status" option to show if firstboot is scheduled to run on the boot or not. All users of firstboot are advised to upgrade to these updated packages, which add these enhancements.
| null |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/firstboot
|
Chapter 44. Defining spreadsheet decision tables
|
Chapter 44. Defining spreadsheet decision tables Spreadsheet decision tables (XLS or XLSX) require two key areas that define rule data: a RuleSet area and a RuleTable area. The RuleSet area of the spreadsheet defines elements that you want to apply globally to all rules in the same package (not only the spreadsheet), such as a rule set name or universal rule attributes. The RuleTable area defines the actual rules (rows) and the conditions, actions, and other rule attributes (columns) that constitute that rule table within the specified rule set. A spreadsheet of decision tables can contain multiple RuleTable areas, but only one RuleSet area. Important You should typically upload only one spreadsheet of decision tables, containing all necessary RuleTable definitions, per rule package in Business Central. You can upload separate decision table spreadsheets for separate packages, but uploading multiple spreadsheets in the same package can cause compilation errors from conflicting RuleSet or RuleTable attributes and is therefore not recommended. Refer to the following sample spreadsheet as you define your decision table: Figure 44.1. Sample spreadsheet decision table for shipping charges Procedure In a new XLS or XLSX spreadsheet, go to the second or third column and label a cell RuleSet (row 1 in example). Reserve the column or columns to the left for descriptive metadata (optional). In the cell to the right, enter a name for the RuleSet . This named rule set will contain all RuleTable rules defined in the rule package. Under the RuleSet cell, define any rule attributes (one per cell) that you want to apply globally to all rule tables in the package. Specify attribute values in the cells to the right. For example, you can enter an Import label and in the cell to the right, specify relevant data objects from other packages that you want to import into the package for the decision table (in the format package.name.object.name ). For supported cell labels and values, see Section 44.1, "RuleSet definitions" . Below the RuleSet area and in the same column as the RuleSet cell, skip a row and label a new cell RuleTable (row 7 in example) and enter a table name in the same cell. The name is used as the initial part of the name for all rules derived from this rule table, with the row number appended for distinction. You can override this automatic naming by inserting a NAME attribute column. Use the four rows to define the following elements as needed (rows 8-11 in example): Rule attributes: Conditions, actions, or other attributes. For supported cell labels and values, see Section 44.2, "RuleTable definitions" . Object types: The data objects to which the rule attributes apply. If the same object type applies to multiple columns, merge the object cells into one cell across multiple columns (as shown in the sample decision table), instead of repeating the object type in multiple cells. When an object type is merged, all columns below the merged range will be combined into one set of constraints within a single pattern for matching a single fact at a time. When an object is repeated in separate columns, the separate columns can create different patterns, potentially matching different or identical facts. Constraints: Constraints on the object types. Column label: (Optional) Any descriptive label for the column, as a visual aid. Leave blank if unused. 
Note As an alternative to populating both the object type and constraint cells, you can leave the object type cell or cells empty and enter the full expression in the corresponding constraint cell or cells. For example, instead of Order as the object type and itemsCount > USD1 as a constraint (separate cells), you can leave the object type cell empty and enter Order( itemsCount > USD1 ) in the constraint cell, and then do the same for other constraint cells. After you have defined all necessary rule attributes (columns), enter values for each column as needed, row by row, to generate rules (rows 12-17 in example). Cells with no data are ignored (such as when a condition or action does not apply). If you need to add more rule tables to this decision table spreadsheet, skip a row after the last rule in the table, label another RuleTable cell in the same column as the RuleTable and RuleSet cells, and create the new table following the same steps in this section (rows 19-29 in example). Save your XLS or XLSX spreadsheet to finish. Note By default, only the first worksheet in a spreadsheet workbook is processed as a decision table when you upload the spreadsheet in Business Central. Each RuleSet name combined with the RuleTable name must be unique across all decision table files in the same package. If you want to process multiple worksheet decision tables, then create a .properties file with the same name as the spreadsheet workbook. The .properties file must contain a property sheet with comma-separated values (CSV) for the names of the worksheets, for example: After you upload the decision table in Business Central, the rules are rendered as DRL rules like the following example, from the sample spreadsheet: Enabling white space used in cell values By default, any white space before or after values in decision table cells is removed before the decision table is processed by the decision engine. To retain white space that you use intentionally before or after values in cells, set the drools.trimCellsInDTable system property to false in your Red Hat Process Automation Manager distribution. For example, if you use Red Hat Process Automation Manager with Red Hat JBoss EAP, add the following system property to your USDEAP_HOME/standalone/configuration/standalone-full.xml file: If you use the decision engine embedded in your Java application, add the system property with the following command: 44.1. RuleSet definitions Entries in the RuleSet area of a decision table define DRL constructs and rule attributes that you want to apply to all rules in a package (not only in the spreadsheet). Entries must be in a vertically stacked sequence of cell pairs, where the first cell contains a label and the cell to the right contains the value. A decision table spreadsheet can have only one RuleSet area. The following table lists the supported labels and values for RuleSet definitions: Table 44.1. Supported RuleSet definitions Label Value Usage RuleSet The package name for the generated DRL file. Optional, the default is rule_table . Must be the first entry. Sequential true or false . If true , then salience is used to ensure that rules fire from the top down. Optional, at most once. If omitted, no firing order is imposed. SequentialMaxPriority Integer numeric value Optional, at most once. In sequential mode, this option is used to set the start value of the salience. If omitted, the default value is 65535. SequentialMinPriority Integer numeric value Optional, at most once. 
In sequential mode, this option is used to check if this minimum salience value is not violated. If omitted, the default value is 0. EscapeQuotes true or false . If true , then quotation marks are escaped so that they appear literally in the DRL. Optional, at most once. If omitted, quotation marks are escaped. IgnoreNumericFormat true or false . If true , then the format for numeric values is ignored, for example, percent and currency. Optional, at most once. If omitted, DRL takes formatted values. Import A comma-separated list of Java classes to import from another package. Optional, may be used repeatedly. Variables Declarations of DRL globals (a type followed by a variable name). Multiple global definitions must be separated by commas. Optional, may be used repeatedly. Functions One or more function definitions, according to DRL syntax. Optional, may be used repeatedly. Queries One or more query definitions, according to DRL syntax. Optional, may be used repeatedly. Declare One or more declarative types, according to DRL syntax. Optional, may be used repeatedly. Unit The rule units that the rules generated from this decision table belong to. Optional, at most once. If omitted, the rules do not belong to any unit. Dialect java or mvel . The dialect used in the actions of the decision table. Optional, at most once. If omitted, java is imposed. Warning In some cases, Microsoft Office, LibreOffice, and OpenOffice might encode a double quotation mark differently, causing a compilation error. For example, "A" will fail, but "A" will pass. 44.2. RuleTable definitions Entries in the RuleTable area of a decision table define conditions, actions, and other rule attributes for the rules in that rule table. A spreadsheet of decision tables can contain multiple RuleTable areas. The following table lists the supported labels (column headers) and values for RuleTable definitions. For column headers, you can use either the given labels or any custom labels that begin with the letters listed in the table. Table 44.2. Supported RuleTable definitions Label Or custom label that begins with Value Usage NAME N Provides the name for the rule generated from that row. The default is constructed from the text following the RuleTable tag and the row number. At most one column. DESCRIPTION I Results in a comment within the generated rule. At most one column. CONDITION C Code snippet and interpolated values for constructing a constraint within a pattern in a condition. At least one per rule table. ACTION A Code snippet and interpolated values for constructing an action for the consequence of the rule. At least one per rule table. METADATA @ Code snippet and interpolated values for constructing a metadata entry for the rule. Optional, any number of columns. The following sections provide more details about how condition, action, and metadata columns use cell data: Conditions For columns headed CONDITION , the cells in consecutive lines result in a conditional element: First cell: Text in the first cell below CONDITION develops into a pattern for the rule condition, and uses the snippet in the line as a constraint. If the cell is merged with one or more neighboring cells, a single pattern with multiple constraints is formed. All constraints are combined into a parenthesized list and appended to the text in this cell. If this cell is empty, the code snippet in the cell below it must result in a valid conditional element on its own. 
For example, instead of Order as the object type and itemsCount > USD1 as a constraint (separate cells), you can leave the object type cell empty and enter Order( itemsCount > USD1 ) in the constraint cell, and then do the same for any other constraint cells. To include a pattern without constraints, you can write the pattern in front of the text of another pattern, with or without an empty pair of parentheses. You can also append a from clause to the pattern. If the pattern ends with eval , code snippets produce boolean expressions for inclusion into a pair of parentheses after eval . You can terminate the pattern with @watch annotation, which is used to customize the properties that the pattern is reactive on. Second cell: Text in the second cell below CONDITION is processed as a constraint on the object reference in the first cell. The code snippet in this cell is modified by interpolating values from cells farther down in the column. If you want to create a constraint consisting of a comparison using == with the value from the cells below, then the field selector alone is sufficient. If you use the field selector alone, but you want to use the condition as it is without appending any == comparison, you must terminate the condition with the symbol ? . Any other comparison operator must be specified as the last item within the snippet, and the value from the cells below is appended. For all other constraint forms, you must mark the position for including the contents of a cell with the symbol USDparam . Multiple insertions are possible if you use the symbols USD1 , USD2 , and so on, and a comma-separated list of values in the cells below. However, do not separate USD1 , USD2 , and so on, by commas, or the table will fail to process. To expand a text according to the pattern forall(USDdelimiter){USDsnippet} , repeat the USDsnippet once for each of the values of the comma-separated list in each of the cells below, insert the value in place of the symbol USD , and join these expansions by the given USDdelimiter . Note that the forall construct may be surrounded by other text. If the first cell contains an object, the completed code snippet is added to the conditional element from that cell. A pair of parentheses is provided automatically, as well as a separating comma if multiple constraints are added to a pattern in a merged cell. If the first cell is empty, the code snippet in this cell must result in a valid conditional element on its own. For example, instead of Order as the object type and itemsCount > USD1 as a constraint (separate cells), you can leave the object type cell empty and enter Order( itemsCount > USD1 ) in the constraint cell, and then do the same for any other constraint cells. Third cell: Text in the third cell below CONDITION is a descriptive label that you define for the column, as a visual aid. Fourth cell: From the fourth row on, non-blank entries provide data for interpolation. A blank cell omits the condition or constraint for this rule. Actions For columns headed ACTION , the cells in consecutive lines result in an action statement: First cell: Text in the first cell below ACTION is optional. If present, the text is interpreted as an object reference. Second cell: Text in the second cell below ACTION is a code snippet that is modified by interpolating values from cells farther down in the column. For a singular insertion, mark the position for including the contents of a cell with the symbol USDparam . 
Multiple insertions are possible if you use the symbols USD1 , USD2 , and so on, and a comma-separated list of values in the cells below. However, do not separate USD1 , USD2 , and so on, by commas, or the table will fail to process. A text without any marker symbols can execute a method call without interpolation. In this case, use any non-blank entry in a row below the cell to include the statement. The forall construct is supported. If the first cell contains an object, then the cell text (followed by a period), the text in the second cell, and a terminating semicolon are strung together, resulting in a method call that is added as an action statement for the consequence. If the first cell is empty, the code snippet in this cell must result in a valid action element on its own. Third cell: Text in the third cell below ACTION is a descriptive label that you define for the column, as a visual aid. Fourth cell: From the fourth row on, non-blank entries provide data for interpolation. A blank cell omits the condition or constraint for this rule. Metadata For columns headed METADATA , the cells in consecutive lines result in a metadata annotation for the generated rules: First cell: Text in the first cell below METADATA is ignored. Second cell: Text in the second cell below METADATA is subject to interpolation, using values from the cells in the rule rows. The metadata marker character @ is prefixed automatically, so you do not need to include that character in the text for this cell. Third cell: Text in the third cell below METADATA is a descriptive label that you define for the column, as a visual aid. Fourth cell: From the fourth row on, non-blank entries provide data for interpolation. A blank cell results in the omission of the metadata annotation for this rule. 44.3. Additional rule attributes for RuleSet or RuleTable definitions The RuleSet and RuleTable areas also support labels and values for other rule attributes, such as PRIORITY or NO-LOOP . Rule attributes specified in a RuleSet area will affect all rule assets in the same package (not only in the spreadsheet). Rule attributes specified in a RuleTable area will affect only the rules in that rule table. You can use each rule attribute only once in a RuleSet area and once in a RuleTable area. If the same attribute is used in both RuleSet and RuleTable areas within the spreadsheet, then RuleTable takes priority and the attribute in the RuleSet area is overridden. The following table lists the supported labels (column headers) and values for additional RuleSet or RuleTable definitions. For column headers, you can use either the given labels or any custom labels that begin with the letters listed in the table. Table 44.3. Additional rule attributes for RuleSet or RuleTable definitions Label Or custom label that begins with Value PRIORITY P An integer defining the salience value of the rule. Rules with a higher salience value are given higher priority when ordered in the activation queue. Overridden by the Sequential flag. Example: PRIORITY 10 DATE-EFFECTIVE V A string containing a date and time definition. The rule can be activated only if the current date and time is after a DATE-EFFECTIVE attribute. Example: DATE-EFFECTIVE "4-Sep-2018" DATE-EXPIRES Z A string containing a date and time definition. The rule cannot be activated if the current date and time is after the DATE-EXPIRES attribute. Example: DATE-EXPIRES "4-Oct-2018" NO-LOOP U A Boolean value. 
When this option is set to true , the rule cannot be reactivated (looped) if a consequence of the rule re-triggers a previously met condition. Example: NO-LOOP true AGENDA-GROUP G A string identifying an agenda group to which you want to assign the rule. Agenda groups allow you to partition the agenda to provide more execution control over groups of rules. Only rules in an agenda group that has acquired a focus are able to be activated. Example: AGENDA-GROUP "GroupName" ACTIVATION-GROUP X A string identifying an activation (or XOR) group to which you want to assign the rule. In activation groups, only one rule can be activated. The first rule to fire will cancel all pending activations of all rules in the activation group. Example: ACTIVATION-GROUP "GroupName" DURATION D A long integer value defining the duration of time in milliseconds after which the rule can be activated, if the rule conditions are still met. Example: DURATION 10000 TIMER T A string identifying either int (interval) or cron timer definitions for scheduling the rule. Example: TIMER "*/5 * * * *" (every 5 minutes) CALENDAR E A Quartz calendar definition for scheduling the rule. Example: CALENDAR "* * 0-7,18-23 ? * *" (exclude non-business hours) AUTO-FOCUS F A Boolean value, applicable only to rules within agenda groups. When this option is set to true , the time the rule is activated, a focus is automatically given to the agenda group to which the rule is assigned. Example: AUTO-FOCUS true LOCK-ON-ACTIVE L A Boolean value, applicable only to rules within rule flow groups or agenda groups. When this option is set to true , the time the ruleflow group for the rule becomes active or the agenda group for the rule receives a focus, the rule cannot be activated again until the ruleflow group is no longer active or the agenda group loses the focus. This is a stronger version of the no-loop attribute, because the activation of a matching rule is discarded regardless of the origin of the update (not only by the rule itself). This attribute is ideal for calculation rules where you have a number of rules that modify a fact and you do not want any rule re-matching and firing again. Example: LOCK-ON-ACTIVE true RULEFLOW-GROUP R A string identifying a rule flow group. In rule flow groups, rules can fire only when the group is activated by the associated rule flow. Example: RULEFLOW-GROUP "GroupName" Figure 44.2. Sample decision table spreadsheet with attribute columns
|
[
"sheets=Sheet1,Sheet2",
"//row 12 rule \"Basic_12\" salience 10 when USDorder : Order( itemsCount > 0, itemsCount <= 3, deliverInDays == 1 ) then insert( new Charge( 35 ) ); end",
"<property name=\"drools.trimCellsInDTable\" value=\"false\"/>",
"java -jar yourApplication.jar -Ddrools.trimCellsInDTable=false"
] |
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/decision-tables-defining-proc
|
4.2. Local Transactions
|
4.2. Local Transactions A connection uses the autoCommit flag to explicitly control local transactions. By default, autoCommit is set to true , which indicates request level or implicit transaction control: This example demonstrates several things: Setting autoCommit flag to false. This will start a transaction bound to the connection. Executing multiple updates within the context of the transaction. When the statements are complete, the transaction is committed by calling commit() . If an error occurs, the transaction is rolled back using the rollback() method.
|
[
"// Set auto commit to false and start a transaction connection.setAutoCommit(false); try { // Execute multiple updates Statement statement = connection.createStatement(); statement.executeUpdate(\"INSERT INTO Accounts (ID, Name) VALUES (10, 'Mike'\\u0099)\"); statement.executeUpdate(\"INSERT INTO Accounts (ID, Name) VALUES (15, 'John'\\u0099)\"); statement.close(); // Commit the transaction connection.commit(); } catch(SQLException e) { // If an error occurs, rollback the transaction connection.rollback(); }"
] |
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_1_client_development/local_transactions1
|
15.10. Troubleshooting
|
15.10. Troubleshooting 15.10.1. Active VFS Mounts Are Invisible If your active VFS mounts are invisible, it means that your application is not a native GIO client. Native GIO clients are typically all GNOME applications using GNOME libraries (glib, gio). There is a service, gvfs-fuse , provided as a fallback for non-GIO clients. To find the cause of an active but invisible VFS mount, check whether the gvfs-fuse process is running. Since gvfs-fuse runs automatically and it is not recommended to start it by yourself, try logging out and logging in as a first option. Alternatively, you can start the VFS compatibility mount manually in the terminal: Find the UID (system user ID) for the /run/user/ UID /gvfs/ path by running the id command (the gvfsd-fuse daemon requires a path it is supposed to expose its services at). Or, when the /run/user/ UID /gvfs/ path is unavailable, gvfsd-fuse uses a .gvfs path in your home directory. Start the gvfsd-fuse daemon by running the /usr/libexec/gvfsd-fuse -f /run/user/ UID /gvfs command. Now, the VFS mount is available and you can manually browse for the path in your application. 15.10.2. Connected USB Disk Is Invisible Under certain circumstances, when you connect a flash drive, the GNOME Desktop may not display it. If the drive is invisible, it means that: You cannot see the device in the Disks application. You have run the udisksctl dump command, which lists the current state of the udisks daemon and shows information about all objects, but your flash drive is not among them. You have run the dmesg command. Towards the end of the log, there are messages related to USB device detection and a list of detected partitions, but your flash drive is not among them. If your flash drive is not visible, you can attempt to set the Show in user interface flag in Disks : Open Disks by pressing the Super key to enter the Activities Overview , typing Disks , and then pressing Enter . In the Volumes actions menu, click Edit Mount Options... . Click Show in user interface . Confirm by clicking OK . If the flash drive is still not visible, you may try to remove the drive and try connecting it again. For more information about the storage, see the Storage Administration Guide . 15.10.3. Nautilus Shows Unknown or Unwanted Partitions Check whether the device is listed in the /etc/fstab file as the devices are not shown in the user interface by default. The /etc/fstab file typically lists disk partitions that are intended to be used in the operating system, and indicates how they are mounted. Certain mount options may allow or prevent displaying the volume in the user interface. One of the solutions to hide a volume is to uncheck Show in user interface in the Mount Options window in the Disks application: Open Disks by pressing the Super key to enter the Activities Overview , typing Disks , and then pressing Enter . In the Volumes actions menu, click Edit Mount Options... . Uncheck Show in user interface and confirm by clicking OK . 15.10.4. Connection to Remote File System Is Unavailable There is a number of situations in which the client is unexpectedly and unwillingly disconnected from a virtual file system (or a remote disk) mount, afterwards is not reconnected automatically, and error messages are returned. Several causes trigger these situations: The connection is interrupted (for example, your laptop is disconnected from the Wi-Fi). The user is inactive for some time and is disconnected by the server (idle timeout). The computer is resumed from sleeping mode. 
The solution is to unmount the file system and then mount it again, which reconnects the resource. Note If the connection is dropped frequently, check the settings in the Network panel in the GNOME Settings . 15.10.5. What to Do If the Disk Is Busy? If you receive a notification about your disk being busy, determine the programs that are accessing the disk. Then, you can end the programs you are running in the normal way. Or, you can use the System Monitor to kill the programs forcefully. Where and How to View System Processes? Run the lsof command to get the list of open files along with the processes that opened them. If lsof is not available, run the ps ax command that also provides the list of running processes. Alternatively, you can use the System Monitor application to display the running processes in a GUI. Make sure that you have iotop installed by running the following command: Then run iotop as root to view the system processes. When you have determined the programs, end or kill them as follows: On the command line, execute the kill command. In the System Monitor , right-click the line with the program process name, and click the End Process or Kill Process drop-down menu item.
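For example, to see which processes keep a removable disk busy and then end one of them, you can run lsof against the mount point and kill the offending process. This is only a sketch; the mount point path and the process ID are placeholders:
USD lsof +D /run/media/<user>/<disk_label>    # list processes with files open under the mount point
USD kill <PID>                                # end the process normally
USD kill -9 <PID>                             # force-kill the process only if it does not terminate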
|
[
"yum install iotop"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/desktop_migration_and_administration_guide/virtual-file-systems-disk-management-troubleshooting
|
Chapter 4. Checking policy compliance
|
Chapter 4. Checking policy compliance You can use the roxctl CLI to check deployment YAML files and images for policy compliance. 4.1. Prerequisites You have configured the ROX_ENDPOINT environment variable using the following command: USD export ROX_ENDPOINT= <host:port> 1 1 The host and port information that you want to store in the ROX_ENDPOINT environment variable. 4.2. Configuring output format When you check policy compliance by using the roxctl deployment check or roxctl image check commands, you can specify the output format by using the -o option to the command and specifying the format as json , table , csv , or junit . This option determines how the output of a command is displayed in the terminal. For example, the following command checks a deployment and then displays the result in csv format: USD roxctl deployment check --file=<yaml_filename> -o csv Note When you do not specify the -o option for the output format, the following default behavior is used: The format for the deployment check and the image check commands is table . The default output format for the image scan command is json . This is the old JSON format output for compatibility with older versions of the CLI. To get the output in the new JSON format, specify the format option as -o json . Use the old JSON format output when gathering data for troubleshooting purposes. Different options are available to configure the output. The following table lists the options and the format in which they are available. Option Description Formats --compact-output Use this option to display the JSON output in a compact format. json --headers Use this option to specify custom headers. table and csv --no-header Use this option to omit the header row from the output. table and csv --row-jsonpath-expressions Use this option to specify GJSON paths to select specific items from the output. For example, to get the Policy name and Severity for a deployment check, use the following command: USD roxctl deployment check --file=<yaml_filename> \ -o table --headers POLICY-NAME,SEVERITY \ --row-jsonpath-expressions="{results.#.violatedPolicies.#.name,results.#.violatedPolicies.#.severity}" table and csv --merge-output Use this option to merge table cells that have the same value. table headers-as-comment Use this option to include the header row as a comment in the output. csv --junit-suite-name Use this option to specify the name of the JUnit test suite. junit 4.3. Checking deployment YAML files Procedure Run the following command to check the build-time and deploy-time violations of your security policies in YAML deployment files: USD roxctl deployment check --file=<yaml_filename> \ 1 --namespace=<cluster_namespace> \ 2 --cluster=<cluster_name_or_id> \ 3 --verbose 4 1 For the <yaml_filename> , specify the YAML file with one or more deployments to send to Central for policy evaluation. You can also specify multiple YAML files to send to Central for policy evaluation by using the --file flag, for example --file=<yaml_filename1> , --file=<yaml_filename2> , and so on. 2 For the <cluster_namespace> , specify a namespace to enhance deployments with context information such as network policies, role-based access controls (RBACs) and services for deployments that do not have a namespace in their specification. The namespace defined in the specification is not changed. The default value is default .
3 For the <cluster_name_or_id> , specify the cluster name or ID that you want to use as the context for the evaluation to enable extended deployments with cluster-specific information. 4 By enabling the --verbose flag, you receive additional information for each deployment during the policy check. The extended information includes the RBAC permission level and a comprehensive list of network policies that is applied. Note You can see the additional information for each deployment in your JSON output, regardless of whether you enable the --verbose flag or not. The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. This command validates the following items: Configuration options in a YAML file, such as resource limits or privilege options Aspects of the images used in a YAML file, such as components or vulnerabilities 4.4. Checking images Procedure Run the following command to check the build-time violations of your security policies in images: USD roxctl image check --image= <image_name> The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. Additional resources roxctl image 4.5. Checking image scan results You can also check the scan results for specific images. Procedure Run the following command to return the components and vulnerabilities found in the image in JSON format: USD roxctl image scan --image <image_name> The format is defined in the API reference. To cause Red Hat Advanced Cluster Security for Kubernetes (RHACS) to re-pull image metadata and image scan results from the associated registry and scanner, add the --force option. Note To check specific image scan results, you must have a token with both read and write permissions for the Image resource. The default Continuous Integration system role already has the required permissions. Additional resources roxctl image
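As a combined example of the options described in this chapter, the following command checks an image, forces RHACS to re-pull image metadata and scan results, and prints the result as compact JSON; the image name is only a placeholder:
USD roxctl image check --image=quay.io/example/app:latest --force -o json --compact-output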
|
[
"export ROX_ENDPOINT= <host:port> 1",
"roxctl deployment check --file = <yaml_filename> -o csv",
"roxctl deployment check --file= <yaml_filename> -o table --headers POLICY-NAME,SEVERITY --row-jsonpath-expressions=\"{results. .violatedPolicies. .name,results. .violatedPolicies. .severity}\"",
"roxctl deployment check --file=<yaml_filename> \\ 1 --namespace=<cluster_namespace> \\ 2 --cluster=<cluster_name_or_id> \\ 3 --verbose 4",
"roxctl image check --image= <image_name>",
"roxctl image scan --image <image_name>"
] |
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/roxctl_cli/checking-policy-compliance-1
|
Installing Satellite Server in a disconnected network environment
|
Installing Satellite Server in a disconnected network environment Red Hat Satellite 6.15 Install and configure Satellite Server in a network without Internet access Red Hat Satellite Documentation Team [email protected]
|
[
"nfs.example.com:/nfsshare /var/lib/pulp nfs context=\"system_u:object_r:var_lib_t:s0\" 1 2",
"restorecon -R /var/lib/pulp",
"firewall-cmd --add-port=\"8000/tcp\" --add-port=\"9090/tcp\"",
"firewall-cmd --add-service=dns --add-service=dhcp --add-service=tftp --add-service=http --add-service=https --add-service=puppetmaster",
"firewall-cmd --runtime-to-permanent",
"firewall-cmd --list-all",
"ping -c1 localhost ping -c1 `hostname -f` # my_system.domain.com",
"ping -c1 localhost PING localhost (127.0.0.1) 56(84) bytes of data. 64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.043 ms --- localhost ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms ping -c1 `hostname -f` PING hostname.gateway (XX.XX.XX.XX) 56(84) bytes of data. 64 bytes from hostname.gateway (XX.XX.XX.XX): icmp_seq=1 ttl=64 time=0.019 ms --- localhost.gateway ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms",
"hostnamectl set-hostname name",
"cp /etc/foreman-installer/custom-hiera.yaml /etc/foreman-installer/custom-hiera.original",
"satellite-installer --tuning medium",
"scp localfile username@hostname:remotefile",
"mkdir /media/rhel8",
"mount -o loop rhel8-DVD .iso /media/rhel8",
"cp /media/rhel8/media.repo /etc/yum.repos.d/rhel8.repo chmod u+w /etc/yum.repos.d/rhel8.repo",
"[RHEL8-BaseOS] name=Red Hat Enterprise Linux BaseOS mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/BaseOS/ [RHEL8-AppStream] name=Red Hat Enterprise Linux Appstream mediaid=None metadata_expire=-1 gpgcheck=0 cost=500 baseurl=file:///media/rhel8/AppStream/",
"yum repolist",
"mkdir /media/sat6",
"mount -o loop sat6-DVD .iso /media/sat6",
"dnf install fapolicyd",
"satellite-maintain packages install fapolicyd",
"systemctl enable --now fapolicyd",
"systemctl status fapolicyd",
"findmnt -t iso9660",
"rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release",
"dnf upgrade",
"cd /media/sat6/",
"./install_packages",
"cd /path-to-package/",
"dnf install package_name",
"cd /media/sat6/",
"./install_packages",
"satellite-installer --scenario satellite --foreman-initial-organization \" My_Organization \" --foreman-initial-location \" My_Location \" --foreman-initial-admin-username admin_user_name --foreman-initial-admin-password admin_password",
"umount /media/sat6 umount /media/rhel8",
"hammer settings set --name subscription_connection_enabled --value false",
"scp ~/ manifest_file .zip root@ satellite.example.com :~/.",
"hammer subscription upload --file ~/ manifest_file .zip --organization \" My_Organization \"",
"hammer organization configure-cdn --name=\" My_Organization \" --type=custom_cdn --url https:// my-cdn.example.com --ssl-ca-credential-id \" My_CDN_CA_Cert_ID \"",
"hammer organization configure-cdn --name=\" My_Organization \" --type=export_sync",
"hammer content-credential show --name=\" My_Upstream_CA_Cert \" --organization=\" My_Downstream_Organization \"",
"hammer organization configure-cdn --name=\" My_Downstream_Organization \" --type=network_sync --url https:// upstream-satellite.example.com --username upstream_username --password upstream_password --ssl-ca-credential-id \" My_Upstream_CA_Cert_ID\" \\ --upstream-organization-label=\"_My_Upstream_Organization \" [--upstream-lifecycle-environment-label=\" My_Lifecycle_Environment \"] [--upstream-content-view-label=\" My_Content_View \"]",
"satellite-installer --foreman-proxy-plugin-remote-execution-script-mode=pull-mqtt",
"firewall-cmd --add-service=mqtt",
"firewall-cmd --runtime-to-permanent",
"satellite-installer --foreman-proxy-bmc \"true\" --foreman-proxy-bmc-default-provider \"freeipmi\"",
"satellite-installer --foreman-proxy-dns true --foreman-proxy-dns-managed true --foreman-proxy-dns-zone example.com --foreman-proxy-dns-reverse 2.0.192.in-addr.arpa --foreman-proxy-dhcp true --foreman-proxy-dhcp-managed true --foreman-proxy-dhcp-range \" 192.0.2.100 192.0.2.150 \" --foreman-proxy-dhcp-gateway 192.0.2.1 --foreman-proxy-dhcp-nameservers 192.0.2.2 --foreman-proxy-tftp true --foreman-proxy-tftp-managed true --foreman-proxy-tftp-servername 192.0.2.3",
"satellite-installer --foreman-proxy-dhcp false --foreman-proxy-dns false --foreman-proxy-tftp false",
"Option 66: IP address of Satellite or Capsule Option 67: /pxelinux.0",
"cp mailca.crt /etc/pki/ca-trust/source/anchors/ update-ca-trust enable update-ca-trust",
"mkdir /root/satellite_cert",
"openssl genrsa -out /root/satellite_cert/satellite_cert_key.pem 4096",
"[ req ] req_extensions = v3_req distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] commonName = satellite.example.com [ v3_req ] basicConstraints = CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment extendedKeyUsage = serverAuth, clientAuth, codeSigning, emailProtection subjectAltName = @alt_names [ alt_names ] DNS.1 = satellite.example.com",
"[req_distinguished_name] CN = satellite.example.com countryName = My_Country_Name 1 stateOrProvinceName = My_State_Or_Province_Name 2 localityName = My_Locality_Name 3 organizationName = My_Organization_Or_Company_Name organizationalUnitName = My_Organizational_Unit_Name 4",
"openssl req -new -key /root/satellite_cert/satellite_cert_key.pem \\ 1 -config /root/satellite_cert/openssl.cnf \\ 2 -out /root/satellite_cert/satellite_cert_csr.pem 3",
"katello-certs-check -c /root/satellite_cert/satellite_cert.pem \\ 1 -k /root/satellite_cert/satellite_cert_key.pem \\ 2 -b /root/satellite_cert/ca_cert_bundle.pem 3",
"Validation succeeded. To install the Red Hat Satellite Server with the custom certificates, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" To update the certificates on a currently running Red Hat Satellite installation, run: satellite-installer --scenario satellite --certs-server-cert \" /root/satellite_cert/satellite_cert.pem \" --certs-server-key \" /root/satellite_cert/satellite_cert_key.pem \" --certs-server-ca-cert \" /root/satellite_cert/ca_cert_bundle.pem \" --certs-update-server --certs-update-server-ca",
"dnf install http:// satellite.example.com /pub/katello-ca-consumer-latest.noarch.rpm",
"subscription-manager repos --disable '*' subscription-manager repos --enable=satellite-6.15-for-rhel-8-x86_64-rpms --enable=satellite-maintenance-6.15-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-appstream-rpms",
"dnf module enable satellite:el8",
"dnf install postgresql-server postgresql-evr postgresql-contrib",
"postgresql-setup initdb",
"vi /var/lib/pgsql/data/postgresql.conf",
"listen_addresses = '*'",
"vi /var/lib/pgsql/data/pg_hba.conf",
"host all all Satellite_ip /32 md5",
"systemctl enable --now postgresql",
"firewall-cmd --add-service=postgresql",
"firewall-cmd --runtime-to-permanent",
"su - postgres -c psql",
"CREATE USER \"foreman\" WITH PASSWORD ' Foreman_Password '; CREATE USER \"candlepin\" WITH PASSWORD ' Candlepin_Password '; CREATE USER \"pulp\" WITH PASSWORD ' Pulpcore_Password '; CREATE DATABASE foreman OWNER foreman; CREATE DATABASE candlepin OWNER candlepin; CREATE DATABASE pulpcore OWNER pulp;",
"postgres=# \\c pulpcore You are now connected to database \"pulpcore\" as user \"postgres\".",
"pulpcore=# CREATE EXTENSION IF NOT EXISTS \"hstore\"; CREATE EXTENSION",
"\\q",
"PGPASSWORD=' Foreman_Password ' psql -h postgres.example.com -p 5432 -U foreman -d foreman -c \"SELECT 1 as ping\" PGPASSWORD=' Candlepin_Password ' psql -h postgres.example.com -p 5432 -U candlepin -d candlepin -c \"SELECT 1 as ping\" PGPASSWORD=' Pulpcore_Password ' psql -h postgres.example.com -p 5432 -U pulp -d pulpcore -c \"SELECT 1 as ping\"",
"satellite-installer --foreman-db-database foreman --foreman-db-host postgres.example.com --foreman-db-manage false --foreman-db-password Foreman_Password --foreman-proxy-content-pulpcore-manage-postgresql false --foreman-proxy-content-pulpcore-postgresql-db-name pulpcore --foreman-proxy-content-pulpcore-postgresql-host postgres.example.com --foreman-proxy-content-pulpcore-postgresql-password Pulpcore_Password --foreman-proxy-content-pulpcore-postgresql-user pulp --katello-candlepin-db-host postgres.example.com --katello-candlepin-db-name candlepin --katello-candlepin-db-password Candlepin_Password --katello-candlepin-manage-db false",
"--foreman-db-root-cert <path_to_CA> --foreman-db-sslmode verify-full --foreman-proxy-content-pulpcore-postgresql-ssl true --foreman-proxy-content-pulpcore-postgresql-ssl-root-ca <path_to_CA> --katello-candlepin-db-ssl true --katello-candlepin-db-ssl-ca <path_to_CA> --katello-candlepin-db-ssl-verify true",
"scp root@ dns.example.com :/etc/rndc.key /etc/foreman-proxy/rndc.key",
"restorecon -v /etc/foreman-proxy/rndc.key chown -v root:foreman-proxy /etc/foreman-proxy/rndc.key chmod -v 640 /etc/foreman-proxy/rndc.key",
"echo -e \"server DNS_IP_Address \\n update add aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key nslookup aaa.example.com DNS_IP_Address echo -e \"server DNS_IP_Address \\n update delete aaa.example.com 3600 IN A Host_IP_Address \\n send\\n\" | nsupdate -k /etc/foreman-proxy/rndc.key",
"satellite-installer --foreman-proxy-dns=true --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" DNS_IP_Address \" --foreman-proxy-keyfile=/etc/foreman-proxy/rndc.key",
"dnf install dhcp-server bind-utils",
"tsig-keygen -a hmac-md5 omapi_key",
"cat /etc/dhcp/dhcpd.conf default-lease-time 604800; max-lease-time 2592000; log-facility local7; subnet 192.168.38.0 netmask 255.255.255.0 { range 192.168.38.10 192.168.38.100 ; option routers 192.168.38.1 ; option subnet-mask 255.255.255.0 ; option domain-search \" virtual.lan \"; option domain-name \" virtual.lan \"; option domain-name-servers 8.8.8.8 ; } omapi-port 7911; key omapi_key { algorithm hmac-md5; secret \" My_Secret \"; }; omapi-key omapi_key;",
"firewall-cmd --add-service dhcp",
"firewall-cmd --runtime-to-permanent",
"id -u foreman 993 id -g foreman 990",
"groupadd -g 990 foreman useradd -u 993 -g 990 -s /sbin/nologin foreman",
"chmod o+rx /etc/dhcp/ chmod o+r /etc/dhcp/dhcpd.conf chattr +i /etc/dhcp/ /etc/dhcp/dhcpd.conf",
"systemctl enable --now dhcpd",
"dnf install nfs-utils systemctl enable --now nfs-server",
"mkdir -p /exports/var/lib/dhcpd /exports/etc/dhcp",
"/var/lib/dhcpd /exports/var/lib/dhcpd none bind,auto 0 0 /etc/dhcp /exports/etc/dhcp none bind,auto 0 0",
"mount -a",
"/exports 192.168.38.1 (rw,async,no_root_squash,fsid=0,no_subtree_check) /exports/etc/dhcp 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide) /exports/var/lib/dhcpd 192.168.38.1 (ro,async,no_root_squash,no_subtree_check,nohide)",
"exportfs -rva",
"firewall-cmd --add-port=7911/tcp",
"firewall-cmd --add-service mountd --add-service nfs --add-service rpc-bind --zone public",
"firewall-cmd --runtime-to-permanent",
"satellite-maintain packages install nfs-utils",
"mkdir -p /mnt/nfs/etc/dhcp /mnt/nfs/var/lib/dhcpd",
"chown -R foreman-proxy /mnt/nfs",
"showmount -e DHCP_Server_FQDN rpcinfo -p DHCP_Server_FQDN",
"DHCP_Server_FQDN :/exports/etc/dhcp /mnt/nfs/etc/dhcp nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcp_etc_t:s0\" 0 0 DHCP_Server_FQDN :/exports/var/lib/dhcpd /mnt/nfs/var/lib/dhcpd nfs ro,vers=3,auto,nosharecache,context=\"system_u:object_r:dhcpd_state_t:s0\" 0 0",
"mount -a",
"su foreman-proxy -s /bin/bash cat /mnt/nfs/etc/dhcp/dhcpd.conf cat /mnt/nfs/var/lib/dhcpd/dhcpd.leases exit",
"satellite-installer --enable-foreman-proxy-plugin-dhcp-remote-isc --foreman-proxy-dhcp-provider=remote_isc --foreman-proxy-dhcp-server= My_DHCP_Server_FQDN --foreman-proxy-dhcp=true --foreman-proxy-plugin-dhcp-remote-isc-dhcp-config /mnt/nfs/etc/dhcp/dhcpd.conf --foreman-proxy-plugin-dhcp-remote-isc-dhcp-leases /mnt/nfs/var/lib/dhcpd/dhcpd.leases --foreman-proxy-plugin-dhcp-remote-isc-key-name=omapi_key --foreman-proxy-plugin-dhcp-remote-isc-key-secret= My_Secret --foreman-proxy-plugin-dhcp-remote-isc-omapi-port=7911",
"mkdir -p /mnt/nfs/var/lib/tftpboot",
"TFTP_Server_IP_Address :/exports/var/lib/tftpboot /mnt/nfs/var/lib/tftpboot nfs rw,vers=3,auto,nosharecache,context=\"system_u:object_r:tftpdir_rw_t:s0\" 0 0",
"mount -a",
"satellite-installer --foreman-proxy-tftp-root /mnt/nfs/var/lib/tftpboot --foreman-proxy-tftp=true",
"satellite-installer --foreman-proxy-tftp-servername= TFTP_Server_FQDN",
"kinit idm_user",
"ipa service-add capsule/satellite.example.com",
"satellite-maintain packages install ipa-client",
"ipa-client-install",
"kinit admin",
"rm /etc/foreman-proxy/dns.keytab",
"ipa-getkeytab -p capsule/ [email protected] -s idm1.example.com -k /etc/foreman-proxy/dns.keytab",
"chown foreman-proxy:foreman-proxy /etc/foreman-proxy/dns.keytab",
"kinit -kt /etc/foreman-proxy/dns.keytab capsule/ [email protected]",
"grant capsule\\047 [email protected] wildcard * ANY;",
"grant capsule\\047 [email protected] wildcard * ANY;",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate_gss --foreman-proxy-dns-server=\" idm1.example.com \" --foreman-proxy-dns-tsig-keytab=/etc/foreman-proxy/dns.keytab --foreman-proxy-dns-tsig-principal=\"capsule/ [email protected] \" --foreman-proxy-dns=true",
"######################################################################## include \"/etc/rndc.key\"; controls { inet _IdM_Server_IP_Address_ port 953 allow { _Satellite_IP_Address_; } keys { \"rndc-key\"; }; }; ########################################################################",
"systemctl reload named",
"grant \"rndc-key\" zonesub ANY;",
"scp /etc/rndc.key root@ satellite.example.com :/etc/rndc.key",
"restorecon -v /etc/rndc.key chown -v root:named /etc/rndc.key chmod -v 640 /etc/rndc.key",
"usermod -a -G named foreman-proxy",
"satellite-installer --foreman-proxy-dns-managed=false --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\" IdM_Server_IP_Address \" --foreman-proxy-dns-ttl=86400 --foreman-proxy-dns=true --foreman-proxy-keyfile=/etc/rndc.key",
"key \"rndc-key\" { algorithm hmac-md5; secret \" secret-key ==\"; };",
"echo -e \"server 192.168.25.1\\n update add test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1 Server: 192.168.25.1 Address: 192.168.25.1#53 Name: test.example.com Address: 192.168.25.20",
"echo -e \"server 192.168.25.1\\n update delete test.example.com 3600 IN A 192.168.25.20\\n send\\n\" | nsupdate -k /etc/rndc.key",
"nslookup test.example.com 192.168.25.1",
"satellite-installer",
"satellite-installer --foreman-proxy-dns-managed=true --foreman-proxy-dns-provider=nsupdate --foreman-proxy-dns-server=\"127.0.0.1\" --foreman-proxy-dns=true",
"apache::server_tokens: Prod",
"apache::server_signature: Off",
"cp /etc/dhcp/dhcpd.conf /etc/dhcp/dhcpd.backup",
"journalctl -xe /Stage[main]/Dhcp/File[/etc/dhcp/dhcpd.conf]: Filebucketed /etc/dhcp/dhcpd.conf to puppet with sum 622d9820b8e764ab124367c68f5fa3a1",
"puppet filebucket restore --local --bucket /var/lib/puppet/clientbucket /etc/dhcp/dhcpd.conf \\ 622d9820b8e764ab124367c68f5fa3a1",
"hammer organization configure-cdn --name=\" My_Organization \" --type=redhat_cdn"
] |
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html-single/installing_satellite_server_in_a_disconnected_network_environment/index
|
Chapter 1. OpenShift Virtualization Engine overview
|
Chapter 1. OpenShift Virtualization Engine overview OpenShift Virtualization Engine , an edition of Red Hat OpenShift, is an enterprise-grade virtualization solution for organizations that need a scalable, reliable platform for running and managing virtual machines (VMs). It integrates with your existing IT infrastructure to support VM-exclusive workloads, including those requiring consistent VM performance. 1.1. About OpenShift Virtualization Engine While all editions of Red Hat OpenShift include OpenShift Virtualization , OpenShift Virtualization Engine is a cost-effective, virtualization-only option that does not include container-based or cloud-native features for applications. If you migrate from a traditional virtualization platform to OpenShift Virtualization Engine, you can add containerization and modernization features later by switching to a different edition of Red Hat OpenShift. Note OpenShift Virtualization Engine supports hosting infrastructure services in containers. For more information, see the Self-managed Red Hat OpenShift subscription guide . To simplify migration and VM management, OpenShift Virtualization Engine integrates with the following tools and capabilities: Red Hat Ansible Automation Platform Enables automation at scale, streamlining tasks such as VM provisioning, migration, monitoring, and management. Red Hat Advanced Cluster Management for Virtualization Provides a centralized platform to manage VMs throughout their lifecycle and across clusters. Using a centralized management platform can ensure consistency and compliance, especially for organizations that require robust governance and automated policy enforcement. Demo: Manage and monitor VMs on OpenShift with ACM Red Hat partner ecosystem You can complete your virtualization solution with offerings from Red Hat partners in areas such as networking and disaster recovery. 1.2. Additional resources OpenShift Virtualization documentation OpenShift Virtualization Engine product page Self-managed Red Hat OpenShift subscription guide Virtual infrastructure management with Red Hat Ansible Automation Platform Red Hat Advanced Cluster Management for Kubernetes Migration Toolkit for Virtualization Red Hat OpenShift Virtualization in the Red Hat Ecosystem Catalog 1.3. Get support for OpenShift Virtualization Engine Red Hat offers cluster administrator tools for gathering data, monitoring, and troubleshooting your cluster. If you need help with your OpenShift Virtualization Engine solution, log a case in the appropriate product by using its subscription name. See the Red Hat customer support portal to open a support case.
| null |
https://docs.redhat.com/en/documentation/red_hat_openshift_virtualization_engine/4/html/overview/ove-overview
|
19.3.3. Restoring access to a volume
|
19.3.3. Restoring access to a volume After the encryption keys have been saved (see Section 19.3.1, "Preparation for saving encryption keys" and Section 19.3.2, "Saving encryption keys" ), access can be restored to a volume where needed. Procedure 19.5. Restoring access to a volume Get the escrow packet for the volume from the packet storage and send it to one of the designated users for decryption. The designated user runs: After providing the NSS database password, the designated user chooses a passphrase for encrypting escrow-packet-out . This passphrase can be different every time and only protects the encryption keys while they are moved from the designated user to the target system. Obtain the escrow-packet-out file and the passphrase from the designated user. Boot the target system in an environment that can run volume_key and have the escrow-packet-out file available, such as in a rescue mode. Run: A prompt will appear for the packet passphrase chosen by the designated user, and for a new passphrase for the volume. Mount the volume using the chosen volume passphrase. It is possible to remove the old passphrase that was forgotten by using cryptsetup luksKillSlot , for example, to free up the passphrase slot in the LUKS header of the encrypted volume. This is done with the command cryptsetup luksKillSlot device key-slot . For more information and examples, see cryptsetup --help .
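As a brief, hedged illustration of that cleanup step, the commands below first list the occupied key slots and then remove one of them; the device path /dev/vda2 and slot number 0 are assumptions for this sketch and must be replaced with the actual encrypted device and the slot that holds the forgotten passphrase:

cryptsetup luksDump /dev/vda2
cryptsetup luksKillSlot /dev/vda2 0

cryptsetup luksKillSlot prompts for a valid passphrase to authorize the removal, so run it only after the new passphrase added through volume_key is known to work.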
|
[
"volume_key --reencrypt -d /the/nss/directory escrow-packet-in -o escrow-packet-out",
"volume_key --restore /path/to/volume escrow-packet-out"
] |
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/volume_key-organization-restore-access
|
Chapter 1. Preparing to install on OpenStack
|
Chapter 1. Preparing to install on OpenStack You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP). 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . 1.2. Choosing a method to install OpenShift Container Platform on OpenStack You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.2.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Red Hat OpenStack Platform (RHOSP) infrastructure that is provisioned by the OpenShift Container Platform installation program, by using one of the following methods: Installing a cluster on OpenStack with customizations : You can install a customized cluster on RHOSP. The installation program allows for some customization to be applied at the installation stage. Many other customization options are available post-installation . Installing a cluster on OpenStack in a restricted network : You can install OpenShift Container Platform on RHOSP in a restricted or disconnected network by creating an internal mirror of the installation release content. You can use this method to install a cluster that does not require an active internet connection to obtain the software components. You can also use this installation method to ensure that your clusters only use container images that satisfy your organizational controls on external content. 1.2.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on RHOSP infrastructure that you provision, by using one of the following methods: Installing a cluster on OpenStack on your own infrastructure : You can install OpenShift Container Platform on user-provisioned RHOSP infrastructure. By using this installation method, you can integrate your cluster with existing infrastructure and modifications. For installations on user-provisioned infrastructure, you must create all RHOSP resources, like Nova servers, Neutron ports, and security groups. You can use the provided Ansible playbooks to assist with the deployment process. 1.3. Scanning RHOSP endpoints for legacy HTTPS certificates Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. Run the following script to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the provided script to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. 
Prerequisites On the machine where you run the script, have the following software: Bash version 4.0 or greater grep OpenStack client jq OpenSSL version 1.1.1l or greater Populate the machine with RHOSP credentials for the target cloud. Procedure Save the following script to your machine: #!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog="USD(mktemp)" san="USD(mktemp)" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints \ | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface=="public") | [USDname, .interface, .url] | join(" ")' \ | sort \ > "USDcatalog" while read -r name interface url; do # Ignore HTTP if [[ USD{url#"http://"} != "USDurl" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#"https://"} # If the schema was not HTTPS, error if [[ "USDnoschema" == "USDurl" ]]; then echo "ERROR (unknown schema): USDname USDinterface USDurl" exit 2 fi # Remove the path and only keep host and port noschema="USD{noschema%%/*}" host="USD{noschema%%:*}" port="USD{noschema##*:}" # Add the port if was implicit if [[ "USDport" == "USDhost" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName \ > "USDsan" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ "USD(grep -c "Subject Alternative Name" "USDsan" || true)" -gt 0 ]]; then echo "PASS: USDname USDinterface USDurl" else invalid=USD((invalid+1)) echo "INVALID: USDname USDinterface USDurl" fi done < "USDcatalog" # clean up temporary files rm "USDcatalog" "USDsan" if [[ USDinvalid -gt 0 ]]; then echo "USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field." exit 1 else echo "All HTTPS certificates for this cloud are valid." fi Run the script. Replace any certificates that the script reports as INVALID with certificates that contain SAN fields. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates will be rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead 1.3.1. Scanning RHOSP endpoints for legacy HTTPS certificates manually Beginning with OpenShift Container Platform 4.10, HTTPS certificates must contain subject alternative name (SAN) fields. If you do not have access to the prerequisite tools that are listed in "Scanning RHOSP endpoints for legacy HTTPS certificates", perform the following steps to scan each HTTPS endpoint in a Red Hat OpenStack Platform (RHOSP) catalog for legacy certificates that only contain the CommonName field. Important OpenShift Container Platform does not check the underlying RHOSP infrastructure for legacy certificates prior to installation or updates. Use the following steps to check for these certificates yourself. Failing to update legacy certificates prior to installing or updating a cluster will result in cluster dysfunction. Procedure On a command line, run the following command to view the URL of RHOSP public endpoints: USD openstack catalog list Record the URL for each HTTPS endpoint that the command returns. For each public endpoint, note the host and the port. 
Tip Determine the host of an endpoint by removing the scheme, the port, and the path. For each endpoint, run the following commands to extract the SAN field of the certificate: Set a host variable: USD host=<host_name> Set a port variable: USD port=<port_number> If the URL of the endpoint does not have a port, use the value 443 . Retrieve the SAN field of the certificate: USD openssl s_client -showcerts -servername "USDhost" -connect "USDhost:USDport" </dev/null 2>/dev/null \ | openssl x509 -noout -ext subjectAltName Example output X509v3 Subject Alternative Name: DNS:your.host.example.net For each endpoint, look for output that resembles the example. If there is no output for an endpoint, the certificate of that endpoint is invalid and must be re-issued. Important You must replace all legacy HTTPS certificates before you install OpenShift Container Platform 4.10 or update a cluster to that version. Legacy certificates are rejected with the following message: x509: certificate relies on legacy Common Name field, use SANs instead
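As a worked example of the manual check, the following invocation tests a single hypothetical endpoint taken from a catalog listing; the host keystone.example.net and port 5000 are illustrative values only and should be replaced with the host and port recorded from your own openstack catalog list output:

openssl s_client -showcerts -servername keystone.example.net -connect keystone.example.net:5000 </dev/null 2>/dev/null \
  | openssl x509 -noout -ext subjectAltName

If the certificate is valid, the output contains an X509v3 Subject Alternative Name block as shown earlier; an empty result means the endpoint still serves a legacy CommonName-only certificate.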
|
[
"#!/usr/bin/env bash set -Eeuo pipefail declare catalog san catalog=\"USD(mktemp)\" san=\"USD(mktemp)\" readonly catalog san declare invalid=0 openstack catalog list --format json --column Name --column Endpoints | jq -r '.[] | .Name as USDname | .Endpoints[] | select(.interface==\"public\") | [USDname, .interface, .url] | join(\" \")' | sort > \"USDcatalog\" while read -r name interface url; do # Ignore HTTP if [[ USD{url#\"http://\"} != \"USDurl\" ]]; then continue fi # Remove the schema from the URL noschema=USD{url#\"https://\"} # If the schema was not HTTPS, error if [[ \"USDnoschema\" == \"USDurl\" ]]; then echo \"ERROR (unknown schema): USDname USDinterface USDurl\" exit 2 fi # Remove the path and only keep host and port noschema=\"USD{noschema%%/*}\" host=\"USD{noschema%%:*}\" port=\"USD{noschema##*:}\" # Add the port if was implicit if [[ \"USDport\" == \"USDhost\" ]]; then port='443' fi # Get the SAN fields openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName > \"USDsan\" # openssl returns the empty string if no SAN is found. # If a SAN is found, openssl is expected to return something like: # # X509v3 Subject Alternative Name: # DNS:standalone, DNS:osp1, IP Address:192.168.2.1, IP Address:10.254.1.2 if [[ \"USD(grep -c \"Subject Alternative Name\" \"USDsan\" || true)\" -gt 0 ]]; then echo \"PASS: USDname USDinterface USDurl\" else invalid=USD((invalid+1)) echo \"INVALID: USDname USDinterface USDurl\" fi done < \"USDcatalog\" clean up temporary files rm \"USDcatalog\" \"USDsan\" if [[ USDinvalid -gt 0 ]]; then echo \"USD{invalid} legacy certificates were detected. Update your certificates to include a SAN field.\" exit 1 else echo \"All HTTPS certificates for this cloud are valid.\" fi",
"x509: certificate relies on legacy Common Name field, use SANs instead",
"openstack catalog list",
"host=<host_name>",
"port=<port_number>",
"openssl s_client -showcerts -servername \"USDhost\" -connect \"USDhost:USDport\" </dev/null 2>/dev/null | openssl x509 -noout -ext subjectAltName",
"X509v3 Subject Alternative Name: DNS:your.host.example.net",
"x509: certificate relies on legacy Common Name field, use SANs instead"
] |
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_openstack/preparing-to-install-on-openstack
|
Chapter 14. Activating and deactivating telemetry
|
Chapter 14. Activating and deactivating telemetry Activate the telemetry module to help Ceph developers understand how Ceph is used and what problems users might be experiencing. This helps improve the dashboard experience. Activating the telemetry module sends anonymous data about the cluster back to the Ceph developers. View the telemetry data that is sent to the Ceph developers on the public telemetry dashboard . This allows the community to easily see summary statistics on how many clusters are reporting, their total capacity and OSD count, and version distribution trends. The telemetry report is broken down into several channels, each with a different type of information. Assuming telemetry has been enabled, you can turn on and off the individual channels. If telemetry is off, the per-channel setting has no effect. Basic Provides basic information about the cluster. Crash Provides information about daemon crashes. Device Provides information about device metrics. Ident Provides user-provided identifying information about the cluster. Perf Provides various performance metrics of the cluster. The data reports contain information that helps the developers gain a better understanding of the way Ceph is used. The data includes counters and statistics on how the cluster has been deployed, the version of Ceph, the distribution of the hosts, and other parameters. Important The data reports do not contain any sensitive data like pool names, object names, object contents, hostnames, or device serial numbers. Note Telemetry can also be managed by using an API. For more information, see the Telemetry chapter in the Red Hat Ceph Storage Developer Guide . Procedure Activate the telemetry module in one of the following ways: From the banner within the Ceph dashboard. Go to Settings->Telemetry configuration . Select each channel that telemetry should be enabled on. Note For detailed information about each channel type, click More Info next to the channels. Complete the Contact Information for the cluster. Enter the contact, Ceph cluster description, and organization. Optional: Complete the Advanced Settings field options. Interval Set the interval by hour. The module compiles and sends a new report per this hour interval. The default interval is 24 hours. Proxy Use this to configure an HTTP or HTTPS proxy server if the cluster cannot directly connect to the configured telemetry endpoint. Add the server in one of the following formats: https://10.0.0.1:8080 or https://ceph:[email protected]:8080 The default endpoint is telemetry.ceph.com . Click . This displays the Telemetry report preview before enabling telemetry. Review the Report preview . Note The report can be downloaded and saved locally or copied to the clipboard. Select I agree to my telemetry data being submitted under the Community Data License Agreement . Enable the telemetry module by clicking Update . The following message is displayed, confirming the telemetry activation: 14.1. Deactivating telemetry To deactivate the telemetry module, go to Settings->Telemetry configuration and click Deactivate .
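For administrators who prefer the command line over the dashboard, the telemetry module can also be toggled with the ceph CLI; this is a hedged sketch that assumes access to a host with an admin keyring (for example, inside a cephadm shell), and the exact license argument accepted by ceph telemetry on can vary between releases:

ceph telemetry show
ceph telemetry on --license sharing-1-0
ceph telemetry status
ceph telemetry off

ceph telemetry show prints the report that would be submitted, which serves the same purpose as the dashboard's Report preview step.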
|
[
"The Telemetry module has been configured and activated successfully"
] |
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/dashboard_guide/activating-and-deactivating-telemetry_dash
|
Chapter 1. Overview
|
Chapter 1. Overview Security Red Hat Enterprise Linux 7.4 introduces support for Network Bound Disk Encryption (NBDE), which enables the system administrator to encrypt root volumes of hard drives on bare metal machines without requiring a password to be entered manually when systems are rebooted. The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. The OpenSSH libraries update includes the ability to resume interrupted uploads in Secure File Transfer Protocol (SFTP) and adds support for a new fingerprint type that uses the SHA-256 algorithm. This OpenSSH version also removes server-side support for the SSH-1 protocol. Multiple new Linux Audit capabilities have been added to enable easier administration, to filter the events logged by the Audit system, to gather more information from critical events, and to interpret large numbers of records. The OpenSC set of libraries and utilities adds support for Common Access Card (CAC) cards and now also provides the CoolKey applet functionality. The OpenSSL update includes multiple enhancements, such as support for the Datagram Transport Layer Security (DTLS) version 1.2 protocol and Application-Layer Protocol Negotiation (ALPN). The OpenSCAP tools have been NIST-certified, which enables easier adoption in regulated environments. Cryptographic protocols and algorithms that are considered insecure have been deprecated. However, this version also introduces many other cryptography-related improvements. For more information, see Part V, "Deprecated Functionality" and the Enhancing the Security of the Operating System with Cryptography Changes in Red Hat Enterprise Linux 7.4 Knowledgebase article on the Red Hat Customer Portal. See Chapter 15, Security for more information on security enhancements. Identity Management The System Security Services Daemon (SSSD) in a container is now fully supported. The Identity Management (IdM) server container is available as a Technology Preview feature. Users are now able to install new Identity Management servers, replicas, and clients on systems with FIPS mode enabled. Several enhancements related to smart card authentication have been introduced. For detailed information on changes in IdM, see Chapter 5, Authentication and Interoperability . For details on deprecated capabilities related to IdM, see Part V, "Deprecated Functionality" . Networking NetworkManager supports additional features for routing, enables the Media Access Control Security (MACsec) technology, and is now able to handle unmanaged devices. Kernel Generic Routing Encapsulation (GRE) tunneling has been enhanced. For more networking features, see Chapter 14, Networking . Kernel Support for NVMe Over Fabric has been added to the NVM-Express kernel driver, which increases flexibility when accessing high performance NVMe storage devices located in the data center on both Ethernet and Infiniband fabric infrastructures. For further kernel-related changes, refer to Chapter 12, Kernel . Storage and File Systems LVM provides full support for RAID takeover, which allows users to convert a RAID logical volume from one RAID level to another, and for RAID reshaping, which allows users to reshape properties, such as the RAID algorithm, stripe size, or number of images. You can now enable SELinux support for containers when you use OverlayFS with Docker. 
NFS over RDMA (NFSoRDMA) server is now fully supported when accessed by Red Hat Enterprise Linux clients. See Chapter 17, Storage for further storage-related features and Chapter 9, File Systems for enhancements to file systems. Tools The Performance Co-Pilot (PCP) application has been enhanced to support new client tools, such as pcp2influxdb , pcp-mpstat , and pcp-pidstat . Additionally, new PCP performance metrics from several subsystems are available for a variety of Performance Co-Pilot analysis tools. For more information regarding updates to various tools, see Chapter 7, Compiler and Tools . High Availability Red Hat Enterprise Linux 7.4 introduces full support for the following features: clufter , a tool for transforming and analyzing cluster configuration formats Quorum devices (QDevice) in a Pacemaker cluster for managing stretch clusters Booth cluster ticket manager For more information on the high availability features introduced in this release, see Chapter 6, Clustering . Virtualization Red Hat Enterprise Linux 7 guest virtual machines now support the Elastic Network Adapter (ENA), and thus provide enhanced networking capabilities when running on the Amazon Web Services (AWS) cloud. For further enhancements to Virtualization, see Chapter 19, Virtualization . Management and Automation Red Hat Enterprise Linux 7.4 includes Red Hat Enterprise Linux System Roles powered by Ansible , a configuration interface that simplifies management and maintenance of Red Hat Enterprise Linux deployments. This feature is available as a Technology Preview. For details, refer to Chapter 47, Red Hat Enterprise Linux System Roles Powered by Ansible . Red Hat Insights Since Red Hat Enterprise Linux 7.2, the Red Hat Insights service is available. Red Hat Insights is a proactive service designed to enable you to identify, examine, and resolve known technical issues before they affect your deployment. Insights leverages the combined knowledge of Red Hat Support Engineers, documented solutions, and resolved issues to deliver relevant, actionable information to system administrators. The service is hosted and delivered through the customer portal at https://access.redhat.com/insights/ or through Red Hat Satellite. For further information, data security, and limits, refer to https://access.redhat.com/insights/splash/ . Red Hat Customer Portal Labs Red Hat Customer Portal Labs is a set of tools in a section of the Customer Portal available at https://access.redhat.com/labs/ . The applications in Red Hat Customer Portal Labs can help you improve performance, quickly troubleshoot issues, identify security problems, and quickly deploy and configure complex applications. Some of the most popular applications are: Registration Assistant Code Browser Red Hat Product Certificates Red Hat Network (RHN) System List Exporter Kickstart Generator Log Reaper Load Balancer Configuration Tool Multipath Helper
| null |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.4_release_notes/chap-red_hat_enterprise_linux-7.4_release_notes-overview
|
A.2. Wake-ups
|
A.2. Wake-ups Many applications scan configuration files for changes. In many cases, the scan is performed at a fixed interval, for example, every minute. This can be a problem, because it forces a disk to wake up from spindowns. The best solution is to find a good interval, a good checking mechanism, or to check for changes with inotify and react to events. Inotify can check a variety of changes on a file or a directory. For example: #include <stdio.h> #include <stdlib.h> #include <sys/time.h> #include <sys/types.h> #include <sys/inotify.h> #include <unistd.h> int main(int argc, char *argv[]) { int fd; int wd; int retval; struct timeval tv; fd = inotify_init(); /* checking modification of a file - writing into */ wd = inotify_add_watch(fd, "./myConfig", IN_MODIFY); if (wd < 0) { printf("inotify cannot be used\n"); /* switch back to checking */ } fd_set rfds; FD_ZERO(&rfds); FD_SET(fd, &rfds); tv.tv_sec = 5; tv.tv_usec = 0; retval = select(fd + 1, &rfds, NULL, NULL, &tv); if (retval == -1) perror("select()"); else if (retval) { printf("file was modified\n"); } else printf("timeout\n"); return EXIT_SUCCESS; } The advantage of this approach is the variety of checks that you can perform. The main limitation is that only a limited number of watches are available on a system. The number can be obtained from /proc/sys/fs/inotify/max_user_watches and although it can be changed, this is not recommended. Furthermore, in case inotify fails, the code has to fall back to a different check method, which usually means many occurrences of #if #define in the source code. For more information on inotify , see the inotify(7) man page.
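To see the watch limit mentioned above on a running system, either of the following commands can be used; raising the value is possible through sysctl but, as noted, is generally not recommended:

cat /proc/sys/fs/inotify/max_user_watches
sysctl fs.inotify.max_user_watches

If inotify_add_watch() starts failing with ENOSPC, the process has hit this per-user limit and the code should fall back to its interval-based check, as the example program does.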
|
[
"#include <stdio.h> #include <stdlib.h> #include <sys/time.h> #include <sys/types.h> #include <sys/inotify.h> #include <unistd.h> int main(int argc, char *argv[]) { int fd; int wd; int retval; struct timeval tv; fd = inotify_init(); /* checking modification of a file - writing into */ wd = inotify_add_watch(fd, \"./myConfig\", IN_MODIFY); if (wd < 0) { printf(\"inotify cannot be used\\n\"); /* switch back to previous checking */ } fd_set rfds; FD_ZERO(&rfds); FD_SET(fd, &rfds); tv.tv_sec = 5; tv.tv_usec = 0; retval = select(fd + 1, &rfds, NULL, NULL, &tv); if (retval == -1) perror(\"select()\"); else if (retval) { printf(\"file was modified\\n\"); } else printf(\"timeout\\n\"); return EXIT_SUCCESS; }"
] |
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/power_management_guide/developer_tips-wake-ups
|