14.6.2. Domain Browsing
14.6.2. Domain Browsing By default, a Windows NT PDC for a domain is also the domain master browser for that domain. A Samba server must not be set up as a domain master browser in this type of situation. Network browsing may fail if the Samba server is running WINS while other domain controllers are in operation. For subnets that do not include the Windows NT PDC, a Samba server can be implemented as a local master browser. Configuring the smb.conf file for a local master browser (or for no browsing at all) in a domain controller environment is the same as the workgroup configuration.
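A minimal smb.conf sketch for the local-master-browser case described above. The workgroup and NetBIOS names and the os level value are illustrative assumptions, not settings from this guide; the key point is that domain master = no keeps Samba from competing with the Windows NT PDC while still letting it win the local browser election:

```ini
[global]
    workgroup = DOCS
    netbios name = DOCS_SRV
    ; never claim domain master browser status; the NT PDC holds that role
    domain master = no
    ; compete for, and prefer to win, the local master browser election
    local master = yes
    preferred master = yes
    ; election priority; higher values win against typical Windows clients
    os level = 35
```

Setting local master = no instead would give the configuration with no browsing role at all.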
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-samba-domain-browsing
Chapter 36. Getting started with Multipath TCP
Chapter 36. Getting started with Multipath TCP Transmission Control Protocol (TCP) ensures reliable delivery of data across the internet and automatically adjusts its bandwidth in response to network load. Multipath TCP (MPTCP) is an extension to the original, single-path TCP protocol. MPTCP enables a transport connection to operate across multiple paths simultaneously and brings network connection redundancy to user endpoint devices. 36.1. Understanding MPTCP The Multipath TCP (MPTCP) protocol allows simultaneous usage of multiple paths between connection endpoints. The protocol design improves connection stability and also brings other benefits compared to single-path TCP. Note In MPTCP terminology, links are considered paths. The following are some of the advantages of using MPTCP: It allows a connection to simultaneously use multiple network interfaces. If a connection is limited by the link speed, using multiple links can increase the connection throughput. Note that if the connection is bound to a CPU, using multiple links can instead slow the connection down. It increases the resilience to link failures. For more details about MPTCP, review the Additional resources. Additional resources Understanding Multipath TCP: High availability for endpoints and the networking highway of the future RFC8684: TCP Extensions for Multipath Operation with Multiple Addresses 36.2. Preparing RHEL to enable MPTCP support By default, MPTCP support is disabled in RHEL. Enable MPTCP so that applications that support this feature can use it. Additionally, you have to configure user-space applications to force the use of MPTCP sockets if those applications use TCP sockets by default.
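The kernel step of this section can be sketched as a sysctl drop-in file; the path 90-enable-MPTCP.conf matches the example used throughout this chapter:

```ini
# /etc/sysctl.d/90-enable-MPTCP.conf
net.mptcp.enabled=1
```

Apply it without a reboot with sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf.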
Prerequisites The following packages are installed: iperf3 mptcpd systemtap Procedure Enable MPTCP sockets in the kernel: Start the iperf3 server, and force it to create MPTCP sockets instead of TCP sockets: Connect the client to the server, and force it to create MPTCP sockets instead of TCP sockets: After the connection is established, verify the ss output to see the subflow-specific status: Verify MPTCP counters: Additional resources tcp(7) and mptcpize(8) man pages on your system 36.3. Using iproute2 to temporarily configure and enable multiple paths for MPTCP applications Each MPTCP connection uses a single subflow similar to plain TCP. To get the MPTCP benefits, specify a higher limit for the maximum number of subflows for each MPTCP connection. Then configure additional endpoints to create those subflows. Important The configuration in this procedure will not persist after rebooting your machine. Note that MPTCP does not yet support mixed IPv6 and IPv4 endpoints for the same socket. Use endpoints belonging to the same address family. Prerequisites The mptcpd package is installed The iperf3 package is installed Server network interface settings: enp4s0: 192.0.2.1/24 enp1s0: 198.51.100.1/24 Client network interface settings: enp4s0f0: 192.0.2.2/24 enp4s0f1: 198.51.100.2/24 Procedure Configure the client to accept up to 1 additional remote address, as provided by the server: Add IP address 198.51.100.1 as a new MPTCP endpoint on the server: The signal option ensures that the ADD_ADDR packet is sent after the three-way handshake.
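The client and server steps above can be collected into a sketch. The commands need root privileges and an MPTCP-enabled kernel, so they are wrapped in shell functions here rather than executed directly; the addresses and the enp1s0 device name come from the example topology in this section:

```shell
# Client side: accept up to 1 additional remote address from the server.
mptcp_client_setup() {
    ip mptcp limits set add_addr_accepted 1
}

# Server side: advertise the second address (an ADD_ADDR packet sent
# after the three-way handshake, because of the "signal" flag), then
# start an iperf3 server forced onto MPTCP sockets.
mptcp_server_setup() {
    ip mptcp endpoint add 198.51.100.1 dev enp1s0 signal
    mptcpize run iperf3 -s
}

# From the client, the test traffic would then be:
#   mptcpize iperf3 -c 192.0.2.1 -t 3
echo "sketch defined"
```

The functions only document the order of operations; run the client limit command before the connection is opened so the additional subflow can be accepted.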
Start the iperf3 server, and force it to create MPTCP sockets instead of TCP sockets: Connect the client to the server, and force it to create MPTCP sockets instead of TCP sockets: Verification Verify that the connection is established: Verify the connection and IP address limit: Verify the newly added endpoint: Verify MPTCP counters by using the nstat MPTcp* command on a server: Additional resources mptcpize(8) and ip-mptcp(8) man pages on your system 36.4. Permanently configuring multiple paths for MPTCP applications You can configure MultiPath TCP (MPTCP) using the nmcli command to permanently establish multiple subflows between a source and destination system. The subflows can use different resources, different routes to the destination, and even different networks, such as Ethernet, cellular, or Wi-Fi. As a result, you achieve combined connections, which increase network resilience and throughput. The server uses the following network interfaces in our example: enp4s0: 192.0.2.1/24 enp1s0: 198.51.100.1/24 enp7s0: 192.0.2.3/24 The client uses the following network interfaces in our example: enp4s0f0: 192.0.2.2/24 enp4s0f1: 198.51.100.2/24 enp6s0: 192.0.2.5/24 Prerequisites You configured the default gateway on the relevant interfaces. Procedure Enable MPTCP sockets in the kernel: Optional: The default subflow limit in the RHEL kernel is 2. If you require more: Create the /etc/systemd/system/set_mptcp_limit.service file with the following content: The oneshot unit executes the ip mptcp limits set subflows 3 command after your network ( network.target ) is operational during every boot process. The ip mptcp limits set subflows 3 command sets the maximum number of additional subflows for each connection, so 4 subflows in total. A maximum of 3 additional subflows can be added.
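The oneshot unit described above, stored as /etc/systemd/system/set_mptcp_limit.service, looks like this:

```ini
[Unit]
Description=Set MPTCP subflow limit to 3
After=network.target

[Service]
ExecStart=ip mptcp limits set subflows 3
Type=oneshot

[Install]
WantedBy=multi-user.target
```

Because Type=oneshot with WantedBy=multi-user.target re-runs the command at every boot, the limit persists without a sysctl equivalent.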
Enable the set_mptcp_limit service: Enable MPTCP on all connection profiles that you want to use for connection aggregation: The connection.mptcp-flags parameter configures MPTCP endpoints and the IP address flags. If MPTCP is enabled in a NetworkManager connection profile, the setting will configure the IP addresses of the relevant network interface as MPTCP endpoints. By default, NetworkManager does not add MPTCP flags to IP addresses if there is no default gateway. If you want to bypass that check, you need to use the also-without-default-route flag. Verification Verify that you enabled the MPTCP kernel parameter: Verify that you set the subflow limit correctly, in case the default was not enough: Verify that you configured the per-address MPTCP setting correctly: Additional resources nm-settings-nmcli(5) ip-mptcp(8) Section 36.1, "Understanding MPTCP" Understanding Multipath TCP: High availability for endpoints and the networking highway of the future RFC8684: TCP Extensions for Multipath Operation with Multiple Addresses Using Multipath TCP to better survive outages and increase bandwidth 36.5. Monitoring MPTCP sub-flows The life cycle of a multipath TCP (MPTCP) socket can be complex: The main MPTCP socket is created, the MPTCP path is validated, one or more sub-flows are created and eventually removed. Finally, the MPTCP socket is terminated. The MPTCP protocol allows monitoring MPTCP-specific events related to socket and sub-flow creation and deletion, using the ip utility provided by the iproute package. This utility uses the netlink interface to monitor MPTCP events. This procedure demonstrates how to monitor MPTCP events. For that, it simulates a MPTCP server application, and a client connects to this service. The involved clients in this example use the following interfaces and IP addresses: Server: 192.0.2.1 Client (Ethernet connection): 192.0.2.2 Client (WiFi connection): 192.0.2.3 To simplify this example, all interfaces are within the same subnet. 
This is not a requirement. However, it is important that routing has been configured correctly, and the client can reach the server via both interfaces. Prerequisites A RHEL client with two network interfaces, such as a laptop with Ethernet and WiFi The client can connect to the server via both interfaces A RHEL server Both the client and the server run RHEL 9.0 or later You installed the mptcpd package on both the client and the server Procedure Set the per-connection additional subflow limit to 1 on both the client and the server: On the server, to simulate an MPTCP server application, start netcat ( nc ) in listen mode with enforced MPTCP sockets instead of TCP sockets: The -k option causes nc not to close the listener after the first accepted connection. This is required to demonstrate the monitoring of sub-flows. On the client: Identify the interface with the lowest metric: The enp1s0 interface has a lower metric than wlp1s0 . Therefore, RHEL uses enp1s0 by default. On the first terminal, start the monitoring: On the second terminal, start an MPTCP connection to the server: RHEL uses the enp1s0 interface and its associated IP address as a source for this connection. On the monitoring terminal, the ip mptcp monitor command now logs: The token is a unique ID that identifies the MPTCP socket, and it later enables you to correlate MPTCP events on the same socket. On the terminal with the running nc connection to the server, press Enter . This first data packet fully establishes the connection. Note that, as long as no data has been sent, the connection is not established. On the monitoring terminal, ip mptcp monitor now logs: Optional: Display the connections to port 12345 on the server: At this point, only one connection to the server has been established. On a third terminal, create another endpoint: This command uses the name and IP address of the WiFi interface of the client.
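The multi-terminal procedure above can be summarized in one sketch per host role. The commands require root, MPTCP support, and the example network, so they are wrapped in functions here rather than executed; the addresses and interface names are the example values from this section:

```shell
# Server: MPTCP-enabled listener; -k keeps the listener open after the
# first accepted connection so additional sub-flows can be observed.
monitor_demo_server() {
    mptcpize run nc -l -k -p 12345
}

# Client: in the real procedure these run in three separate terminals.
monitor_demo_client() {
    ip mptcp monitor                                      # terminal 1: watch events
    mptcpize run nc 192.0.2.1 12345                       # terminal 2: open connection
    ip mptcp endpoint add dev wlp1s0 192.0.2.3 subflow    # terminal 3: add Wi-Fi sub-flow
}
echo "sketch defined"
```

Each event logged by ip mptcp monitor carries the same token value, which is how the three terminals' actions are correlated to one MPTCP socket.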
On the monitoring terminal, ip mptcp monitor now logs: The locid field displays the local address ID of the new sub-flow and identifies this sub-flow even if the connection uses network address translation (NAT). The saddr4 field matches the endpoint's IP address from the ip mptcp endpoint add command. Optional: Display the connections to port 12345 on the server: The command now displays two connections: The connection with source address 192.0.2.2 corresponds to the first MPTCP sub-flow that you established previously. The connection with source address 192.0.2.3 corresponds to the second sub-flow over the wlp1s0 interface. On the third terminal, delete the endpoint: Use the ID from the locid field of the ip mptcp monitor output, or retrieve the endpoint ID using the ip mptcp endpoint show command. On the monitoring terminal, ip mptcp monitor now logs: On the first terminal with the nc client, press Ctrl + C to terminate the session. On the monitoring terminal, ip mptcp monitor now logs: Additional resources ip-mptcp(1) man page on your system How NetworkManager manages multiple default gateways 36.6. Disabling Multipath TCP in the kernel You can explicitly disable the MPTCP option in the kernel. Procedure Disable the mptcp.enabled option. Verification Verify that the mptcp.enabled option is disabled in the kernel.
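Disabling MPTCP in Section 36.6 mirrors the enable step: overwrite the same drop-in file with the option set to 0 and re-apply it:

```ini
# /etc/sysctl.d/90-enable-MPTCP.conf
net.mptcp.enabled=0
```

Apply it with sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf, and verify with sysctl -a | grep mptcp.enabled, which should report net.mptcp.enabled = 0.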
[ "echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "mptcpize run iperf3 -s Server listening on 5201", "mptcpize iperf3 -c 127.0.0.1 -t 3", "ss -nti '( dport :5201 )' State Recv-Q Send-Q Local Address:Port Peer Address:Port Process ESTAB 0 0 127.0.0.1:41842 127.0.0.1:5201 cubic wscale:7,7 rto:205 rtt:4.455/8.878 ato:40 mss:21888 pmtu:65535 rcvmss:536 advmss:65483 cwnd:10 bytes_sent:141 bytes_acked:142 bytes_received:4 segs_out:8 segs_in:7 data_segs_out:3 data_segs_in:3 send 393050505bps lastsnd:2813 lastrcv:2772 lastack:2772 pacing_rate 785946640bps delivery_rate 10944000000bps delivered:4 busy:41ms rcv_space:43690 rcv_ssthresh:43690 minrtt:0.008 tcp-ulp-mptcp flags:Mmec token:0000(id:0)/2ff053ec(id:0) seq:3e2cbea12d7673d4 sfseq:3 ssnoff:ad3d00f4 maplen:2", "nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableSYNTX 2 0.0 MPTcpExtMPCapableSYNACKRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0", "ip mptcp limits set add_addr_accepted 1", "ip mptcp endpoint add 198.51.100.1 dev enp1s0 signal", "mptcpize run iperf3 -s Server listening on 5201", "mptcpize iperf3 -c 192.0.2.1 -t 3", "ss -nti '( sport :5201 )'", "ip mptcp limit show", "ip mptcp endpoint show", "nstat MPTcp * #kernel MPTcpExtMPCapableSYNRX 2 0.0 MPTcpExtMPCapableACKRX 2 0.0 MPTcpExtMPJoinSynRx 2 0.0 MPTcpExtMPJoinAckRx 2 0.0 MPTcpExtEchoAdd 2 0.0", "echo \"net.mptcp.enabled=1\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "[Unit] Description=Set MPTCP subflow limit to 3 After=network.target [Service] ExecStart=ip mptcp limits set subflows 3 Type=oneshot [Install] WantedBy=multi-user.target", "systemctl enable --now set_mptcp_limit", "nmcli connection modify <profile_name> connection.mptcp-flags signal,subflow,also-without-default-route", "sysctl net.mptcp.enabled net.mptcp.enabled = 1", "ip mptcp limit show add_addr_accepted 2 subflows 3", "ip mptcp endpoint show 192.0.2.1 id 1 subflow dev enp4s0 
198.51.100.1 id 2 subflow dev enp1s0 192.0.2.3 id 3 subflow dev enp7s0 192.0.2.4 id 4 subflow dev enp3s0", "ip mptcp limits set add_addr_accepted 0 subflows 1", "mptcpize run nc -l -k -p 12345", "ip -4 route 192.0.2.0/24 dev enp1s0 proto kernel scope link src 192.0.2.2 metric 100 192.0.2.0/24 dev wlp1s0 proto kernel scope link src 192.0.2.3 metric 600", "ip mptcp monitor", "mptcpize run nc 192.0.2.1 12345", "[ CREATED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345", "[ ESTABLISHED] token=63c070d2 remid=0 locid=0 saddr4=192.0.2.2 daddr4=192.0.2.1 sport=36444 dport=12345", "ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345", "ip mptcp endpoint add dev wlp1s0 192.0.2.3 subflow", "[SF_ESTABLISHED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3", "ss -taunp | grep \":12345\" tcp ESTAB 0 0 192.0.2.2:36444 192.0.2.1:12345 tcp ESTAB 0 0 192.0.2.3%wlp1s0:53345 192.0.2.1:12345", "ip mptcp endpoint delete id 2", "[ SF_CLOSED] token=63c070d2 remid=0 locid=2 saddr4=192.0.2.3 daddr4=192.0.2.1 sport=53345 dport=12345 backup=0 ifindex=3", "[ CLOSED] token=63c070d2", "echo \"net.mptcp.enabled=0\" > /etc/sysctl.d/90-enable-MPTCP.conf sysctl -p /etc/sysctl.d/90-enable-MPTCP.conf", "sysctl -a | grep mptcp.enabled net.mptcp.enabled = 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/getting-started-with-multipath-tcp_configuring-and-managing-networking
Chapter 13. Configuration of SNMP traps
Chapter 13. Configuration of SNMP traps As a storage administrator, you can deploy and configure the simple network management protocol (SNMP) gateway in a Red Hat Ceph Storage cluster to receive alerts from the Prometheus Alertmanager and route them as SNMP traps to the cluster. 13.1. Simple network management protocol Simple network management protocol (SNMP) is one of the most widely used open protocols for monitoring distributed systems and devices across a variety of hardware and software platforms. Ceph's SNMP integration focuses on forwarding alerts from its Prometheus Alertmanager cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP Notification and sends it on to a designated SNMP management platform. The gateway daemon is from the snmp_notifier project, which provides SNMP V2c and V3 support with authentication and encryption. The Red Hat Ceph Storage SNMP gateway service deploys one instance of the gateway by default. You can increase this by providing placement information. However, if you enable multiple SNMP gateway daemons, your SNMP management platform receives multiple notifications for the same event. The SNMP traps are alert messages, and the Prometheus Alertmanager sends these alerts to the SNMP notifier, which then looks for an object identifier (OID) in the given alert's labels. Each SNMP trap has a unique ID, which allows it to send additional traps with updated status to a given SNMP poller. SNMP hooks into the Ceph health checks so that every health warning generates a specific SNMP trap. To work correctly and transfer device status information to the monitoring user, SNMP relies on several components. There are four main components that make up SNMP: SNMP Manager - The SNMP manager, also called a management station, is a computer that runs a network monitoring platform, which polls SNMP-enabled devices and retrieves data from them.
An SNMP Manager queries agents, receives responses from agents, and acknowledges asynchronous events from agents. SNMP Agent - An SNMP agent is a program that runs on a system to be managed and contains the MIB database for the system. The agent collects data, such as bandwidth and disk space usage, aggregates it, and sends it to the management information base (MIB). Management information base (MIB) - These are components contained within the SNMP agents. The SNMP manager uses this as a database and asks the agent for access to particular information. This information is needed for the network management systems (NMS). The NMS polls the agent to take information from these files and then proceeds to translate it into graphs and displays that can be viewed by the user. MIBs contain statistical and control values that are determined by the network device. SNMP Devices The following versions of SNMP are compatible and supported for gateway implementation: V2c - Uses a community string without any authentication and is vulnerable to outside attacks. V3 authNoPriv - Uses username and password authentication without encryption. V3 authPriv - Uses username and password authentication with encryption to the SNMP management platform. Important When using SNMP traps, ensure that you have the correct security configuration for your version to minimize the vulnerabilities that are inherent to SNMP and keep your network protected from unauthorized users. 13.2. Configuring snmptrapd It is important to configure the simple network management protocol (SNMP) target before deploying the snmp-gateway service because the snmptrapd daemon contains the auth settings that you need to specify when creating the snmp-gateway service. The SNMP gateway feature provides a means of exposing the alerts that are generated in the Prometheus stack to an SNMP management platform. You can configure the SNMP trap destination by using the snmptrapd tool.
This tool allows you to establish one or more SNMP trap listeners. The following parameters are important for configuration: The engine-id is a unique identifier for the device, in hex, and is required for the SNMPV3 gateway. Red Hat recommends using `8000C53F_CLUSTER_FSID_WITHOUT_DASHES_` for this parameter. The snmp-community, which is the SNMP_COMMUNITY_FOR_SNMPV2 parameter, is public for the SNMPV2c gateway. The auth-protocol, which is the AUTH_PROTOCOL, is mandatory for the SNMPV3 gateway and is SHA by default. The privacy-protocol, which is the PRIVACY_PROTOCOL, is mandatory for the SNMPV3 gateway. The PRIVACY_PASSWORD is mandatory for the SNMPV3 gateway with encryption. The SNMP_V3_AUTH_USER_NAME is the user name and is mandatory for the SNMPV3 gateway. The SNMP_V3_AUTH_PASSWORD is the password and is mandatory for the SNMPV3 gateway. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. firewalld installed on the Red Hat Enterprise Linux system. Procedure On the SNMP management host, install the SNMP packages: Example Open port 162 for SNMP to receive alerts: Example Implement the management information base (MIB) to make sense of the SNMP notification and enhance SNMP support on the destination host. Copy the raw file from the main repository: https://github.com/ceph/ceph/blob/master/monitoring/snmp/CEPH-MIB.txt Example Create the snmptrapd directory. Example Create the configuration files in the snmptrapd directory for each protocol based on the SNMP version: Syntax For SNMPV2c, create the snmptrapd_public.conf file as follows: Example The public setting here must match the snmp_community setting used when deploying the snmp-gateway service. For SNMPV3 with authentication only, create the snmptrapd_auth.conf file as follows: Example The 0x8000C53Ff64f341c655d11eb8778fa163e914bcc string is the engine_id, and myuser and mypassword are the credentials. The password security is defined by the SHA algorithm.
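For the SNMPV3 authentication-only case above, the snmptrapd_auth.conf file reduces to two required lines. The optional format2 output-template line is omitted in this sketch; the engine ID and credentials are the example values from this section:

```
# /root/snmptrapd/snmptrapd_auth.conf  (SNMPV3, authentication only)
createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword
authuser log,execute myuser
```

The createuser line defines the SNMPV3 user with SHA authentication against the given engine ID, and the authuser line authorizes that user to have its traps logged and processed.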
This corresponds to the settings for deploying the snmp-gateway daemon. Example For SNMPV3 with authentication and encryption, create the snmptrapd_authpriv.conf file as follows: Example The 0x8000C53Ff64f341c655d11eb8778fa163e914bcc string is the engine_id, and myuser and mypassword are the credentials. The password security is defined by the SHA algorithm, and DES is the type of privacy encryption. This corresponds to the settings for deploying the snmp-gateway daemon. Example Run the daemon on the SNMP management host: Syntax Example If any alert is triggered on the storage cluster, you can monitor the output on the SNMP management host. Verify the SNMP traps and also the traps decoded by the MIB. Example In the above example, an alert is generated after the Prometheus module is disabled. Additional Resources See the Deploying the SNMP gateway section in the Red Hat Ceph Storage Operations Guide. 13.3. Deploying the SNMP gateway You can deploy the simple network management protocol (SNMP) gateway using either SNMPV2c or SNMPV3. There are two methods to deploy the SNMP gateway: By creating a credentials file. By creating one service configuration yaml file with all the details. You can use the following parameters to deploy the SNMP gateway based on the versions: The service_type is the snmp-gateway. The service_name is any user-defined string. The count is the number of SNMP gateways to be deployed in a storage cluster. The snmp_destination parameter must be of the format hostname:port. The engine-id is a unique identifier for the device, in hex, and is required for the SNMPV3 gateway. Red Hat recommends using `8000C53F_CLUSTER_FSID_WITHOUT_DASHES_` for this parameter. The snmp_community parameter is public for the SNMPV2c gateway. The auth-protocol is mandatory for the SNMPV3 gateway and is SHA by default. The privacy-protocol is mandatory for the SNMPV3 gateway with authentication and encryption. The port is 9464 by default.
You must provide a -i FILENAME to pass the secrets and passwords to the orchestrator. Once the SNMP gateway service is deployed or updated, the Prometheus Alertmanager configuration is automatically updated to forward any alert that has an object identifier (OID) to the SNMP gateway daemon for further processing. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the nodes. snmptrapd configured on the destination host, which is the SNMP management host. Procedure Log into the Cephadm shell: Example Create a label for the host on which the SNMP gateway needs to be deployed: Syntax Example Create a credentials file or a service configuration file based on the SNMP version: For SNMPV2c, create the file as follows: Example OR Example For SNMPV3 with authentication only, create the file as follows: Example OR Example For SNMPV3 with authentication and encryption, create the file as follows: Example OR Example Run the ceph orch command: Syntax OR Syntax For SNMPV2c, with the snmp_creds file, run the ceph orch command with the snmp-version as V2c: Example For SNMPV3 with authentication only, with the snmp_creds file, run the ceph orch command with the snmp-version as V3 and engine-id: Example For SNMPV3 with authentication and encryption, with the snmp_creds file, run the ceph orch command with the snmp-version as V3, privacy-protocol, and engine-id: Example OR For all the SNMP versions, with the snmp-gateway file, run the following command: Example Additional Resources See the Configuring `snmptrapd` section in the Red Hat Ceph Storage Operations Guide.
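As a concrete sketch of the service-configuration-file method, the SNMPV3 authentication-only case in this section expands to the following YAML; the credentials, engine ID, and destination are the example values from this chapter:

```yaml
# snmp-gateway.yml
service_type: snmp-gateway
service_name: snmp-gateway
placement:
  count: 1
spec:
  credentials:
    snmp_v3_auth_username: myuser
    snmp_v3_auth_password: mypassword
  engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc
  port: 9464
  snmp_destination: 192.168.122.1:162
  snmp_version: V3
```

Apply it with ceph orch apply -i snmp-gateway.yml.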
[ "dnf install -y net-snmp-utils net-snmp", "firewall-cmd --zone=public --add-port=162/udp firewall-cmd --zone=public --add-port=162/udp --permanent", "curl -o CEPH_MIB.txt -L https://raw.githubusercontent.com/ceph/ceph/master/monitoring/snmp/CEPH-MIB.txt scp CEPH_MIB.txt root@host02:/usr/share/snmp/mibs", "mkdir /root/snmptrapd/", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x_ENGINE_ID_ SNMPV3_AUTH_USER_NAME AUTH_PROTOCOL SNMP_V3_AUTH_PASSWORD PRIVACY_PROTOCOL PRIVACY_PASSWORD authuser log,execute SNMP_V3_AUTH_USER_NAME authCommunity log,execute,net SNMP_COMMUNITY_FOR_SNMPV2", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n authCommunity log,execute,net public", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser SHA mypassword authuser log,execute myuser", "snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword", "format2 %V\\n% Agent Address: %A \\n Agent Hostname: %B \\n Date: %H - %J - %K - %L - %M - %Y \\n Enterprise OID: %N \\n Trap Type: %W \\n Trap Sub-Type: %q \\n Community/Infosec Context: %P \\n Uptime: %T \\n Description: %W \\n PDU Attribute/Value Pair Array:\\n%v \\n -------------- \\n createuser -e 0x8000C53Ff64f341c655d11eb8778fa163e914bcc myuser 
SHA mypassword DES mysecret authuser log,execute myuser", "snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret", "/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/ CONFIGURATION_FILE -Of -Lo :162", "/usr/sbin/snmptrapd -M /usr/share/snmp/mibs -m CEPH-MIB.txt -f -C -c /root/snmptrapd/snmptrapd_auth.conf -Of -Lo :162", "NET-SNMP version 5.8 Agent Address: 0.0.0.0 Agent Hostname: <UNKNOWN> Date: 15 - 5 - 12 - 8 - 10 - 4461391 Enterprise OID: . Trap Type: Cold Start Trap Sub-Type: 0 Community/Infosec Context: TRAP2, SNMP v3, user myuser, context Uptime: 0 Description: Cold Start PDU Attribute/Value Pair Array: .iso.org.dod.internet.mgmt.mib-2.1.3.0 = Timeticks: (292276100) 3 days, 19:52:41.00 .iso.org.dod.internet.snmpV2.snmpModules.1.1.4.1.0 = OID: .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.1 = STRING: \"1.3.6.1.4.1.50495.1.2.1.6.2[alertname=CephMgrPrometheusModuleInactive]\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.2 = STRING: \"critical\" .iso.org.dod.internet.private.enterprises.ceph.cephCluster.cephNotifications.prometheus.promMgr.promMgrPrometheusInactive.3 = STRING: \"Status: critical - Alert: CephMgrPrometheusModuleInactive Summary: Ceph's mgr/prometheus module is not available Description: The mgr/prometheus module at 10.70.39.243:9283 is unreachable. This could mean that the module has been disabled or the mgr itself is down. Without the mgr/prometheus module metrics and alerts will no longer function. Open a shell to ceph and use 'ceph -s' to determine whether the mgr is active. 
If the mgr is not active, restart it, otherwise you can check the mgr/prometheus module is loaded with 'ceph mgr module ls' and if it's not listed as enabled, enable it with 'ceph mgr module enable prometheus'\"", "cephadm shell", "ceph orch host label add HOSTNAME snmp-gateway", "ceph orch host label add host02 snmp-gateway", "cat snmp_creds.yml snmp_community: public", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_community: public port: 9464 snmp_destination: 192.168.122.73:162 snmp_version: V2c", "cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3", "cat snmp_creds.yml snmp_v3_auth_username: myuser snmp_v3_auth_password: mypassword snmp_v3_priv_password: mysecret", "cat snmp-gateway.yml service_type: snmp-gateway service_name: snmp-gateway placement: count: 1 spec: credentials: snmp_v3_auth_password: mypassword snmp_v3_auth_username: myuser snmp_v3_priv_password: mysecret engine_id: 8000C53Ff64f341c655d11eb8778fa163e914bcc port: 9464 snmp_destination: 192.168.122.1:162 snmp_version: V3", "ceph orch apply snmp-gateway --snmp_version= V2c_OR_V3 --destination= SNMP_DESTINATION [--port= PORT_NUMBER ] [--engine-id=8000C53F_CLUSTER_FSID_WITHOUT_DASHES_] [--auth-protocol= MDS_OR_SHA ] [--privacy_protocol= DES_OR_AES ] -i FILENAME", "ceph orch apply -i FILENAME .yml", "ceph orch apply snmp-gateway --snmp-version=V2c --destination=192.168.122.73:162 --port=9464 -i snmp_creds.yml", "ceph orch apply snmp-gateway --snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 -i snmp_creds.yml", "ceph orch apply snmp-gateway 
--snmp-version=V3 --engine-id=8000C53Ff64f341c655d11eb8778fa163e914bcc--destination=192.168.122.73:162 --privacy-protocol=AES -i snmp_creds.yml", "ceph orch apply -i snmp-gateway.yml" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/operations_guide/configuration-of-snmp-traps
23.11. Additional Resources
23.11. Additional Resources The following sources of information provide additional resources regarding PTP and the ptp4l tools. 23.11.1. Installed Documentation ptp4l(8) man page - Describes ptp4l options including the format of the configuration file. pmc(8) man page - Describes the PTP management client and its command options. phc2sys(8) man page - Describes a tool for synchronizing the system clock to a PTP hardware clock (PHC).
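As a quick illustration of the pmc management client described above, the following sketch queries a running ptp4l instance over its UNIX domain socket. It assumes ptp4l is already running with its default socket, so the calls are wrapped in a function rather than executed here:

```shell
ptp_query() {
    # -u: use a local UNIX domain socket; -b 0: limit the boundary-hop count
    pmc -u -b 0 'GET CURRENT_DATA_SET'   # offset and steps from the grandmaster
    pmc -u -b 0 'GET TIME_STATUS_NP'     # detailed time status (linuxptp-specific TLV)
}
echo "sketch defined"
```

See the pmc(8) man page for the full list of management IDs that can be queried this way.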
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-ptp_additional_resources
Chapter 1. Introduction to Service Registry
Chapter 1. Introduction to Service Registry This chapter introduces Service Registry concepts and features and provides details on the supported artifact types that are stored in the registry: Section 1.1, "What is Service Registry?" Section 1.2, "Schema and API artifacts in Service Registry" Section 1.3, "Manage content using the Service Registry web console" Section 1.4, "Service Registry REST API for clients" Section 1.5, "Service Registry storage options" Section 1.6, "Validate Kafka messages using schemas and Java client serializers/deserializers" Section 1.7, "Stream data to external systems with Kafka Connect converters" Section 1.8, "Service Registry demonstration examples" Section 1.9, "Service Registry available distributions" 1.1. What is Service Registry? Service Registry is a datastore for sharing standard event schemas and API designs across event-driven and API architectures. You can use Service Registry to decouple the structure of your data from your client applications, and to share and manage your data types and API descriptions at runtime using a REST interface. Client applications can dynamically push or pull the latest schema updates to or from Service Registry at runtime without needing to redeploy. Developer teams can query Service Registry for existing schemas required for services already deployed in production, and can register new schemas required for new services in development. You can enable client applications to use schemas and API designs stored in Service Registry by specifying the Service Registry URL in your client application code. Service Registry can store schemas used to serialize and deserialize messages, which are referenced from your client applications to ensure that the messages that they send and receive are compatible with those schemas. 
Using Service Registry to decouple your data structure from your applications reduces costs by decreasing overall message size, and creates efficiencies by increasing consistent reuse of schemas and API designs across your organization. Service Registry provides a web console to make it easy for developers and administrators to manage registry content. You can configure optional rules to govern the evolution of your Service Registry content. These include rules to ensure that uploaded content is valid, or is compatible with other versions. Any configured rules must pass before new versions can be uploaded to Service Registry, which ensures that time is not wasted on invalid or incompatible schemas or API designs. Service Registry is based on the Apicurio Registry open source community project. For details, see https://github.com/apicurio/apicurio-registry . Service Registry capabilities Multiple payload formats for standard event schema and API specifications such as Apache Avro, JSON Schema, Google Protobuf, AsyncAPI, OpenAPI, and more. Pluggable Service Registry storage options in AMQ Streams or PostgreSQL database. Rules for content validation, compatibility, and integrity to govern how Service Registry content evolves over time. Service Registry content management using web console, REST API, command line, Maven plug-in, or Java client. Full Apache Kafka schema registry support, including integration with Kafka Connect for external systems. Kafka client serializers/deserializers (SerDes) to validate message types at runtime. Compatibility with existing Confluent schema registry client applications. Cloud-native Quarkus Java runtime for low memory footprint and fast deployment times. Operator-based installation of Service Registry on OpenShift. OpenID Connect (OIDC) authentication using Red Hat Single Sign-On. 1.2. 
Schema and API artifacts in Service Registry The items stored in Service Registry, such as event schemas and API designs, are known as registry artifacts . The following shows an example of an Apache Avro schema artifact in JSON format for a simple share price application: Example Avro schema { "type": "record", "name": "price", "namespace": "com.example", "fields": [ { "name": "symbol", "type": "string" }, { "name": "price", "type": "string" } ] } When a schema or API design is added as an artifact in Service Registry, client applications can then use that schema or API design to validate that the client messages conform to the correct data structure at runtime. Groups of schemas and APIs An artifact group is an optional named collection of schema or API artifacts. Each group contains a logically related set of schemas or API designs, typically managed by a single entity, belonging to a particular application or organization. You can create optional artifact groups when adding your schemas and API designs to organize them in Service Registry. For example, you could create groups to match your development and production application environments, or your sales and engineering organizations. Schema and API groups can contain multiple artifact types. For example, you could have Protobuf, Avro, JSON Schema, OpenAPI, or AsyncAPI artifacts all in the same group. You can create schema and API artifacts and groups using the Service Registry web console, REST API, command line, Maven plug-in, or Java client application. 
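The runtime validation described above — checking that a client message conforms to a registered schema — can be sketched in plain Python. This is a hypothetical, stdlib-only illustration of what "conforms to the schema" means for the price record above; real applications would use an Avro library or the Service Registry SerDes classes rather than this manual check.

```python
import json

# The Avro schema from the example above, parsed from its JSON form.
PRICE_SCHEMA = json.loads("""
{
  "type": "record",
  "name": "price",
  "namespace": "com.example",
  "fields": [
    {"name": "symbol", "type": "string"},
    {"name": "price", "type": "string"}
  ]
}
""")

# Subset of Avro primitive type names mapped to Python types, for illustration only.
AVRO_PRIMITIVES = {"string": str, "int": int, "long": int, "boolean": bool, "double": float}

def conforms(record: dict, schema: dict) -> bool:
    """Check that a record has exactly the declared fields with matching primitive types."""
    fields = {f["name"]: f["type"] for f in schema["fields"]}
    if set(record) != set(fields):
        return False
    return all(isinstance(record[name], AVRO_PRIMITIVES[typ]) for name, typ in fields.items())

print(conforms({"symbol": "RHT", "price": "187.50"}, PRICE_SCHEMA))  # True
print(conforms({"symbol": "RHT", "price": 187.5}, PRICE_SCHEMA))     # False: price must be a string
```

Note that both fields in the example schema are declared as `string`, so even a numeric-looking price must be sent as a string to pass validation.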
The following simple example shows using the Core Registry REST API: $ curl -X POST -H "Content-type: application/json; artifactType=AVRO" \ -H "X-Registry-ArtifactId: share-price" \ --data '{"type":"record","name":"price","namespace":"com.example", \ "fields":[{"name":"symbol","type":"string"},{"name":"price","type":"string"}]}' \ https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts This example creates an artifact group named my-group and adds an Avro schema with an artifact ID of share-price . Note Specifying a group is optional when using the Service Registry web console, and a default group is created automatically. When using the REST API or Maven plug-in, specify the default group in the API path if you do not want to create a unique group. Additional resources For information on supported artifact types, see Chapter 9, Service Registry artifact reference . For information on the Core Registry API, see the Apicurio Registry REST API documentation . References to other schemas and APIs Some Service Registry artifact types can include artifact references from one artifact file to another. You can create efficiencies by defining reusable schema or API components, and then referencing them from multiple locations. For example, you can specify a reference in JSON Schema or OpenAPI using a $ref statement, or in Google Protobuf using an import statement, or in Apache Avro using a nested namespace.
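The curl call above can be mirrored from client code. The following sketch builds the equivalent HTTP request with Python's standard library, using the same hypothetical registry host, group, headers, and payload as the example; the request is constructed but deliberately not sent, so it can be inspected offline.

```python
import json
import urllib.request

# Registry endpoint from the curl example above (hypothetical host).
url = "https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts"

schema = {
    "type": "record",
    "name": "price",
    "namespace": "com.example",
    "fields": [
        {"name": "symbol", "type": "string"},
        {"name": "price", "type": "string"},
    ],
}

req = urllib.request.Request(
    url,
    data=json.dumps(schema).encode("utf-8"),
    headers={
        # artifactType tells the registry how to parse and validate the content.
        "Content-Type": "application/json; artifactType=AVRO",
        # The artifact ID under which the schema is stored in the group.
        "X-Registry-ArtifactId": "share-price",
    },
    method="POST",
)

# Built but not sent; urllib.request.urlopen(req) would submit it to a live registry.
# Note: urllib normalizes stored header names (capitalize()), hence the lookup key below.
print(req.get_method(), req.full_url)
print(req.get_header("X-registry-artifactid"))
```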
The following example shows a simple Avro schema named TradeKey that includes a reference to another schema named Exchange using a nested namespace: TradeKey schema with nested Exchange schema { "namespace": "com.kubetrade.schema.trade", "type": "record", "name": "TradeKey", "fields": [ { "name": "exchange", "type": "com.kubetrade.schema.common.Exchange" }, { "name": "key", "type": "string" } ] } Exchange schema { "namespace": "com.kubetrade.schema.common", "type": "enum", "name": "Exchange", "symbols" : ["GEMINI"] } An artifact reference is stored in Service Registry as a collection of artifact metadata that maps from an artifact type-specific reference to an internal Service Registry reference. Each artifact reference in Service Registry is composed of the following: Group ID Artifact ID Artifact version Artifact reference name You can manage artifact references using the Service Registry core REST API, Maven plug-in, and Java serializers/deserializers (SerDes). Service Registry stores the artifact references along with the artifact content. Service Registry also maintains a collection of all artifact references so you can search them or list all references for a specific artifact. Supported artifact types Service Registry currently supports artifact references for the following artifact types only: Avro Protobuf JSON Schema OpenAPI AsyncAPI Additional resources For details on managing artifact references, see: Chapter 4, Managing Service Registry content using the REST API . Chapter 5, Managing Service Registry content using the Maven plug-in . For a Java example, see the Apicurio Registry SerDes with references demonstration . 1.3. Manage content using the Service Registry web console You can use the Service Registry web console to browse and search the schema and API artifacts and optional groups stored in the registry, and to add new schema and API artifacts, groups, and versions. You can search for artifacts by label, name, group, and description.
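Conceptually, resolving the TradeKey/Exchange reference shown earlier amounts to looking up the fully qualified type name and substituting the referenced definition. The sketch below is a simplified, hypothetical illustration of that substitution — not how Service Registry actually stores or dereferences artifact metadata internally.

```python
import json

trade_key = {
    "namespace": "com.kubetrade.schema.trade",
    "type": "record",
    "name": "TradeKey",
    "fields": [
        {"name": "exchange", "type": "com.kubetrade.schema.common.Exchange"},
        {"name": "key", "type": "string"},
    ],
}

exchange = {
    "namespace": "com.kubetrade.schema.common",
    "type": "enum",
    "name": "Exchange",
    "symbols": ["GEMINI"],
}

def inline_references(schema: dict, registry: dict) -> dict:
    """Replace fully qualified type names in record fields with the referenced schema."""
    resolved = dict(schema)
    resolved["fields"] = [
        {**f, "type": registry.get(f["type"], f["type"])} for f in schema["fields"]
    ]
    return resolved

# A lookup table keyed by fully qualified name, as a reference store might expose it.
refs = {f'{exchange["namespace"]}.{exchange["name"]}': exchange}
resolved = inline_references(trade_key, refs)
print(json.dumps(resolved["fields"][0]["type"], indent=2))
```

Primitive types like `string` are not in the lookup table, so they pass through unchanged; only the `com.kubetrade.schema.common.Exchange` reference is inlined.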
You can view an artifact's content or its available versions, or download an artifact file locally. You can also configure optional rules for registry content, both globally and for each schema and API artifact. These optional rules for content validation and compatibility are applied when new schema and API artifacts or versions are uploaded to the registry. For more details, see Chapter 10, Service Registry content rule reference . Figure 1.1. Service Registry web console The Service Registry web console is available from http://MY_REGISTRY_URL/ui . Additional resources Chapter 3, Managing Service Registry content using the web console 1.4. Service Registry REST API for clients Client applications can use the Core Registry API v2 to manage the schema and API artifacts in Service Registry. This API provides operations for the following features: Admin Export or import Service Registry data in a .zip file, and manage logging levels for the Service Registry instance at runtime. Artifacts Manage schema and API artifacts stored in Service Registry. You can also manage the lifecycle state of an artifact: enabled, disabled, or deprecated. Artifact metadata Manage details about a schema or API artifact. You can edit details such as artifact name, description, or labels. Details such as artifact group, and when the artifact was created or modified are read-only. Artifact rules Configure rules to govern the content evolution of a specific schema or API artifact to prevent invalid or incompatible content from being added to Service Registry. Artifact rules override any global rules configured. Artifact versions Manage versions that are created when a schema or API artifact is updated. You can also manage the lifecycle state of an artifact version: enabled, disabled, or deprecated. Global rules Configure rules to govern the content evolution of all schema and API artifacts to prevent invalid or incompatible content from being added to Service Registry. 
Global rules are applied only if an artifact does not have its own specific artifact rules configured. Search Browse or search for schema and API artifacts and versions, for example, by name, group, description, or label. System Get the Service Registry version and the limits on resources for the Service Registry instance. Users Get the current Service Registry user. Compatibility with other schema registry REST APIs Service Registry also provides compatibility with the following schema registries by including implementations of their respective REST APIs: Service Registry Core Registry API v1 Confluent Schema Registry API v6 Confluent Schema Registry API v7 CNCF CloudEvents Schema Registry API v0 Applications using Confluent client libraries can use Service Registry as a drop-in replacement. For more details, see Replacing Confluent Schema Registry . Additional resources For more information on the Core Registry API v2, see the Apicurio Registry REST API documentation . For API documentation on the Core Registry API v2 and all compatible APIs, browse to the /apis endpoint of your Service Registry instance, for example, http://MY-REGISTRY-URL/apis . 1.5. Service Registry storage options Service Registry provides the following options for the underlying storage of registry data: Table 1.1. Service Registry data storage options Storage option Description PostgreSQL database PostgreSQL is the recommended data storage option for performance, stability, and data management (backup/restore, and so on) in a production environment. AMQ Streams Kafka storage is provided for production environments where database management expertise is not available, or where storage in Kafka is a specific requirement. Additional resources For more details on storage options, see Installing and deploying Service Registry on OpenShift . 1.6. 
Validate Kafka messages using schemas and Java client serializers/deserializers Kafka producer applications can use serializers to encode messages that conform to a specific event schema. Kafka consumer applications can then use deserializers to validate that messages have been serialized using the correct schema, based on a specific schema ID. Figure 1.2. Service Registry and Kafka client SerDes architecture Service Registry provides Kafka client serializers/deserializers (SerDes) to validate the following message types at runtime: Apache Avro Google Protobuf JSON Schema The Service Registry Maven repository and source code distributions include the Kafka SerDes implementations for these message types, which Kafka client application developers can use to integrate with Service Registry. These implementations include custom Java classes for each supported message type, for example, io.apicurio.registry.serde.avro , which client applications can use to pull schemas from Service Registry at runtime for validation. Additional resources Chapter 7, Validating Kafka messages using serializers/deserializers in Java clients 1.7. Stream data to external systems with Kafka Connect converters You can use Service Registry with Apache Kafka Connect to stream data between Kafka and external systems. Using Kafka Connect, you can define connectors for different systems to move large volumes of data into and out of Kafka-based systems. Figure 1.3. Service Registry and Kafka Connect architecture Service Registry provides the following features for Kafka Connect: Storage for Kafka Connect schemas Kafka Connect converters for Apache Avro and JSON Schema Core Registry API to manage schemas You can use the Avro and JSON Schema converters to map Kafka Connect schemas into Avro or JSON schemas. These schemas can then serialize message keys and values into the compact Avro binary format or human-readable JSON format. 
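Serializers that validate "based on a specific schema ID" typically frame each message so the consumer can recover the ID before deserializing. The sketch below shows one common framing convention — a magic byte followed by an 8-byte schema ID, then the payload. The exact byte layout is an assumption for illustration; the real wire format depends on the SerDes classes and ID handler configured, so treat this as a conceptual model only.

```python
import struct

MAGIC_BYTE = 0  # Conventional marker identifying a registry-framed message.

def frame(schema_id: int, payload: bytes) -> bytes:
    """Prefix a serialized payload with a magic byte and a big-endian 8-byte schema ID."""
    return struct.pack(">bq", MAGIC_BYTE, schema_id) + payload

def unframe(message: bytes) -> tuple:
    """Split a framed message back into (schema_id, payload)."""
    magic, schema_id = struct.unpack(">bq", message[:9])
    if magic != MAGIC_BYTE:
        raise ValueError("not a registry-framed message")
    return schema_id, message[9:]

# A consumer would use the recovered schema_id to fetch the schema from the registry.
msg = frame(42, b'{"symbol":"RHT","price":"187.50"}')
schema_id, payload = unframe(msg)
print(schema_id, payload)
```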
The converted JSON is less verbose because the messages do not contain the schema information, only the schema ID. Service Registry can manage and track the Avro and JSON schemas used in the Kafka topics. Because the schemas are stored in Service Registry and decoupled from the message content, each message needs to include only a small schema identifier. For an I/O bound system like Kafka, this means more total throughput for producers and consumers. The Avro and JSON Schema serializers and deserializers (SerDes) provided by Service Registry are used by Kafka producers and consumers in this use case. Kafka consumer applications that you write to consume change events can use the Avro or JSON SerDes to deserialize these events. You can install the Service Registry SerDes in any Kafka-based system and use them along with Kafka Connect, or with a Kafka Connect-based system such as Debezium. Additional resources Configuring Debezium to use Avro serialization and Service Registry Example of using Debezium to monitor the PostgreSQL database used by Apicurio Registry Apache Kafka Connect documentation 1.8. Service Registry demonstration examples Service Registry provides open source example applications that demonstrate how to use Service Registry in different use case scenarios. For example, these include storing schemas used by Kafka serializer and deserializer (SerDes) Java classes. These classes fetch the schema from Service Registry during produce or consume operations to serialize, deserialize, or validate the Kafka message payload. These applications demonstrate use cases such as the following examples: Apache Avro Kafka SerDes Apache Avro Maven plug-in Apache Camel Quarkus and Kafka CloudEvents Confluent Kafka SerDes Custom ID strategy Event-driven architecture with Debezium Google Protobuf Kafka SerDes JSON Schema Kafka SerDes REST clients Additional resources For more details, see https://github.com/Apicurio/apicurio-registry-examples 1.9.
Service Registry available distributions Service Registry provides the following distribution options. Table 1.2. Service Registry Operator and images Distribution Location Release category Service Registry Operator OpenShift web console under Operators OperatorHub General Availability Container image for Service Registry Operator Red Hat Ecosystem Catalog General Availability Container image for Kafka storage in AMQ Streams Red Hat Ecosystem Catalog General Availability Container image for database storage in PostgreSQL Red Hat Ecosystem Catalog General Availability Table 1.3. Service Registry zip downloads Distribution Location Release category Example custom resource definitions for installation Red Hat Software Downloads General Availability Service Registry v1 to v2 migration tool Red Hat Software Downloads General Availability Maven repository Red Hat Software Downloads General Availability Source code Red Hat Software Downloads General Availability Kafka Connect converters Red Hat Software Downloads General Availability Note You must have a subscription for Red Hat Integration and be logged into the Red Hat Customer Portal to access the available Service Registry distributions.
[ "{ \"type\": \"record\", \"name\": \"price\", \"namespace\": \"com.example\", \"fields\": [ { \"name\": \"symbol\", \"type\": \"string\" }, { \"name\": \"price\", \"type\": \"string\" } ] }", "curl -X POST -H \"Content-type: application/json; artifactType=AVRO\" -H \"X-Registry-ArtifactId: share-price\" --data '{\"type\":\"record\",\"name\":\"price\",\"namespace\":\"com.example\", \"fields\":[{\"name\":\"symbol\",\"type\":\"string\"},{\"name\":\"price\",\"type\":\"string\"}]}' https://my-registry.example.com/apis/registry/v2/groups/my-group/artifacts", "{ \"namespace\": \"com.kubetrade.schema.trade\", \"type\": \"record\", \"name\": \"TradeKey\", \"fields\": [ { \"name\": \"exchange\", \"type\": \"com.kubetrade.schema.common.Exchange\" }, { \"name\": \"key\", \"type\": \"string\" } ] }", "{ \"namespace\": \"com.kubetrade.schema.common\", \"type\": \"enum\", \"name\": \"Exchange\", \"symbols\" : [\"GEMINI\"] }" ]
https://docs.redhat.com/en/documentation/red_hat_integration/2023.q4/html/service_registry_user_guide/intro-to-the-registry_registry
35.3. Configuring the Certificate Server Component
35.3. Configuring the Certificate Server Component To configure Certificate Server (CS) manually, open the /etc/pki/pki-tomcat/server.xml file. Set all occurrences of the sslVersionRangeStream and sslVersionRangeDatagram parameters to the following values: Alternatively, use the following command to make the replacements automatically: Restart CS:
[ "sslVersionRangeStream=\"tls1_2:tls1_2\" sslVersionRangeDatagram=\"tls1_2:tls1_2\"", "sed -i 's/tls1_[01]:tls1_2/tls1_2:tls1_2/g' /etc/pki/pki-tomcat/server.xml", "systemctl restart [email protected]" ]
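The sed command above rewrites any range starting at tls1_0 or tls1_1 to the TLS 1.2-only range. The equivalent substitution can be sketched in Python, shown here against a small sample string rather than the live server.xml file (the attribute values are illustrative, not the full file contents):

```python
import re

# Sample attribute values as they might appear in server.xml before hardening.
before = (
    'sslVersionRangeStream="tls1_0:tls1_2" '
    'sslVersionRangeDatagram="tls1_1:tls1_2"'
)

# Same pattern as: sed -i 's/tls1_[01]:tls1_2/tls1_2:tls1_2/g'
after = re.sub(r"tls1_[01]:tls1_2", "tls1_2:tls1_2", before)
print(after)
```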
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/configure-tls-cs
Appendix A. About Service Interconnect documentation
Appendix A. About Service Interconnect documentation Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Revised on 2025-02-24 19:05:05 UTC
null
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/overview/about-documentation
Chapter 17. Inviting users to your RHACS instance
Chapter 17. Inviting users to your RHACS instance By inviting users to Red Hat Advanced Cluster Security for Kubernetes (RHACS), you can ensure that the right users have the appropriate access rights within your cluster. You can invite one or more users by assigning roles and defining the authentication provider. 17.1. Configuring access control and sending invitations By configuring access control in the RHACS portal, you can invite users to your RHACS instance. Procedure In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab, and then click Invite users . In the Invite users dialog box, provide the following information: Emails to invite : Enter one or more email addresses of the users you want to invite. Ensure that they are valid email addresses associated with the intended recipients. Provider : From the drop-down list, select a provider you want to use for each invited user. Important If you have only one authentication provider available, it is selected by default. If multiple authentication providers are available and at least one of them is Red Hat SSO or Default Internal SSO , that provider is selected by default. If multiple authentication providers are available, but none of them is Red Hat SSO or Default Internal SSO , you are prompted to select one manually. If you have not yet set up an authentication provider, a warning message appears and the form is disabled. Click the link, which takes you to the Access Control section to configure an authentication provider. Role : From the drop-down list, select the role to assign to each invited user. Click Invite users . In the confirmation dialog box, you receive confirmation that the users have been created with the selected role. Copy the email addresses and the message into an email that you create in your own email client, and send it to the users. Click Done .
Verification In the RHACS portal, go to the Platform Configuration Access Control Auth providers tab. Select the authentication provider you used to invite users. Scroll down to the Rules section. Verify that the user emails and authentication provider roles have been added to the list.
null
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/configuring/inviting-users-to-your-rhacs-instance
Chapter 145. XSLT
Chapter 145. XSLT Only producer is supported The XSLT component allows you to process a message using an XSLT template. This can be ideal when using Templating to generate responses to requests. 145.1. Dependencies When using xslt with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-starter</artifactId> </dependency> 145.2. URI format The URI format contains templateName , which can be one of the following: the classpath-local URI of the template to invoke the complete URL of the remote template. You can append query options to the URI in the following format: ?option=value&option=value&... Table 145.1. Example URIs URI Description xslt:com/acme/mytransform.xsl Refers to the file com/acme/mytransform.xsl on the classpath xslt:file:///foo/bar.xsl Refers to the file /foo/bar.xsl xslt:http://acme.com/cheese/foo.xsl Refers to the remote HTTP resource 145.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 145.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 145.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both.
You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for URLs, port numbers, sensitive information, and other settings. Placeholders allow you to externalize the configuration from your code, giving you more flexible and reusable code. 145.4. Component Options The XSLT component supports 7 options, which are listed below. Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name.
String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom UriResolver. Should not be used together with the option 'uriResolverFactory'. URIResolver uriResolverFactory (advanced) To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. XsltUriResolverFactory 145.5. Endpoint Options The XSLT endpoint is configured using URI syntax: with the following path and query parameters: 145.5.1. Path Parameters (1 parameters) Name Description Default Type resourceUri (producer) Required Path to the template. The following is supported by the default URIResolver. You can prefix with: classpath, file, http, ref, or bean. classpath, file and http loads the resource using these protocols (classpath is default). ref will lookup the resource in the registry. bean will call a method on a bean to be used as the resource. For bean you can specify the method name after dot, eg bean:myBean.myMethod. String 145.5.2. Query Parameters (13 parameters) Name Description Default Type contentCache (producer) Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true boolean deleteOutputFile (producer) If you have output=file then this option dictates whether or not the output file should be deleted when the Exchange is done processing. For example suppose the output file is a temporary file, then it can be a good idea to delete it after use. false boolean failOnNullBody (producer) Whether or not to throw an exception if the input body is null. 
true boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean output (producer) Option to specify which output type to use. Possible values are: string, bytes, DOM, file. The first three options are all in memory based, where as file is streamed directly to a java.io.File. For file you must specify the filename in the IN header with the key Exchange.XSLT_FILE_NAME which is also CamelXsltFileName. Also any paths leading to the filename must be created beforehand, otherwise an exception is thrown at runtime. Enum values: string bytes DOM file string XsltOutput transformerCacheSize (producer) The number of javax.xml.transform.Transformer object that are cached for reuse to avoid calls to Template.newTransformer(). 0 int entityResolver (advanced) To use a custom org.xml.sax.EntityResolver with javax.xml.transform.sax.SAXSource. EntityResolver errorListener (advanced) Allows to configure to use a custom javax.xml.transform.ErrorListener. Beware when doing this then the default error listener which captures any errors or fatal errors and store information on the Exchange as properties is not in use. So only use this option for special use-cases. ErrorListener resultHandlerFactory (advanced) Allows you to use a custom org.apache.camel.builder.xml.ResultHandlerFactory which is capable of using custom org.apache.camel.builder.xml.ResultHandler types. 
ResultHandlerFactory transformerFactory (advanced) To use a custom XSLT transformer factory. TransformerFactory transformerFactoryClass (advanced) To use a custom XSLT transformer factory, specified as a FQN class name. String transformerFactoryConfigurationStrategy (advanced) A configuration strategy to apply on freshly created instances of TransformerFactory. TransformerFactoryConfigurationStrategy uriResolver (advanced) To use a custom javax.xml.transform.URIResolver. URIResolver 145.6. Using XSLT endpoints The following example uses an XSLT template to formulate a response for InOut message exchanges (where there is a JMSReplyTo header): from("activemq:My.Queue"). to("xslt:com/acme/mytransform.xsl"); If you want to use InOnly and consume the message and send it to another destination you could use the following route: from("activemq:My.Queue"). to("xslt:com/acme/mytransform.xsl"). to("activemq:Another.Queue"); 145.7. Getting Useable Parameters into the XSLT By default, all headers are added as parameters which are then available in the XSLT. To make the parameters usable, you will need to declare them. <setHeader name="myParam"><constant>42</constant></setHeader> <to uri="xslt:MyTransform.xsl"/> The parameter also needs to be declared in the top level of the XSLT for it to be available: <xsl: ...... > <xsl:param name="myParam"/> <xsl:template ...> 145.8. Spring XML versions To use the above examples in Spring XML you would use something like the following code: <camelContext xmlns="http://activemq.apache.org/camel/schema/spring"> <route> <from uri="activemq:My.Queue"/> <to uri="xslt:org/apache/camel/spring/processor/example.xsl"/> <to uri="activemq:Another.Queue"/> </route> </camelContext> 145.9. Using xsl:include Camel provides its own implementation of URIResolver . This allows Camel to load included files from the classpath. For example the include file in the following code will be located relative to the starting endpoint.
<xsl:include href="staff_template.xsl"/> This means that Camel will locate the file in the classpath as org/apache/camel/component/xslt/staff_template.xsl You can use classpath: or file: to instruct Camel to look either in the classpath or file system. If you omit the prefix then Camel uses the prefix from the endpoint configuration. If no prefix is specified in the endpoint configuration, the default is classpath: . You can also refer backwards in the include paths. In the following example, the xsl file will be resolved under org/apache/camel/component . <xsl:include href="../staff_other_template.xsl"/> 145.10. Using xsl:include and default prefix Camel will use the prefix from the endpoint configuration as the default prefix. You can explicitly specify file: or classpath: loading. The two loading types can be mixed in an XSLT script, if necessary. 145.11. Dynamic stylesheets To provide a dynamic stylesheet at runtime you can define a dynamic URI. See How to use a dynamic URI in to() for more information. 145.12. Accessing warnings, errors and fatalErrors from XSLT ErrorListener Any warning/error or fatalError is stored on the current Exchange as a property with the keys Exchange.XSLT_ERROR , Exchange.XSLT_FATAL_ERROR , or Exchange.XSLT_WARNING which allows end users to get hold of any errors happening during transformation. For example, in the stylesheet below, we want to terminate if a staff member has an empty dob field, and include a custom error message using xsl:message. <xsl:template match="/"> <html> <body> <xsl:for-each select="staff/programmer"> <p>Name: <xsl:value-of select="name"/><br /> <xsl:if test="dob=''"> <xsl:message terminate="yes">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template> The exception is stored on the Exchange as a warning with the key Exchange.XSLT_WARNING. 145.13. Spring Boot Auto-Configuration The component supports 8 options, which are listed below.
Name Description Default Type camel.component.xslt.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.xslt.content-cache Cache for the resource content (the stylesheet file) when it is loaded. If set to false Camel will reload the stylesheet file on each message processing. This is good for development. A cached stylesheet can be forced to reload at runtime via JMX using the clearCachedStylesheet operation. true Boolean camel.component.xslt.enabled Whether to enable auto configuration of the xslt component. This is enabled by default. Boolean camel.component.xslt.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.xslt.transformer-factory-class To use a custom XSLT transformer factory, specified as a FQN class name. String camel.component.xslt.transformer-factory-configuration-strategy A configuration strategy to apply on freshly created instances of TransformerFactory. The option is a org.apache.camel.component.xslt.TransformerFactoryConfigurationStrategy type. TransformerFactoryConfigurationStrategy camel.component.xslt.uri-resolver To use a custom UriResolver. 
Should not be used together with the option 'uriResolverFactory'. The option is a javax.xml.transform.URIResolver type. URIResolver camel.component.xslt.uri-resolver-factory To use a custom UriResolver which depends on a dynamic endpoint resource URI. Should not be used together with the option 'uriResolver'. The option is a org.apache.camel.component.xslt.XsltUriResolverFactory type. XsltUriResolverFactory
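The classpath/file resolution rules described in the xsl:include sections above can be sketched as a small model. This is an illustrative Python rendering of the documented behavior only, not Camel's actual URIResolver implementation; the function name resolve_include is hypothetical:

```python
from posixpath import dirname, join, normpath

def resolve_include(endpoint_resource, href, default_scheme="classpath"):
    """Sketch of the documented resolution rules, not Camel's URIResolver."""
    # an explicit classpath: or file: prefix on the include wins
    if href.startswith(("classpath:", "file:")):
        return href
    scheme, _, base = endpoint_resource.partition(":")
    if not base:
        # endpoint had no prefix either, so the default (classpath:) applies
        scheme, base = default_scheme, endpoint_resource
    # relative hrefs (including '../') resolve against the including file's directory
    return scheme + ":" + normpath(join(dirname(base), href))
```

For example, an include of `../staff_other_template.xsl` from a stylesheet at `classpath:org/apache/camel/component/xslt/staff.xsl` resolves under `org/apache/camel/component`, matching the backward-reference example in the text.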
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-xslt-starter</artifactId> </dependency>", "xslt:templateName[?options]", "xslt:resourceUri", "from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\");", "from(\"activemq:My.Queue\"). to(\"xslt:com/acme/mytransform.xsl\"). to(\"activemq:Another.Queue\");", "<setHeader name=\"myParam\"><constant>42</constant></setHeader> <to uri=\"xslt:MyTransform.xsl\"/>", "<xsl: ...... > <xsl:param name=\"myParam\"/> <xsl:template ...>", "<camelContext xmlns=\"http://activemq.apache.org/camel/schema/spring\"> <route> <from uri=\"activemq:My.Queue\"/> <to uri=\"xslt:org/apache/camel/spring/processor/example.xsl\"/> <to uri=\"activemq:Another.Queue\"/> </route> </camelContext>", "<xsl:include href=\"staff_template.xsl\"/>", "<xsl:include href=\"../staff_other_template.xsl\"/>", "<xsl:template match=\"/\"> <html> <body> <xsl:for-each select=\"staff/programmer\"> <p>Name: <xsl:value-of select=\"name\"/><br /> <xsl:if test=\"dob=''\"> <xsl:message terminate=\"yes\">Error: DOB is an empty string!</xsl:message> </xsl:if> </p> </xsl:for-each> </body> </html> </xsl:template>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-xslt-component-starter
Web console
Web console OpenShift Dedicated 4 Getting started with web console in Red Hat OpenShift Dedicated Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/web_console/index
Chapter 70. Expression syntax in test scenarios
Chapter 70. Expression syntax in test scenarios The test scenarios designer supports different expression languages for both rule-based and DMN-based test scenarios. Rule-based test scenarios support the MVFLEX Expression Language (MVEL), while DMN-based test scenarios support the Friendly Enough Expression Language (FEEL). 70.1. Expression syntax in rule-based test scenarios Rule-based test scenarios support the following built-in data types: String Boolean Integer Long Double Float Character Byte Short LocalDate Note For any other data types, use the MVEL expression with the prefix # . Follow the BigDecimal example in the test scenario designer to use the # prefix to set the java expression: Enter # java.math.BigDecimal.valueOf(10) for the GIVEN column value. Enter # actualValue.intValue() == 10 for the EXPECT column value. You can refer to the actual value of the EXPECT column in the java expression to execute a condition. The following rule-based test scenario definition expressions are supported by the test scenarios designer: Table 70.1. Description of expressions syntax Operator Description = Specifies equal to a value. This is the default for all columns and is the only operator supported by the GIVEN column. !, !=, <> Specifies inequality of a value. This operator can be combined with other operators. <, >, <=, >= Specifies a comparison: less than, greater than, less than or equal to, and greater than or equal to. # This operator is used to set the java expression value to a property header cell which can be executed as a java method. [value1, value2, value3] Specifies a list of values. If one or more values are valid, the scenario definition is evaluated as true. expression1; expression2; expression3 Specifies a list of expressions. If all expressions are valid, the scenario definition is evaluated as true. Note When evaluating a rule-based test scenario, an empty cell is skipped from the evaluation. 
To define an empty string, use = , [] , or ; and to define a null value, use null . Table 70.2. Example expressions Expression Description -1 The actual value is equal to -1. < 0 The actual value is less than 0. ! > 0 The actual value is not greater than 0. [-1, 0, 1] The actual value is equal to either -1 or 0 or 1. <> [1, -1] The actual value is neither equal to 1 nor -1. ! 100; 0 The actual value is not equal to 100 but is equal to 0. != < 0; <> > 1 The actual value is neither less than 0 nor greater than 1. <> <= 0; >= 1 The actual value is neither less than 0 nor equal to 0 but is greater than or equal to 1. You can refer to the supported commands and syntax in the Scenario Cheatsheet tab on the right of the rule-based test scenarios designer. 70.2. Expression syntax in DMN-based test scenarios The following data types are supported by the DMN-based test scenarios in the test scenarios designer: Table 70.3. Data types supported by DMN-based scenarios Supported data types Description numbers & strings Strings must be delimited by quotation marks, for example, "John Doe" , "Brno" or "" . boolean values true , false , and null . dates and time For example, date("2019-05-13") or time("14:10:00+02:00") . functions Supports built-in math functions, for example, avg , max . contexts For example, {x : 5, y : 3} . ranges and lists For example, [1 .. 10] or [2, 3, 4, 5] . Note When evaluating a DMN-based test scenario, an empty cell is skipped from the evaluation. To define an empty string in DMN-based test scenarios, use " " and to define a null value, use null . You can refer to the supported commands and syntax in the Scenario Cheatsheet tab on the right of the DMN-based test scenarios designer.
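As a rough illustration of how the operators in the tables above combine, the following sketch evaluates the documented example expressions against numeric values. It is a hypothetical toy evaluator written for this explanation, not the MVEL engine the designer actually uses:

```python
import operator

# ordered so that two-character operators are tried before one-character ones
COMPARISONS = [("<=", operator.le), (">=", operator.ge),
               ("<", operator.lt), (">", operator.gt), ("=", operator.eq)]

def matches(actual, expr):
    """Evaluate a rule-based scenario expression against an actual numeric value."""
    expr = expr.strip()
    if ";" in expr:
        # list of expressions: all sub-expressions must hold
        return all(matches(actual, e) for e in expr.split(";"))
    if expr.startswith("[") and expr.endswith("]"):
        # list of values: any value may match
        return any(matches(actual, v) for v in expr[1:-1].split(","))
    for neg in ("!=", "<>", "!"):
        # negation prefixes invert the rest of the expression
        if expr.startswith(neg):
            return not matches(actual, expr[len(neg):])
    for op, fn in COMPARISONS:
        if expr.startswith(op):
            return fn(actual, int(expr[len(op):]))
    # a bare value uses '=' (equality), the default operator
    return actual == int(expr)
```

For instance, `matches(0, "! 100; 0")` follows the table: the actual value is not equal to 100 but is equal to 0.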
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/test-designer-expressions-syntax-intro-ref
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_dns_as_a_service/making-open-source-more-inclusive
Chapter 2. AMQ Streams deployment of Kafka
Chapter 2. AMQ Streams deployment of Kafka Apache Kafka components are provided for deployment to OpenShift with the AMQ Streams distribution. The Kafka components are generally run as clusters for availability. A typical deployment incorporating Kafka components might include: Kafka cluster of broker nodes ZooKeeper cluster of replicated ZooKeeper instances Kafka Connect cluster for external data connections Kafka MirrorMaker cluster to mirror the Kafka cluster in a secondary cluster Kafka Exporter to extract additional Kafka metrics data for monitoring Kafka Bridge to make HTTP-based requests to the Kafka cluster Cruise Control to rebalance topic partitions across broker nodes Not all of these components are mandatory, though you need Kafka and ZooKeeper as a minimum. Some components can be deployed without Kafka, such as MirrorMaker or Kafka Connect. 2.1. Kafka component architecture A cluster of Kafka brokers handles delivery of messages. A broker uses Apache ZooKeeper for storing configuration data and for cluster coordination. Before running Apache Kafka, an Apache ZooKeeper cluster has to be ready. Each of the other Kafka components interact with the Kafka cluster to perform specific roles. Kafka component interaction Apache ZooKeeper Apache ZooKeeper is a core dependency for Kafka as it provides a cluster coordination service, storing and tracking the status of brokers and consumers. ZooKeeper is also used for controller election. Kafka Connect Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. A source connector pushes external data into Kafka. 
A sink connector extracts data out of Kafka. External data is translated and transformed into the appropriate format. You can deploy Kafka Connect with build configuration that automatically builds a container image with the connector plugins you require for your data connections. Kafka MirrorMaker Kafka MirrorMaker replicates data between two Kafka clusters, within or across data centers. MirrorMaker takes messages from a source Kafka cluster and writes them to a target Kafka cluster. Kafka Bridge Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. Kafka Exporter Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag, and topics. Consumer lag is the delay between the last message written to a partition and the message currently being picked up from that partition by a consumer.
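Consumer lag, as described above, can be computed per partition as the difference between the partition's log-end offset and the consumer group's committed offset. A minimal sketch follows; the offsets are made-up sample numbers, not values fetched through any real Kafka API:

```python
def consumer_lag(log_end_offset, committed_offset):
    # messages already written to the partition but not yet consumed
    return log_end_offset - committed_offset

# sample per-partition offsets: {partition: (log_end_offset, committed_offset)}
sample = {0: (1500, 1450), 1: (980, 980)}
lags = {p: consumer_lag(end, committed) for p, (end, committed) in sample.items()}
total_lag = sum(lags.values())  # the aggregate a monitoring tool would chart
```

A tool such as Kafka Exporter surfaces this same quantity as Prometheus metrics so that growing lag on a partition can trigger an alert.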
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_on_openshift_overview/kafka-components_str
Chapter 8. Known Issues
Chapter 8. Known Issues This chapter documents known problems in Red Hat Enterprise Linux 7.7. 8.1. Authentication and Interoperability Inconsistent warning message when applying an ID range change In RHEL Identity Management (IdM), you can define multiple identity ranges (ID ranges) associated with a local IdM domain or a trusted Active Directory domain. The information about ID ranges is retrieved by the SSSD daemon on all enrolled systems. A change to ID range properties requires a restart of SSSD. Previously, there was no warning about the need to restart SSSD. RHEL 7.7 adds a warning that is displayed when ID range properties are modified in a way that requires a restart of SSSD. The warning message currently uses inconsistent wording. The purpose of the warning message is to ask for a restart of SSSD on any IdM system that consumes the ID range. To learn more about ID ranges, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-unique_uid_and_gid_attributes ( BZ#1631826 ) Potential risk when using the default value for ldap_id_use_start_tls option Using ldap:// without TLS for identity lookups can pose a risk for an attack vector, particularly a man-in-the-middle (MITM) attack, which could allow an attacker to impersonate a user by altering, for example, the UID or GID of an object returned in an LDAP search. Currently, the SSSD configuration option to enforce TLS, ldap_id_use_start_tls , defaults to false . Ensure that your setup operates in a trusted environment and decide if it is safe to use unencrypted communication for id_provider = ldap . Note id_provider = ad and id_provider = ipa are not affected as they use encrypted connections protected by SASL and GSSAPI. If it is not safe to use unencrypted communication, enforce TLS by setting the ldap_id_use_start_tls option to true in the /etc/sssd/sssd.conf file. 
The default behavior is planned to be changed in a future release of RHEL. (JIRA:RHELPLAN-155168) 8.2. Compiler and Tools GCC thread sanitizer included in RHEL no longer works Due to incompatible changes in kernel memory mapping, the thread sanitizer included with the GNU C Compiler (GCC) compiler version in RHEL no longer works. Additionally, the thread sanitizer cannot be adapted to the incompatible memory layout. As a result, it is no longer possible to use the GCC thread sanitizer included with RHEL. As a workaround, use the version of GCC included in Red Hat Developer Toolset to build code which uses the thread sanitizer. (BZ#1569484) Context variables in SystemTap not always accessible The generation of debug information in the GCC compiler has some limitations. As a consequence, when analyzing the resulting executable files with the SystemTap tool, context variables listed in the form $foo are often inaccessible. To work around this limitation, add the -P option to the $HOME/.systemtap/rc file. This causes SystemTap to always select prologue-searching heuristics. As a result, some of the context variables can become accessible. (BZ#1714480) ksh with the KEYBD trap mishandles multibyte characters The Korn Shell (KSH) is unable to correctly handle multibyte characters when the KEYBD trap is enabled. Consequently, when the user enters, for example, Japanese characters, ksh displays an incorrect string. To work around this problem, disable the KEYBD trap in the /etc/kshrc file by commenting out the following line: For more details, see a related Knowledgebase solution . ( BZ#1503922 ) Error while upgrading PCP from the RHEL 7.6 version When you upgrade the pcp packages from the RHEL 7.6 to the RHEL 7.7 version, yum returns the following error message: It is safe to ignore this harmless message, which is caused by a bug in the RHEL 7.6 build of pcp and not by the updated package. The PCP functionality in RHEL 7.7 is not affected. ( BZ#1781692 ) 8.3. 
Desktop Gnome Documents cannot display some documents when installed without LibreOffice Gnome Documents uses libraries provided by the LibreOffice suite for rendering certain types of documents, such as OpenDocument Text or Open Office XML formats. However, the required libreoffice-filters libraries are missing from the dependency list of the gnome-documents package. Therefore, if you install Gnome Documents on a system that does not have LibreOffice , these document types cannot be rendered. To work around this problem, install the libreoffice-filters package manually, even if you do not plan to use LibreOffice itself. ( BZ#1695699 ) GNOME Software cannot install packages from unsigned repositories GNOME Software cannot install packages from repositories that have the following setting in the *.repo file: If you attempt to install a package from such repository, GNOME software fails with a generic error. Currently, there is no workaround available. ( BZ#1591270 ) Nautilus does not hide icons in the GNOME Classic Session The GNOME Tweak Tool setting to show or hide icons in the GNOME session, where the icons are hidden by default, is ignored in the GNOME Classic Session. As a result, it is not possible to hide icons in the GNOME Classic Session even though the GNOME Tweak Tool displays this option. ( BZ#1474852 ) 8.4. Installation and Booting RHEL 7.7 and later installations add spectre_v2=retpoline to Intel Cascade Lake systems RHEL 7.7 and later installations add the spectre_v2=retpoline kernel parameter to Intel Cascade Lake systems, and as a consequence, system performance is affected. To work around this problem and ensure the best performance, complete the following steps. Remove the kernel boot parameter on Intel Cascade Lake systems: Reboot the system: (BZ#1767612) 8.5. 
Kernel RHEL 7 virtual machines sometimes fail to boot on ESXi 5.5 When running Red Hat Enterprise Linux 7 guests with 12 GB RAM or above on a VMware ESXi 5.5 hypervisor, certain components currently initialize with incorrect memory type range register (MTRR) values or incorrectly reconfigure MTRR values across boots. This sometimes causes the guest kernel to panic or the guest to become unresponsive during boot. To work around this problem, add the disable_mtrr_trim option to the guest's kernel command line, which enables the guest to continue booting when MTRRs are configured incorrectly. Note that with this option, the guest prints WARNING: BIOS bug messages during boot, which you can safely ignore. (BZ#1429792) Certain NIC firmware can become unresponsive with bnx2x Due to a bug in the unload sequence of the pre-boot drivers, the firmware of some internet adapters can become unresponsive after the bnx2x driver takes over the device. The bnx2x driver detects the problem and returns the message "storm stats were not updated for 3 times" in the kernel log. To work around this problem, apply the latest NIC firmware updates provided by your hardware vendor. As a result, unloading of the pre-boot firmware now works as expected and the firmware no longer hangs after bnx2x takes over the device. (BZ#1315400) The i40iw module does not load automatically on boot Some i40e NICs do not support iWarp and the i40iw module does not fully support suspend and resume operations. Consequently, the i40iw module is not automatically loaded by default to ensure suspend and resume operations work properly. To work around this problem, edit the /lib/udev/rules.d/90-rdma-hw-modules.rules file to enable automated load of i40iw . Also note that if there is another RDMA device installed with an i40e device on the same machine, the non-i40e RDMA device triggers the rdma service, which loads all enabled RDMA stack modules, including the i40iw module. 
(BZ#1622413) The non-interleaved persistent memory configurations cannot use storage Previously, systems with persistent memory aligned to 64 MB boundaries prevented the creation of namespaces. As a consequence, the non-interleaved persistent memory configurations in some cases were not able to use storage. To work around this problem, use the interleaved mode for the persistent memory. As a result, most of the storage is available for use, however, with limited fault isolation. (BZ#1691868) System boot might fail due to persistent memory file systems Systems with a large amount of persistent memory take a long time to boot. If the /etc/fstab file configures persistent memory file systems, the system might time out waiting for the devices to become available. The boot process then fails and presents the user with an emergency prompt. To work around the problem, increase the DefaultTimeoutStartSec value in the /etc/systemd/system.conf file. Use a sufficiently large value, such as 1200s . As a result, the system boot no longer times out. (BZ#1666535) radeon fails to reset hardware correctly The radeon kernel driver currently does not reset hardware in the kexec context correctly. Instead, radeon terminates unexpectedly, which causes the rest of the kdump service to fail. To work around this bug, blacklist radeon in kdump by adding the following line to the /etc/kdump.conf file: Afterwards, restart the machine and kdump . Note that in this scenario, no graphics will be available during kdump , but kdump will complete successfully. (BZ#1509444) Certain eBPF tools can cause the system to become unresponsive on IBM Z Due to a bug in the JIT compiler, running certain eBPF tools contained in the bcc-tools package on IBM Z might cause the system to become unresponsive. To work around this problem, avoid using the dcsnoop , runqlen , and slabratetop tools from bcc-tools on IBM Z until a fix is released. 
(BZ#1724027) Concurrent SG_IO requests in /dev/sg might cause data corruption The /dev/sg device driver is missing synchronization of kernel data. Concurrent requests in the driver access the same data at the same time. As a consequence, the ioctl system call might sometimes erroneously use the payload of an SG_IO request for a different command that was sent at the same time as the correct one. This might lead to disk corruption in certain cases. Red Hat has observed this bug in Red Hat Virtualization (RHV). To work around the problem, use either of the following solutions: Do not send concurrent requests to the /dev/sg driver. As a result, each SG_IO request sent to /dev/sg is guaranteed to use the correct data. Alternatively, use the /dev/sd or the /dev/bsg drivers instead of /dev/sg . The bug is not present in these drivers. (BZ#1710533) Incorrect order for inner and outer VLAN tags The system receives the inner and outer VLAN tags in a swapped order when using QinQ (IEEE802.1Q in IEEE802.1Q standard) over representor devices with the mlx5 driver. This happens because the rxvlan offloading switch is not effective on this path, which causes Open vSwitch (OVS) to push this error forward. There is no known workaround. (BZ#1701502) kdump fails to generate vmcore on Azure instances in RHEL 7 An underlying problem with the serial console implementation on Azure instances booted through the UEFI bootloader prevents the kdump kernel from booting. Consequently, the vmcore of the crashed kernel cannot be captured in the /var/crash/ directory. To work around this problem: Add the console=ttyS0 and earlyprintk=ttyS0 parameters to the KDUMP_COMMANDLINE_REMOVE command line in the /etc/sysconfig/kdump file. Restart the kdump service. As a result, the kdump kernel should correctly boot and vmcore is expected to be captured upon crash. Make sure there is enough space in /var/crash/ to save the vmcore, which can be up to the size of system memory. 
(BZ#1724993) The kdumpctl service fails to load crash kernel if KASLR is enabled An inappropriate setting of the kptr_restrict kernel tunable causes the contents of the /proc/kcore file to be generated as all zeros. As a consequence, the kdumpctl service is not able to access /proc/kcore and to load the crash kernel if Kernel Address Space Layout Randomization (KASLR) is enabled. To work around this problem, keep kptr_restrict set to 1 . As a result, kdumpctl is able to load the crash kernel in the described scenario. For details, refer to the /usr/share/doc/kexec-tools/kexec-kdump-howto.txt file. (BZ#1600148) Kdump fails in the second kernel The kdump initramfs archive is a critical component for capturing the crash dump. However, it is strictly generated for the machine it runs on and has no generality. If you did a disk migration or installed a new machine with a disk image, kdump might fail in the second kernel. To work around this problem, if you did a disk migration, rebuild initramfs manually by running the following commands: # touch /etc/kdump.conf # kdumpctl restart If you are creating a disk image for installing new machines, it is strongly recommended not to include the kdump initramfs in the disk image. This helps to save space, and kdump will build the initramfs automatically if it is missing. (BZ#1723492) 8.6. Networking Verification of signatures using the MD5 hash algorithm is disabled in Red Hat Enterprise Linux 7 It is impossible to connect to any Wi-Fi Protected Access (WPA) Enterprise Access Point (AP) that requires MD5 signed certificates. To work around this problem, copy the wpa_supplicant.service file from the /usr/lib/systemd/system/ directory to the /etc/systemd/system/ directory and add the following line to the Service section of the file: Then run the systemctl daemon-reload command as root to reload the service file. Important Note that MD5 certificates are highly insecure and Red Hat does not recommend using them. 
(BZ#1062656) Booting from a network device fails when the network driver is restarted Currently, if the boot device is mounted over the network when using iSCSI or Fibre Channel over Ethernet (FCoE), Red Hat Enterprise Linux (RHEL) fails to boot when the underlying network interface driver is restarted. For example, RHEL restarts the bnx2x network driver when the libvirt service starts its first virtual network and enables IP forwarding. To work around the problem in this specific example, enable IPv4 forwarding earlier in the boot sequence: Note that this workaround works only in the mentioned scenario. (BZ#1574536) freeradius might fail when upgrading from RHEL 7.3 A new configuration property, correct_escapes , in the /etc/raddb/radiusd.conf file was introduced in the freeradius version distributed since RHEL 7.4. When an administrator sets correct_escapes to true , the new regular expression syntax for backslash escaping is expected. If correct_escapes is set to false , the old syntax is expected where backslashes are also escaped. For backward compatibility reasons, false is the default value. When upgrading, configuration files in the /etc/raddb/ directory are overwritten unless modified by the administrator, so the value of correct_escapes might not always correspond to which type of syntax is used in all the configuration files. As a consequence, authentication with freeradius might fail. To prevent the problem from occurring, after upgrading from freeradius version 3.0.4 (distributed with RHEL 7.3) and earlier, make sure all configuration files in the /etc/raddb/ directory use the new escaping syntax (no double backslash characters can be found) and that the value of correct_escapes in /etc/raddb/radiusd.conf is set to true . For more information and examples, see the solution Authentication with Freeradius fails since upgrade to version >= 3.0.5 . 
(BZ#1489758) RHEL 7 shows the status of an 802.3ad bond as "Churned" after a switch was unavailable for an extended period of time Currently, when you configure an 802.3ad network bond and the switch is down for an extended period of time, Red Hat Enterprise Linux properly shows the status of the bond as "Churned", even after the connection returns to a working state. However, this is the intended behavior, as the "Churned" status aims to tell the administrator that a significant link outage occurred. To clear this status, restart the network bond or reboot the host. (BZ#1708807) Using client-identifier leads to IP address conflict If the client-identifier option is used, certain network switches ignore the ciaddr field of a dynamic host configuration protocol (DHCP) request. Consequently, the same IP address is assigned to multiple clients, which leads to an IP address conflict. To work around the problem, include the following line in the dhclient.conf file: As a result, the IP address conflict does not occur under the described circumstances. ( BZ#1193799 ) 8.7. Security Libreswan does not work properly with seccomp=enabled on all configurations The set of allowed syscalls in the Libreswan SECCOMP support implementation is currently not complete. Consequently, when SECCOMP is enabled in the ipsec.conf file, the syscall filtering rejects even syscalls needed for the proper functioning of the pluto daemon; the daemon is killed, and the ipsec service is restarted. To work around this problem, set the seccomp= option back to the disabled state. SECCOMP support must remain disabled to run ipsec properly. ( BZ#1544463 ) PKCS#11 devices not supporting RSA-PSS cannot be used with TLS 1.3 The TLS protocol version 1.3 requires RSA-PSS signatures, which are not supported by all PKCS#11 devices, such as hardware security modules (HSM) or smart cards. Currently, server applications using NSS do not check the PKCS#11 module capabilities before negotiating TLS 1.3. 
As a consequence, attempts to authenticate using PKCS#11 devices that do not support RSA-PSS fail. To work around this problem, use TLS 1.2 instead. ( BZ#1711438 ) TLS 1.3 does not work in NSS in FIPS mode TLS 1.3 is not supported on systems working in FIPS mode. As a consequence, connections that require TLS 1.3 for interoperability do not function on a system working in FIPS mode. To enable the connections, disable the system's FIPS mode or enable support for TLS 1.2 in the peer. (BZ#1710372) OpenSCAP inadvertently accesses remote file systems The OpenSCAP scanner cannot correctly detect whether the scanned file system is a mounted remote file system or a local file system, and the detection part contains also other bugs. Consequently, the scanner reads mounted remote file systems even if an evaluated rule applies to a local file-system only, and it might generate unwanted traffic on remote file systems. To work around this problem, unmount remote file systems before scanning. Another option is to exclude affected rules from the evaluated profile by providing a tailoring file. ( BZ#1694962 ) 8.8. Servers and Services Manual initialization of MariaDB using mysql_install_db fails The mysql_install_db script for initializing the MariaDB database calls the resolveip binary from the /usr/libexec/ directory, while the binary is located in /usr/bin/ . Consequently, manual initialization of the database using mysql_install_db fails. To work around this problem, create a symbolic link to the actual location of the resolveip binary: When the symlink is created, mysql_install_db successfully locates resolveip , and the manual database initialization is successful. Alternatively, use mysql_install_db with the --rpm option. In this case, mysql_install_db does not call the resolveip binary, and therefore does not fail. 
(BZ#1731062) mysql-connector-java does not work with MySQL 8.0 The mysql-connector-java database connector provided in RHEL 7 does not work with the MySQL 8.0 database server. To work around this problem, use the rh-mariadb103-mariadb-java-client database connector from Red Hat Software Collections. ( BZ#1646363 ) Harmless error messages occur when the balanced Tuned profile is used The balanced Tuned profile has been changed in the way that the cpufreq_conservative kernel module loads when this profile is applied. However, cpufreq_conservative is built-in in the kernel, and it is not available as a module. Consequently, when the balanced profile is used, the following error messages occasionally appear in /var/log/tuned/tuned.log file: Such error messages are harmless, so you can safely ignore them. However, to eliminate the errors, you can override the balanced profile, so that Tuned does not attempt to load the kernel module. For example, create the /etc/tuned/balanced/tuned.conf file with the following contents: ( BZ#1719160 ) The php-mysqlnd database connector does not work with MySQL 8.0 The default character set has been changed to utf8mb4 in MySQL 8.0 but this character set is unsupported by the php-mysqlnd database connector. Consequently, php-mysqlnd fails to connect in the default configuration. To work around this problem, specify a known character set as a parameter of the MySQL server configuration. For example, modify the /etc/opt/rh/rh-mysql80/my.cnf.d/mysql-server.cnf file to read: ( BZ#1646158 ) 8.9. Storage The system halts unexpectedly when using scsi-mq with software FCoE The host system halts unexpectedly when it is configured to use both multiqueue scheduling ( scsi-mq ) and software Fibre Channel over Ethernet (FCoE) at the same time. To work around the problem, disable scsi-mq when using software FCoE. As a result, the system no longer crashes. 
(BZ#1712664) The system boot sometimes fails on large systems During the boot process, the udev device manager sometimes generates too many rules on large systems. For example, the problem has manifested on a system with 32 TB of memory and 192 CPUs. As a consequence, the boot process becomes unresponsive or times out and switches to the emergency shell. To work around the problem, add the udev.children-max=1000 option to the kernel command line. You can experiment with different values of udev.children-max to see which value results in the fastest boot on your system. As a result, the system boots successfully. (BZ#1722855) When an image is split off from an active/active cluster mirror, the resulting new logical volume has no active component When you split off an image from an active/active cluster mirror, the resulting new logical volume appears active but it has no active component. To activate the newly split-off logical volume, deactivate the volume and then activate it with the following commands: ( BZ#1642162 ) 8.10. Virtualization Virtual machines sometimes enable unnecessary CPU vulnerability mitigation Currently, the MDS_NO CPU flags, which indicate that the CPU is not vulnerable to the Microarchitectural Data Sampling (MDS) vulnerability, are not exposed to guest operating systems. As a consequence, the guest operating system in some cases automatically enables CPU vulnerability mitigation features that are not necessary for the current host. If the host CPU is known not to be vulnerable to MDS and the virtual machine is not going to be migrated to hosts vulnerable to MDS, MDS vulnerability mitigation can be disabled in Linux guests by using the "mds=off" kernel command-line option. Note, however, that this option disables all MDS mitigations on the guest. Therefore, it should be used with care and should never be used if the host CPU is vulnerable to MDS. 
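As a sketch of the udev workaround, the option is appended to the existing kernel command line; the base command line below is a placeholder, not one read from a live system.

```shell
# Illustrative only: where the workaround option goes on the kernel
# command line (the base cmdline here is a placeholder).
cmdline="ro root=/dev/mapper/rhel-root rhgb quiet"
cmdline="$cmdline udev.children-max=1000"
echo "$cmdline"
```

On a real system the change is typically persisted through the boot loader configuration (for example with grubby's `--update-kernel=DEFAULT --args="udev.children-max=1000"`), which is not runnable here.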
(BZ#1708465) Modifying a RHEL 8 virtual image on a RHEL 7 host sometimes fails On RHEL 7 hosts, using virtual image manipulation utilities such as guestfish , virt-sysprep , or virt-customize in some cases fails if the utility targets a virtual image that is using a RHEL 8 file system. This is because RHEL 7 is not fully compatible with certain file-system features in RHEL 8. To work around the problem, you can disable the problematic features when creating the guest file systems using the mkfs utility: For XFS file systems, use the "-m reflink=0" option. For ext4 file systems, use the "-O ^metadata_csum" option. Alternatively, use a RHEL 8 host instead of a RHEL 7 one, where the affected utilities will work as expected. (BZ#1667478) Slow connection to RHEL 7 guest console on a Windows Server 2019 host When using RHEL 7 as a guest operating system in multi-user mode on a Windows Server 2019 host, connecting to a console output of the guest currently takes significantly longer than expected. To work around this problem, connect to the guest using SSH or use Windows Server 2016 as the host. (BZ#1706522) SMT works only on AMD EPYC CPU models Currently, only the AMD EPYC CPU models support the simultaneous multithreading (SMT) feature. As a consequence, manually enabling the topoext feature when configuring a virtual machine (VM) with a different CPU model causes the VM not to detect the vCPU topology correctly, and the vCPU does not perform as configured. To work around this problem, do not enable topoext manually and do not use the threads vCPU option on AMD hosts unless the host is using the AMD EPYC model ( BZ#1615682 )
[ "trap keybd_trap KEYBD", "Failed to resolve allow statement at /etc/selinux/targeted/tmp/modules/400/pcpupstream/cil:83 semodule: Failed!", "gpgcheck=0", "grubby --remove-args=\"spectre_v2=retpoline\" --update-kernel=DEFAULT", "reboot", "dracut_args --omit-drivers \"radeon\"", "Environment=OPENSSL_ENABLE_MD5_VERIFY=1", "echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/90-forwarding.conf dracut -f", "send dhcp-client-identifier = \"\";", "ln -s /usr/bin/resolveip /usr/libexec/resolveip", "tuned.utils.commands: Executing modinfo error: modinfo: ERROR: Module cpufreq_conservative not found. tuned.plugins.plugin_modules: kernel module 'cpufreq_conservative' not found, skipping it tuned.plugins.plugin_modules: verify: failed: 'module 'cpufreq_conservative' is not loaded'", "[main] include=balanced [modules] enabled=0", "[mysqld] character-set-server=utf8", "lvchange -an _vg_/_newly_split_lv_ lvchange -ay _vg_/_newly_split_lv_" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.7_release_notes/known_issues
Chapter 15. Checking and repairing a file system
Chapter 15. Checking and repairing a file system RHEL provides file system administration utilities which are capable of checking and repairing file systems. These tools are often referred to as fsck tools, where fsck is a shortened version of file system check . In most cases, these utilities are run automatically during system boot, if needed, but can also be manually invoked if required. Important File system checkers guarantee only metadata consistency across the file system. They have no awareness of the actual data contained within the file system and are not data recovery tools. 15.1. Scenarios that require a file system check The relevant fsck tools can be used to check your system if any of the following occurs: System fails to boot Files on a specific disk become corrupt The file system shuts down or changes to read-only due to inconsistencies A file on the file system is inaccessible File system inconsistencies can occur for various reasons, including but not limited to hardware errors, storage administration errors, and software bugs. Important File system check tools cannot repair hardware problems. A file system must be fully readable and writable if repair is to operate successfully. If a file system was corrupted due to a hardware error, the file system must first be moved to a good disk, for example with the dd(8) utility. For journaling file systems, all that is normally required at boot time is to replay the journal if required and this is usually a very short operation. However, if a file system inconsistency or corruption occurs, even for journaling file systems, then the file system checker must be used to repair the file system. Important It is possible to disable file system check at boot by setting the sixth field in /etc/fstab to 0 . However, Red Hat does not recommend doing so unless you are having issues with fsck at boot time, for example with extremely large or remote file systems. 
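As an illustration of the /etc/fstab note above, the sixth field (fs_passno) controls boot-time checking; the UUID, mount point, and file system type below are placeholders, not values from a real system.

```shell
# An illustrative fstab entry: the last two fields are fs_freq and
# fs_passno, and a fs_passno of 0 disables the boot-time fsck for
# this entry (placeholder UUID and paths).
entry="UUID=0cb3e2ad-0000-4bcf-bc0b-example  /data  ext4  defaults  0 0"
echo "$entry"
```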
Additional resources fstab(5) , fsck(8) , and dd(8) man pages on your system 15.2. Potential side effects of running fsck Generally, running the file system check and repair tool can be expected to automatically repair at least some of the inconsistencies it finds. In some cases, the following issues can arise: Severely damaged inodes or directories may be discarded if they cannot be repaired. Significant changes to the file system may occur. To ensure that unexpected or undesirable changes are not permanently made, ensure you follow any precautionary steps outlined in the procedure. 15.3. Error-handling mechanisms in XFS This section describes how XFS handles various kinds of errors in the file system. Unclean unmounts Journalling maintains a transactional record of metadata changes that happen on the file system. In the event of a system crash, power failure, or other unclean unmount, XFS uses the journal (also called log) to recover the file system. The kernel performs journal recovery when mounting the XFS file system. Corruption In this context, corruption means errors on the file system caused by, for example: Hardware faults Bugs in storage firmware, device drivers, the software stack, or the file system itself Problems that cause parts of the file system to be overwritten by something outside of the file system When XFS detects corruption in the file system or the file-system metadata, it may shut down the file system and report the incident in the system log. Note that if the corruption occurred on the file system hosting the /var directory, these logs will not be available after a reboot. Example 15.1. System log entry reporting an XFS corruption User-space utilities usually report the Input/output error message when trying to access a corrupted XFS file system. Mounting an XFS file system with a corrupted log results in a failed mount and the following error message: You must manually use the xfs_repair utility to repair the corruption. 
Additional resources xfs_repair(8) man page on your system 15.4. Checking an XFS file system with xfs_repair Perform a read-only check of an XFS file system by using the xfs_repair utility. Unlike other file system repair utilities, xfs_repair does not run at boot time, even when an XFS file system was not cleanly unmounted. In case of an unclean unmount, XFS simply replays the log at mount time, ensuring a consistent file system; xfs_repair cannot repair an XFS file system with a dirty log without remounting it first. Note Although an fsck.xfs binary is present in the xfsprogs package, this is present only to satisfy initscripts that look for an fsck.file system binary at boot time. fsck.xfs immediately exits with an exit code of 0. Procedure Replay the log by mounting and unmounting the file system: Note If the mount fails with a structure needs cleaning error, the log is corrupted and cannot be replayed. The dry run should discover and report more on-disk corruption as a result. Use the xfs_repair utility to perform a dry run to check the file system. Any errors are printed and an indication of the actions that would be taken, without modifying the file system. Mount the file system: Additional resources xfs_repair(8) and xfs_metadump(8) man pages on your system 15.5. Repairing an XFS file system with xfs_repair This procedure repairs a corrupted XFS file system using the xfs_repair utility. Procedure Create a metadata image prior to repair for diagnostic or testing purposes using the xfs_metadump utility. A pre-repair file system metadata image can be useful for support investigations if the corruption is due to a software bug. Patterns of corruption present in the pre-repair image can aid in root-cause analysis. Use the xfs_metadump debugging tool to copy the metadata from an XFS file system to a file. The resulting metadump file can be compressed using standard compression utilities to reduce the file size if large metadump files need to be sent to support. 
Replay the log by remounting the file system: Use the xfs_repair utility to repair the unmounted file system: If the mount succeeded, no additional options are required: If the mount failed with the Structure needs cleaning error, the log is corrupted and cannot be replayed. Use the -L option ( force log zeroing ) to clear the log: Warning This command causes all metadata updates in progress at the time of the crash to be lost, which might cause significant file system damage and data loss. This should be used only as a last resort if the log cannot be replayed. Mount the file system: Additional resources xfs_repair(8) man page on your system 15.6. Error handling mechanisms in ext2, ext3, and ext4 The ext2, ext3, and ext4 file systems use the e2fsck utility to perform file system checks and repairs. The file names fsck.ext2 , fsck.ext3 , and fsck.ext4 are hardlinks to the e2fsck utility. These binaries are run automatically at boot time and their behavior differs based on the file system being checked and the state of the file system. A full file system check and repair is invoked for ext2, which is not a metadata journaling file system, and for ext4 file systems without a journal. For ext3 and ext4 file systems with metadata journaling, the journal is replayed in userspace and the utility exits. This is the default action because journal replay ensures a consistent file system after a crash. If these file systems encounter metadata inconsistencies while mounted, they record this fact in the file system superblock. If e2fsck finds that a file system is marked with such an error, e2fsck performs a full check after replaying the journal (if present). Additional resources fsck(8) and e2fsck(8) man pages on your system 15.7. Checking an ext2, ext3, or ext4 file system with e2fsck This procedure checks an ext2, ext3, or ext4 file system using the e2fsck utility. Procedure Replay the log by remounting the file system: Perform a dry run to check the file system. 
Note Any errors are printed and an indication of the actions that would be taken, without modifying the file system. Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode. Additional resources e2image(8) and e2fsck(8) man pages on your system 15.8. Repairing an ext2, ext3, or ext4 file system with e2fsck This procedure repairs a corrupted ext2, ext3, or ext4 file system using the e2fsck utility. Procedure Save a file system image for support investigations. A pre-repair file system metadata image can be useful for support investigations if the corruption is due to a software bug. Patterns of corruption present in the pre-repair image can aid in root-cause analysis. Note Severely damaged file systems may cause problems with metadata image creation. If you are creating the image for testing purposes, use the -r option to create a sparse file of the same size as the file system itself. e2fsck can then operate directly on the resulting file. If you are creating the image to be archived or provided for diagnostic, use the -Q option, which creates a more compact file format suitable for transfer. Replay the log by remounting the file system: Automatically repair the file system. If user intervention is required, e2fsck indicates the unfixed problem in its output and reflects this status in the exit code. Additional resources e2image(8) man page on your system e2fsck(8) man page on your system
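Because e2fsck reflects its result in the exit code, scripts often decode that status bitwise. The helper below is a sketch based on the exit codes documented in the e2fsck(8) man page; it is not part of e2fsprogs itself.

```shell
# Decode the bit-mask exit status documented in e2fsck(8):
# 0 = no errors, 1 = errors corrected, 2 = reboot needed,
# 4 = errors left uncorrected, 8 = operational error.
decode_e2fsck_status() {
    rc=$1
    [ "$rc" -eq 0 ] && { echo "no errors"; return 0; }
    [ $(( rc & 1 )) -ne 0 ] && echo "file system errors corrected"
    [ $(( rc & 2 )) -ne 0 ] && echo "system should be rebooted"
    [ $(( rc & 4 )) -ne 0 ] && echo "errors left uncorrected"
    [ $(( rc & 8 )) -ne 0 ] && echo "operational error"
    return 0
}

decode_e2fsck_status 0
decode_e2fsck_status 5   # bits 1 and 4 set: corrected, but errors remain
```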
[ "dmesg --notime | tail -15 XFS (loop0): Mounting V5 Filesystem XFS (loop0): Metadata CRC error detected at xfs_agi_read_verify+0xcb/0xf0 [xfs], xfs_agi block 0x2 XFS (loop0): Unmount and run xfs_repair XFS (loop0): First 128 bytes of corrupted metadata buffer: 00000000027b3b56: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 000000005f9abc7a: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 000000005b0aef35: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 00000000da9d2ded: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 000000001e265b07: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 000000006a40df69: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 000000000b272907: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. 00000000e484aac5: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ............. XFS (loop0): metadata I/O error in \"xfs_trans_read_buf_map\" at daddr 0x2 len 1 error 74 XFS (loop0): xfs_imap_lookup: xfs_ialloc_read_agi() returned error -117, agno 0 XFS (loop0): Failed to read root inode 0x80, error 11", "mount: /mount-point : mount(2) system call failed: Structure needs cleaning.", "mount file-system umount file-system", "xfs_repair -n block-device", "mount file-system", "xfs_metadump block-device metadump-file", "mount file-system umount file-system", "xfs_repair block-device", "xfs_repair -L block-device", "mount file-system", "mount file-system umount file-system", "e2fsck -n block-device", "e2image -r block-device image-file", "e2image -Q block-device image-file", "mount file-system umount file-system", "e2fsck -p block-device" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_file_systems/checking-and-repairing-a-file-system__managing-file-systems
13.13. Network & Hostname
13.13. Network & Hostname To configure essential networking features for your system, select Network & Hostname at the Installation Summary screen. Important When the installation finishes and the system boots for the first time, any network interfaces which you configured during the installation will be activated. However, the installation does not prompt you to configure network interfaces on some common installation paths - for example, when you install Red Hat Enterprise Linux from a DVD to a local hard drive. When you install Red Hat Enterprise Linux from a local installation source to a local storage device, be sure to configure at least one network interface manually if you require network access when the system boots for the first time. You will also need to set the connection to connect automatically after boot when editing the configuration. Locally accessible interfaces are automatically detected by the installation program and cannot be manually added or deleted. The detected interfaces are listed in the left pane. Click an interface in the list to display more details about it on the right. To activate or deactivate a network interface, move the switch in the top right corner of the screen to either ON or OFF . Note There are several types of network device naming standards used to identify network devices with persistent names such as em1 or wl3sp0 . For information about these standards, see the Red Hat Enterprise Linux 7 Networking Guide . Figure 13.10. Network & Hostname Configuration Screen Below the list of connections, enter a host name for this computer in the Hostname input field. The host name can be either a fully-qualified domain name (FQDN) in the format hostname . domainname or a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. 
To allow the DHCP service to assign the domain name to this machine, only specify the short host name. The value localhost.localdomain means that no specific static host name for the target system is configured, and the actual host name of the installed system will be configured during the process of network configuration (for example, by NetworkManager using DHCP or DNS). Important If you want to manually assign the host name, make sure you do not use a domain name that is not delegated to you, as this can result in network resources becoming unavailable. For more information, see the recommended naming practices in the Red Hat Enterprise Linux 7 Networking Guide . Note You can use the Network section of the system Settings dialog to change your network configuration after you have completed the installation. Once you have finished network configuration, click Done to return to the Installation Summary screen. 13.13.1. Edit Network Connections This section only details the most important settings for a typical wired connection used during installation. Many of the available options do not have to be changed in most installation scenarios and are not carried over to the installed system. Configuration of other types of network is broadly similar, although the specific configuration parameters are necessarily different. To learn more about network configuration after installation, see the Red Hat Enterprise Linux 7 Networking Guide . To configure a network connection manually, click the Configure button in the lower right corner of the screen. A dialog appears that allows you to configure the selected connection. The configuration options presented depend on whether the connection is wired, wireless, mobile broadband, VPN, or DSL. If required, see the Networking Guide for more detailed information on network settings. 
The most useful network configuration options to consider during installation are: Mark the Automatically connect to this network when it is available check box if you want to use the connection every time the system boots. You can use more than one connection that will connect automatically. This setting will carry over to the installed system. Figure 13.11. Network Auto-Connection Feature By default, IPv4 parameters are configured automatically by the DHCP service on the network. At the same time, the IPv6 configuration is set to the Automatic method. This combination is suitable for most installation scenarios and usually does not require any changes. Figure 13.12. IP Protocol Settings When you have finished editing network settings, click Save to save the new configuration. If you reconfigured a device that was already active during installation, you must restart the device in order to use the new configuration in the installation environment. Use the ON/OFF switch on the Network & Host Name screen to restart the device. 13.13.2. Advanced Network Interfaces Advanced network interfaces are also available for installation. This includes virtual local area networks ( VLAN s) and three methods to use aggregated links. A detailed description of these interfaces is beyond the scope of this document; read the Red Hat Enterprise Linux 7 Networking Guide for more information. To create an advanced network interface, click the + button in the lower left corner of the Network & Hostname screen. Figure 13.13. Network & Hostname Configuration Screen A dialog appears with a drop-down menu with the following options: Bond - represents NIC ( Network Interface Controller ) Bonding, a method to bind multiple network interfaces together into a single, bonded, channel. Bridge - represents NIC Bridging, a method to connect multiple separate networks into one aggregate network. 
Team - represents NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. VLAN - represents a method to create multiple distinct broadcast domains, which are mutually isolated. Figure 13.14. Advanced Network Interface Dialog Note Note that locally accessible interfaces, wired or wireless, are automatically detected by the installation program and cannot be manually added or deleted by using these controls. Once you have selected an option and clicked the Add button, another dialog appears for you to configure the new interface. See the respective chapters in the Red Hat Enterprise Linux 7 Networking Guide for detailed instructions. To edit configuration on an existing advanced interface, click the Configure button in the lower right corner of the screen. You can also remove a manually-added interface by clicking the - button.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-network-hostname-configuration-ppc
5.4. Capacity Tuning
5.4. Capacity Tuning Read this section for an outline of memory, kernel and file system capacity, the parameters related to each, and the trade-offs involved in adjusting these parameters. To set these values temporarily during tuning, echo the desired value to the appropriate file in the proc file system. For example, to set overcommit_memory temporarily to 1 , run: Note that the path to the parameter in the proc file system varies depending on the system affected by the change. To set these values persistently, use the sysctl command. For information on how to use sysctl , see E.4. Using the sysctl Command in the Red Hat Enterprise Linux 6 Deployment Guide . Starting with Red Hat Enterprise Linux 6.6, the /proc/meminfo file provides the MemAvailable field. To determine how much memory is available, run: Capacity-related Memory Tunables Each of the following parameters is located under /proc/sys/vm/ in the proc file system. overcommit_memory Defines the conditions that determine whether a large memory request is accepted or denied. There are three possible values for this parameter: 0 - The default setting. The kernel performs heuristic memory overcommit handling by estimating the amount of memory available and failing requests that are blatantly invalid. Unfortunately, since memory is allocated using a heuristic rather than a precise algorithm, this setting can sometimes allow available memory on the system to be overloaded. 1 - The kernel performs no memory overcommit handling. Under this setting, the potential for memory overload is increased, but so is performance for memory-intensive tasks. 2 - The kernel denies requests for memory equal to or larger than the sum of total available swap and the percentage of physical RAM specified in overcommit_ratio . This setting is best if you want a lesser risk of memory overcommitment. Note This setting is only recommended for systems with swap areas larger than their physical memory. 
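The effect of setting overcommit_memory to 2 can be illustrated with the arithmetic the description implies: the commit limit is swap plus overcommit_ratio percent of physical RAM. The sizes below are assumed values for illustration, not figures read from a live system.

```shell
ram_kb=8388608        # assumed: 8 GiB of physical RAM, in kB
swap_kb=2097152       # assumed: 2 GiB of swap, in kB
overcommit_ratio=50   # the default value of overcommit_ratio

# With overcommit_memory=2:
# commit limit = swap + RAM * overcommit_ratio / 100
limit_kb=$(( swap_kb + ram_kb * overcommit_ratio / 100 ))
echo "CommitLimit: ${limit_kb} kB"
```

With these assumed values, requests that would push committed memory past 6 GiB are denied.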
overcommit_ratio Specifies the percentage of physical RAM considered when overcommit_memory is set to 2 . The default value is 50 . max_map_count Defines the maximum number of memory map areas that a process may use. In most cases, the default value of 65530 is appropriate. Increase this value if your application needs to map more than this number of files. nr_hugepages Defines the number of hugepages configured in the kernel. The default value is 0. It is only possible to allocate (or deallocate) hugepages if there are sufficient physically contiguous free pages in the system. Pages reserved by this parameter cannot be used for other purposes. Further information is available from the installed documentation: /usr/share/doc/kernel-doc- kernel_version /Documentation/vm/hugetlbpage.txt . For an Oracle database workload, Red Hat recommends configuring a number of hugepages equivalent to slightly more than the total size of the system global area of all databases running on the system. 5 additional hugepages per database instance is sufficient. Capacity-related Kernel Tunables Default values for the following parameters, located in the /proc/sys/kernel/ directory, can be calculated by the kernel at boot time depending on available system resources. To determine the page size, enter: To determine the huge page size, enter: msgmax Defines the maximum allowable size in bytes of any single message in a message queue. This value must not exceed the size of the queue ( msgmnb ). To determine the current msgmax value on your system, enter: msgmnb Defines the maximum size in bytes of a single message queue. To determine the current msgmnb value on your system, enter: msgmni Defines the maximum number of message queue identifiers (and therefore the maximum number of queues). To determine the current msgmni value on your system, enter: sem Semaphores, counters that help synchronize processes and threads, are generally configured to assist with database workloads. 
Recommended values vary between databases. See your database documentation for details about semaphore values. This parameter takes four values, separated by spaces, that represent SEMMSL, SEMMNS, SEMOPM, and SEMMNI respectively. shmall Defines the total number of shared memory pages that can be used on the system at one time. For database workloads, Red Hat recommends that this value is set to the result of shmmax divided by the hugepage size. However, Red Hat recommends checking your vendor documentation for recommended values. To determine the current shmall value on your system, enter: shmmax Defines the maximum shared memory segment allowed by the kernel, in bytes. For database workloads, Red Hat recommends a value no larger than 75% of the total memory on the system. However, Red Hat recommends checking your vendor documentation for recommended values. To determine the current shmmax value on your system, enter: shmmni Defines the system-wide maximum number of shared memory segments. The default value is 4096 on all systems. threads-max Defines the system-wide maximum number of threads (tasks) to be used by the kernel at one time. To determine the current threads-max value on your system, enter: The default value is the result of: The minimum value of threads-max is 20 . Capacity-related File System Tunables Each of the following parameters is located under /proc/sys/fs/ in the proc file system. aio-max-nr Defines the maximum allowed number of events in all active asynchronous I/O contexts. The default value is 65536 . Note that changing this value does not pre-allocate or resize any kernel data structures. file-max Lists the maximum number of file handles that the kernel allocates. The default value matches the value of files_stat.max_files in the kernel, which is set to the largest value out of either (mempages * (PAGE_SIZE / 1024)) / 10 , or NR_FILE (8192 in Red Hat Enterprise Linux). 
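The threads-max default formula can be evaluated with assumed values; THREAD_SIZE and PAGE_SIZE are architecture-dependent, so the numbers below are illustrative only.

```shell
mempages=2097152   # assumed: 8 GiB of RAM with 4 KiB pages
thread_size=8192   # assumed THREAD_SIZE in bytes (varies by architecture)
page_size=4096     # typical x86 PAGE_SIZE

# threads-max default = mempages / (8 * THREAD_SIZE / PAGE_SIZE)
threads_max=$(( mempages / (8 * thread_size / page_size) ))
echo "$threads_max"
```

With these assumptions the kernel would default threads-max to 131072, well above the documented minimum of 20.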
Raising this value can resolve errors caused by a lack of available file handles. Out-of-Memory Kill Tunables Out of Memory (OOM) refers to a computing state where all available memory, including swap space, has been allocated. By default, this situation causes the system to panic and stop functioning as expected. However, setting the /proc/sys/vm/panic_on_oom parameter to 0 instructs the kernel to call the oom_killer function when OOM occurs. Usually, oom_killer can kill rogue processes and the system survives. The following parameter can be set on a per-process basis, giving you increased control over which processes are killed by the oom_killer function. It is located under /proc/ pid / in the proc file system, where pid is the process ID number. oom_adj Defines a value from -16 to 15 that helps determine the oom_score of a process. The higher the oom_score value, the more likely the process will be killed by the oom_killer . Setting a oom_adj value of -17 disables the oom_killer for that process. Important Any processes spawned by an adjusted process will inherit that process's oom_score . For example, if an sshd process is protected from the oom_killer function, all processes initiated by that SSH session will also be protected. This can affect the oom_killer function's ability to salvage the system if OOM occurs.
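The oom_adj value range described above can be sketched as a small classification helper. This is only an illustration of the documented range, not a kernel interface; on a real system the value is written as root to /proc/ pid /oom_adj.

```shell
# Classify an oom_adj value per the range described above:
# -17 disables the oom_killer for the process; -16..15 bias oom_score.
describe_oom_adj() {
    if [ "$1" -eq -17 ]; then
        echo "oom_killer disabled for this process"
    elif [ "$1" -ge -16 ] && [ "$1" -le 15 ]; then
        echo "bias applied to oom_score: $1"
    else
        echo "out of range"
    fi
}

describe_oom_adj -17
describe_oom_adj 10
describe_oom_adj 20
```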
[ "echo 1 > /proc/sys/vm/overcommit_memory", "cat /proc/meminfo | grep MemAvailable", "getconf PAGE_SIZE", "grep Hugepagesize /proc/meminfo", "sysctl kernel.msgmax", "sysctl kernel.msgmnb", "sysctl kernel.msgmni", "sysctl kernel.shmall", "sysctl kernel.shmmax", "sysctl kernel.threads-max", "mempages / (8 * THREAD_SIZE / PAGE_SIZE )" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/performance_tuning_guide/s-memory-captun
14.2. ID Range Assignments During Installation
14.2. ID Range Assignments During Installation During server installation, the ipa-server-install command by default automatically assigns a random current ID range to the installed server. The setup script randomly selects a range of 200,000 IDs from a total of 10,000 possible ranges. Selecting a random range in this way significantly reduces the probability of conflicting IDs in case you decide to merge two separate IdM domains in the future. However, you can define a current ID range manually during server installation by using the following two options with ipa-server-install : --idstart gives the starting value for UID and GID numbers; by default, the value is selected at random, --idmax gives the maximum UID and GID number; by default, the value is the --idstart starting value plus 199,999. If you have a single IdM server installed, a new user or group entry receives a random ID from the whole range. When you install a new replica and the replica requests its own ID range, the initial ID range for the server splits and is distributed between the server and replica: the replica receives half of the remaining ID range that is available on the initial master. The server and replica then use their respective portions of the original ID range for new entries. Also, if less than 100 IDs from the ID range that was assigned to a replica remain, meaning the replica is close to depleting its allocated ID range, the replica contacts the other available servers with a request for a new ID range. A server receives an ID range the first time the DNA plug-in is used; until then, the server has no ID range defined. For example, when you create a replica from a master server, the replica does not receive an ID range immediately. The replica requests an ID range from the initial master only when the first ID is about to be assigned on the replica. 
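The split described above is simple arithmetic. A sketch with an illustrative starting value follows; real --idstart values are selected at random by ipa-server-install.

```shell
idstart=1862000000             # illustrative --idstart value (assumed)
idmax=$(( idstart + 199999 ))  # default --idmax: idstart + 199,999
range=$(( idmax - idstart + 1 ))

# When the first replica requests a range, it receives half of what
# remains available on the master (here: the full, untouched range).
replica_share=$(( range / 2 ))
echo "master keeps ${replica_share} IDs, replica receives ${replica_share} IDs"
```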
Note If the initial master stops functioning before the replica requests an ID range from it, the replica is unable to contact the master with a request for the ID range. An attempt to add a new user on the replica fails. In such situations, you can find out what ID range is assigned to the disabled master and assign an ID range to the replica manually, which is described in Section 14.5, "Manual ID Range Extension and Assigning a New ID Range" .
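The default range arithmetic and the replica split described above can be modeled in a few lines. This is a simplified illustration only: the real assignment is performed by ipa-server-install and the 389 Directory Server DNA plug-in, and the function names here are invented for the example.

```python
# Simplified model of IdM ID range defaults and replica splitting.
# Illustrative only: real ranges are assigned by ipa-server-install
# and the 389 Directory Server DNA plug-in.

def default_range(idstart):
    """--idmax defaults to the --idstart value plus 199,999 (200,000 IDs)."""
    return (idstart, idstart + 199_999)

def split_for_replica(server_range):
    """A new replica receives half of the remaining ID range on the server."""
    start, end = server_range
    half = (end - start + 1) // 2
    return (start, start + half - 1), (start + half, end)

server = default_range(idstart=896_400_000)
server_part, replica_part = split_for_replica(server)
print(server)        # (896400000, 896599999)
print(server_part)   # (896400000, 896499999)
print(replica_part)  # (896500000, 896599999)
```

After the split, the server and replica assign new IDs only from their respective halves, which is why two servers never hand out the same UID.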
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/id-ranges-at-install
Chapter 10. Managing errata
Chapter 10. Managing errata As a part of Red Hat's quality control and release process, we provide customers with updates for each release of official Red Hat RPMs. Red Hat compiles groups of related packages into an erratum along with an advisory that provides a description of the update. There are three types of advisories (in order of importance): Security Advisory Describes fixed security issues found in the package. The security impact of the issue can be Low, Moderate, Important, or Critical. Bug Fix Advisory Describes bug fixes for the package. Product Enhancement Advisory Describes enhancements and new features added to the package. Red Hat Satellite imports this errata information when synchronizing repositories with Red Hat's Content Delivery Network (CDN). Red Hat Satellite also provides tools to inspect and filter errata, allowing for precise update management. This way, you can select relevant updates and propagate them through content views to selected content hosts. Errata are labeled according to the most important advisory type they contain. Therefore, errata labeled as Product Enhancement Advisory can contain only enhancement updates, while Bug Fix Advisory errata can contain both bug fixes and enhancements, and Security Advisory can contain all three types. In Red Hat Satellite, there are two keywords that describe an erratum's relationship to the available content hosts: Applicable An erratum that applies to one or more content hosts, which means it updates packages present on the content host. Although these errata apply to content hosts, until their state changes to Installable , the errata are not ready to be installed. Installable errata are automatically applicable. Installable An erratum that applies to one or more content hosts and is available to install on the content host. Installable errata are available to a content host from lifecycle environment and the associated content view, but are not yet installed. 
This chapter shows how to manage errata and apply them to either a single host or multiple hosts. 10.1. Best practices for errata Use errata to add patches for security issues to a frozen set of content without unnecessarily updating other unaffected packages. Automate errata management by using a Hammer script or an Ansible playbook . View errata on the content hosts page and compare the errata of the current content view and lifecycle environment to the Library lifecycle environment, which contains the latest synchronized packages. You can only apply errata included in the content view version of the lifecycle of your host. You can view applicable errata as a recommendation to create an incremental content view to provide errata to hosts. For more information, see Section 10.8, "Adding errata to an incremental content view" . 10.2. Inspecting available errata The following procedure describes how to view and filter the available errata and how to display metadata of the selected advisory. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata to view the list of available errata. Use the filtering tools at the top of the page to limit the number of displayed errata: Select the repository to be inspected from the list. All Repositories is selected by default. The Applicable checkbox is selected by default to view only applicable errata in the selected repository. Select the Installable checkbox to view only errata marked as installable. To search the table of errata, type the query in the Search field in the form of: See Section 10.3, "Parameters available for errata search" for the list of parameters available for search. Find the list of applicable operators in Supported Operators for Granular Search in Administering Red Hat Satellite . Automatic suggestion works as you type. You can also combine queries with the use of and and or operators. 
For example, to display only security advisories related to the kernel package, type: Press Enter to start the search. Click the Errata ID of the erratum you want to inspect: The Details tab contains the description of the updated package as well as documentation of important fixes and enhancements provided by the update. On the Content Hosts tab, you can apply the erratum to selected content hosts as described in Section 10.10, "Applying errata to multiple hosts" . The Repositories tab lists repositories that already contain the erratum. You can filter repositories by the environment and content view, and search for them by the repository name. You can also use the new Host page to inspect available errata and select errata to install. In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, an Installable Errata card on the new Host page displays an interactive pie chart showing a breakdown of the security advisories, bugfixes, and enhancements. On the new Host page, select the Content tab. On the Content page, select the Errata tab. The page displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Select Apply via Remote Execution to use Remote Execution, or Apply via customized remote execution if you want to customize the remote execution. Click Submit . CLI procedure To view errata that are available for all organizations, enter the following command: To view details of a specific erratum, enter the following command: You can search errata by entering the query with the --search option. For example, to view applicable errata for the selected product that contain the specified bugs, ordered so that the security errata are displayed on top, enter the following command: 10.3. Parameters available for errata search Parameter Description Example bug Search by the Bugzilla number. bug = 1172165 cve Search by the CVE number. 
cve = CVE-2015-0235 id Search by the errata ID. The auto-suggest system displays a list of available IDs as you type. id = RHBA-2014:2004 issued Search by the issue date. You can specify the exact date, like "Feb16,2015", or use keywords, for example "Yesterday", or "1 hour ago". The time range can be specified with the use of the "<" and ">" operators. issued < "Jan 12,2015" package Search by the full package build name. The auto-suggest system displays a list of available packages as you type. package = glib2-2.22.5-6.el6.i686 package_name Search by the package name. The auto-suggest system displays a list of available packages as you type. package_name = glib2 severity Search by the severity of the issue fixed by the security update. Specify Critical , Important , or Moderate . severity = Critical title Search by the advisory title. title ~ openssl type Search by the advisory type. Specify security , bugfix , or enhancement . type = bugfix updated Search by the date of the last update. You can use the same formats as with the issued parameter. updated = "6 days ago" 10.4. Applying installable errata Use the following procedure to view a list of installable errata and select errata to install. Procedure In the Satellite web UI, navigate to Hosts > All Hosts and select the host you require. If there are errata associated with the host, they are displayed in an Installable Errata card on the new Host page. On the Content tab, Errata displays installable errata for the chosen host. Click the checkbox for any errata you wish to install. Using the vertical ellipsis icon next to the errata you want to add to the host, select Apply via Remote Execution to use Remote Execution. Select Apply via customized remote execution if you want to customize the remote execution. Click Submit . 10.5. Subscribing to errata notifications You can configure email notifications for Satellite users. 
Users receive a summary of applicable and installable errata, notifications on content view promotion or after synchronizing a repository. For more information, see Configuring Email Notification Preferences in Administering Red Hat Satellite . 10.6. Limitations to repository dependency resolution With Satellite, using incremental updates to your content views solves some repository dependency problems. However, dependency resolution at a repository level still remains problematic on occasion. When a repository update becomes available with a new dependency, Satellite retrieves the newest version of the package to solve the dependency, even if there are older versions available in the existing repository package. This can create further dependency resolution problems when installing packages. Example scenario A repository on your client has the package example_repository-1.0 with the dependency example_repository-libs-1.0 . The repository also has another package example_tools-1.0 . A security erratum becomes available with the package example_tools-1.1 . The example_tools-1.1 package requires the example_repository-libs-1.1 package as a dependency. After an incremental content view update, the example_tools-1.1 , example_tools-1.0 , and example_repository-libs-1.1 are now in the repository. The repository also has the packages example_repository-1.0 and example_repository-libs-1.0 . Note that the incremental update to the content view did not add the package example_repository-1.1 . Because you can install all these packages using dnf , no potential problem is detected. However, when the client installs the example_tools-1.1 package, a dependency resolution problem occurs because both example_repository-libs-1.0 and example_repository-libs-1.1 cannot be installed. There is currently no workaround for this problem. 
The larger the time frame, and minor Y releases between the base set of packages and the errata being applied, the higher the chance of a problem with dependency resolution. 10.7. Creating a content view filter for errata You can use content filters to limit errata. Such filters include: ID - Select specific erratum to allow into your resulting repositories. Date Range - Define a date range and include a set of errata released during that date range. Type - Select the type of errata to include such as bug fixes, enhancements, and security updates. Create a content filter to exclude errata after a certain date. This ensures your production systems in the application lifecycle are kept up to date to a certain point. Then you can modify the filter's start date to introduce new errata into your testing environment to test the compatibility of new packages into your application lifecycle. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites A content view with the repositories that contain required errata is created. For more information, see Section 7.4, "Creating a content view" . Procedure In the Satellite web UI, navigate to Content > Lifecycle > Content Views . Select a content view that you want to use for applying errata. Select Yum Content > Filters and click New Filter . In the Name field, enter Errata Filter . From the Content Type list, select Erratum - Date and Type . From the Inclusion Type list, select Exclude . In the Description field, enter Exclude errata items from YYYY-MM-DD . Click Save . For Errata Type , select the checkboxes of errata types you want to exclude. For example, select the Enhancement and Bugfix checkboxes and clear the Security checkbox to exclude enhancement and bugfix errata after certain date, but include all the security errata. For Date Type , select one of two checkboxes: Issued On for the issued date of the erratum. Updated On for the date of the erratum's last update. 
Select the Start Date to exclude all errata on or after the selected date. Leave the End Date field blank. Click Save . Click Publish New Version to publish the resulting repository. Enter Adding errata filter in the Description field. Click Save . When the content view completes publication, notice the Content column reports a reduced number of packages and errata from the initial repository. This means the filter successfully excluded all non-security errata from the last year. Click the Versions tab. Click Promote to the right of the published version. Select the environments you want to promote the content view version to. In the Description field, enter the description for promoting. Click Promote Version to promote this content view version across the required environments. CLI procedure Create a filter for the errata: Create a filter rule to exclude all errata on or after the Start Date that you want to set: Publish the content view: Promote the content view to the lifecycle environment so that the included errata are available to that lifecycle environment: 10.8. Adding errata to an incremental content view If errata are available but not installable, you can create an incremental content view version to add the errata to your content hosts. For example, if the content view is version 1.0, it becomes content view version 1.1, and when you publish, it becomes content view version 2.0. Important If your content view version is old, you might encounter incompatibilities when incrementally adding enhancement errata. This is because enhancements are typically designed for the most current software in a repository. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata . From the Errata list, click the name of the errata that you want to apply. Select the content hosts that you want to apply the errata to, and click Apply to Hosts . 
This creates the incremental update to the content view. If you want to apply the errata to the content host, select the Apply Errata to Content Hosts immediately after publishing checkbox. Click Confirm to apply the errata. CLI procedure List the errata and their corresponding IDs: List the different content-view versions and the corresponding IDs: Apply a single erratum to a content view version. You can add more IDs in a comma-separated list. 10.9. Applying errata to hosts Use these procedures to review and apply errata to hosts. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 4.7, "Synchronizing repositories" . Register the host to an environment and content view on Satellite Server. For more information, see Registering Hosts in Managing hosts . Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in Managing hosts . The procedure to apply an erratum to a host depends on its operating system. 10.9.1. Applying errata to hosts running Red Hat Enterprise Linux 9 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 9. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Find the module stream an erratum belongs to: On the host, update the module stream: 10.9.2. 
Applying errata to hosts running Red Hat Enterprise Linux 8 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 8. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Find the module stream an erratum belongs to: On the host, update the module stream: 10.9.3. Applying errata to hosts running Red Hat Enterprise Linux 7 Use this procedure to review and apply errata to a host running Red Hat Enterprise Linux 7. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Content Hosts and select the host you want to apply errata to. Navigate to the Errata tab to see the list of errata. Select the errata to apply and click Apply Selected . In the confirmation window, click Apply . After the task to update all packages associated with the selected errata completes, click the Details tab to view the updated packages. CLI procedure List all errata for the host: Apply the most recent erratum to the host. Identify the erratum to apply using the erratum ID. Using Remote Execution 10.10. Applying errata to multiple hosts Use these procedures to review and apply errata to multiple RHEL hosts. Prerequisites Synchronize Red Hat Satellite repositories with the latest errata available from Red Hat. For more information, see Section 4.7, "Synchronizing repositories" . Register the hosts to an environment and content view on Satellite Server. For more information, see Registering Hosts in Managing hosts . 
Configure the host for remote execution. For more information about running remote execution jobs, see Configuring and Setting Up Remote Jobs in Managing hosts . Procedure In the Satellite web UI, navigate to Content > Content Types > Errata . Click the name of an erratum you want to apply. Click the Content Hosts tab. Select the hosts you want to apply errata to and click Apply to Hosts . Click Confirm . CLI procedure List all installable errata: Apply one of the errata to multiple hosts: Using Remote Execution The following Bash script applies an erratum to each host for which this erratum is available: for HOST in $(hammer --csv --csv-separator "|" host list --search "applicable_errata = ERRATUM_ID" --organization "Default Organization" | tail -n+2 | awk -F "|" '{ print $2 }') ; do echo "== Applying to $HOST ==" ; hammer host errata apply --host $HOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done This command identifies all hosts for which the erratum is applicable and then applies the erratum to each host. To see if an erratum is applied successfully, find the corresponding task in the output of the following command: View the state of a selected task: 10.11. Applying errata to a host collection Using Remote Execution
[ "parameter operator value", "type = security and package_name = kernel", "hammer erratum list", "hammer erratum info --id erratum_ID", "hammer erratum list --product-id 7 --search \"bug = 1213000 or bug = 1207972\" --errata-restrict-applicable 1 --order \"type desc\"", "hammer content-view filter create --content-view \" My_Content_View \" --description \"Exclude errata items from the YYYY-MM-DD \" --name \" My_Filter_Name \" --organization \" My_Organization \" --type \"erratum\"", "hammer content-view filter rule create --content-view \" My_Content_View \" --content-view-filter=\" My_Content_View_Filter \" --organization \" My_Organization \" --start-date \" YYYY-MM-DD \" --types=security,enhancement,bugfix", "hammer content-view publish --name \" My_Content_View \" --organization \" My_Organization \"", "hammer content-view version promote --content-view \" My_Content_View \" --organization \" My_Organization \" --to-lifecycle-environment \" My_Lifecycle_Environment \"", "hammer erratum list", "hammer content-view version list", "hammer content-view version incremental-update --content-view-version-id 319 --errata-ids 34068b", "hammer host errata list --host client.example.com", "hammer erratum info --id ERRATUM_ID", "dnf upgrade Module_Stream_Name", "hammer host errata list --host client.example.com", "hammer erratum info --id ERRATUM_ID", "dnf upgrade Module_Stream_Name", "hammer host errata list --host client.example.com", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 --search-query \"name = client.example.com\"", "hammer erratum list --errata-restrict-installable true --organization \" Default Organization \"", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID --search-query \"applicable_errata = ERRATUM_ID \"", "for HOST in $(hammer --csv --csv-separator \"|\" host list --search \"applicable_errata = ERRATUM_ID\" --organization \"Default Organization\" | tail -n+2 | awk -F \"|\" '{ print $2 }') ; do echo \"== Applying to $HOST ==\" ; hammer host errata apply --host $HOST --errata-ids ERRATUM_ID1,ERRATUM_ID2 ; done", "hammer task list", "hammer task progress --id task_ID", "hammer job-invocation create --feature katello_errata_install --inputs errata= ERRATUM_ID1 , ERRATUM_ID2 ,... --search-query \"host_collection = HOST_COLLECTION_NAME \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/managing_content/Managing_Errata_content-management
Chapter 28. domain
Chapter 28. domain This chapter describes the commands under the domain command. 28.1. domain create Create new domain Usage: Table 28.1. Positional arguments Value Summary <domain-name> New domain name Table 28.2. Command arguments Value Summary -h, --help Show this help message and exit --description <description> New domain description --enable Enable domain (default) --disable Disable domain --or-show Return existing domain --immutable Make resource immutable. An immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) Table 28.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 28.4. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 28.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 28.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 28.2. domain delete Delete domain(s) Usage: Table 28.7. Positional arguments Value Summary <domain> Domain(s) to delete (name or id) Table 28.8. Command arguments Value Summary -h, --help Show this help message and exit 28.3. domain list List domains Usage: Table 28.9. Command arguments Value Summary -h, --help Show this help message and exit --name <name> The domain name --enabled The domains that are enabled will be returned Table 28.10. 
Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 28.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 28.12. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 28.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 28.4. domain set Set domain properties Usage: Table 28.14. Positional arguments Value Summary <domain> Domain to modify (name or id) Table 28.15. Command arguments Value Summary -h, --help Show this help message and exit --name <name> New domain name --description <description> New domain description --enable Enable domain --disable Disable domain --immutable Make resource immutable. An immutable project may not be deleted or modified except to remove the immutable flag --no-immutable Make resource mutable (default) 28.5. domain show Display domain details Usage: Table 28.16. Positional arguments Value Summary <domain> Domain to display (name or id) Table 28.17. Command arguments Value Summary -h, --help Show this help message and exit Table 28.18. 
Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 28.19. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 28.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 28.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack domain create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--description <description>] [--enable | --disable] [--or-show] [--immutable | --no-immutable] <domain-name>", "openstack domain delete [-h] <domain> [<domain> ...]", "openstack domain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--name <name>] [--enabled]", "openstack domain set [-h] [--name <name>] [--description <description>] [--enable | --disable] [--immutable | --no-immutable] <domain>", "openstack domain show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <domain>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/domain
Chapter 1. Support policy
Chapter 1. Support policy Red Hat will support select major versions of Red Hat build of OpenJDK in its products. For consistency, these are the same versions that Oracle designates as long-term support (LTS) for the Oracle JDK. A major version of Red Hat build of OpenJDK will be supported for a minimum of six years from the time that version is first introduced. For more information, see the OpenJDK Life Cycle and Support Policy . Note RHEL 6 reached the end of life in November 2020. Because of this, RHEL 6 is no longer a supported configuration for Red Hat build of OpenJDK.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/release_notes_for_red_hat_build_of_openjdk_8.0.412/openjdk8-support-policy
4.16. Thumbnail Protection
4.16. Thumbnail Protection Thumbnail icons can potentially allow an attacker to break into a locked machine using removable media, such as USB devices or CDs. When the system detects removable media, the Nautilus file manager executes the thumbnail driver code to display thumbnail icons in an appropriate file browser even if the machine is locked. This behavior is unsafe because if a thumbnail executable is vulnerable, an attacker could use the thumbnail driver code to bypass the lock screen without entering the password. Therefore, a new SELinux policy is used to prevent such attacks. This policy ensures that all thumbnail drivers are locked when the screen is locked. The thumbnail protection is enabled for both confined users and unconfined users. This policy affects the following applications: /usr/bin/evince-thumbnailer /usr/bin/ffmpegthumbnailer /usr/bin/gnome-exe-thumbnailer.sh /usr/bin/gnome-nds-thumbnailer /usr/bin/gnome-xcf-thumbnailer /usr/bin/gsf-office-thumbnailer /usr/bin/raw-thumbnailer /usr/bin/shotwell-video-thumbnailer /usr/bin/totem-video-thumbnailer /usr/bin/whaaw-thumbnailer /usr/lib/tumbler-1/tumblerd /usr/lib64/tumbler-1/tumblerd
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/sect-thumbnail_protection
High Availability Add-On Overview
High Availability Add-On Overview Red Hat Enterprise Linux 6 Overview of the High Availability Add-On for Red Hat Enterprise Linux Steven Levine Red Hat Customer Content Services [email protected] John Ha Red Hat Engineering Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/high_availability_add-on_overview/index
Chapter 3. User tasks
Chapter 3. User tasks 3.1. Creating applications from installed Operators This guide walks developers through an example of creating applications from an installed Operator using the OpenShift Container Platform web console. 3.1.1. Creating an etcd cluster using an Operator This procedure walks through creating a new etcd cluster using the etcd Operator, managed by Operator Lifecycle Manager (OLM). Prerequisites Access to an OpenShift Container Platform 4.7 cluster. The etcd Operator already installed cluster-wide by an administrator. Procedure Create a new project in the OpenShift Container Platform web console for this procedure. This example uses a project called my-etcd . Navigate to the Operators > Installed Operators page. The Operators that have been installed to the cluster by the cluster administrator and are available for use are shown here as a list of cluster service versions (CSVs). CSVs are used to launch and manage the software provided by the Operator. Tip You can get this list from the CLI using: $ oc get csv On the Installed Operators page, click the etcd Operator to view more details and available actions. As shown under Provided APIs , this Operator makes available three new resource types, including one for an etcd Cluster (the EtcdCluster resource). These objects work similarly to the built-in native Kubernetes ones, such as Deployment or ReplicaSet , but contain logic specific to managing etcd. Create a new etcd cluster: In the etcd Cluster API box, click Create instance . The screen allows you to make any modifications to the minimal starting template of an EtcdCluster object, such as the size of the cluster. For now, click Create to finalize. This triggers the Operator to start up the pods, services, and other components of the new etcd cluster. Click on the example etcd cluster, then click the Resources tab to see that your project now contains a number of resources created and configured automatically by the Operator. 
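The minimal starting template mentioned above is an EtcdCluster custom resource. A manifest in the shape documented by the upstream etcd Operator might look like the following; the metadata values and spec.version are illustrative and depend on your project and Operator release.

```yaml
# Hedged sketch of an EtcdCluster custom resource; field values are
# illustrative, not a definitive template.
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
  namespace: my-etcd
spec:
  size: 3            # number of etcd members; the value you would typically edit
  version: "3.2.13"  # etcd version the Operator deploys; illustrative
```

Editing spec.size in this template before clicking Create is what changes the size of the cluster the Operator builds.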
Verify that a Kubernetes service has been created that allows you to access the database from other pods in your project. All users with the edit role in a given project can create, manage, and delete application instances (an etcd cluster, in this example) managed by Operators that have already been created in the project, in a self-service manner, just like a cloud service. If you want to grant additional users this ability, project administrators can add the role using the following command: $ oc policy add-role-to-user edit <user> -n <target_project> You now have an etcd cluster that will react to failures and rebalance data as pods become unhealthy or are migrated between nodes in the cluster. Most importantly, cluster administrators or developers with proper access can now easily use the database with their applications. 3.2. Installing Operators in your namespace If a cluster administrator has delegated Operator installation permissions to your account, you can install and subscribe an Operator to your namespace in a self-service manner. 3.2.1. Prerequisites A cluster administrator must add certain permissions to your OpenShift Container Platform user account to allow self-service Operator installation to a namespace. See Allowing non-cluster administrators to install Operators for details. 3.2.2. Operator installation with OperatorHub OperatorHub is a user interface for discovering Operators; it works in conjunction with Operator Lifecycle Manager (OLM), which installs and manages Operators on a cluster. As a user with the proper permissions, you can install an Operator from OperatorHub using the OpenShift Container Platform web console or CLI. During installation, you must determine the following initial settings for the Operator: Installation Mode Choose a specific namespace in which to install the Operator. Update Channel If an Operator is available through multiple channels, you can choose which channel you want to subscribe to.
For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy You can choose automatic or manual updates. If you choose automatic updates for an installed Operator, when a new version of that Operator is available in the selected channel, Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without human intervention. If you select manual updates, when a newer version of an Operator is available, OLM creates an update request. As a cluster administrator, you must then manually approve that update request to have the Operator updated to the new version. Understanding OperatorHub 3.2.3. Installing from OperatorHub using the web console You can install and subscribe to an Operator from OperatorHub using the OpenShift Container Platform web console. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Procedure Navigate in the web console to the Operators → OperatorHub page. Scroll or type a keyword into the Filter by keyword box to find the Operator you want. For example, type advanced to find the Advanced Cluster Management for Kubernetes Operator. You can also filter options by Infrastructure Features . For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments. Select the Operator to display additional information. Note If you choose a Community Operator, a warning appears stating that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. Read the information about the Operator and click Install . On the Install Operator page: Choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. Select an Update Channel (if more than one is available). Select Automatic or Manual approval strategy, as described earlier.
Click Install to make the Operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a Manual approval strategy, the upgrade status of the subscription remains Upgrading until you review and approve the install plan. After approving on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an Automatic approval strategy, the upgrade status should resolve to Up to date without intervention. After the upgrade status of the subscription is Up to date , select Operators → Installed Operators to verify that the cluster service version (CSV) of the installed Operator eventually shows up. The Status should ultimately resolve to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status resolves to InstallSucceeded in the openshift-operators namespace, but the status is Copied if you check in other namespaces. If it does not: Check the logs in any pods that are reporting issues in the openshift-operators project (or other relevant namespace if the A specific namespace... installation mode was selected) on the Workloads → Pods page to troubleshoot further. 3.2.4. Installing from OperatorHub using the CLI Instead of using the OpenShift Container Platform web console, you can install an Operator from OperatorHub using the CLI. Use the oc command to create or update a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions. Install the oc command to your local system. Procedure View the list of Operators available to the cluster from OperatorHub: $ oc get packagemanifests -n openshift-marketplace Example output
NAME CATALOG AGE
3scale-operator Red Hat Operators 91m
advanced-cluster-management Red Hat Operators 91m
amq7-cert-manager Red Hat Operators 91m
...
couchbase-enterprise-certified Certified Operators 91m
crunchy-postgres-operator Certified Operators 91m
mongodb-enterprise Certified Operators 91m
...
etcd Community Operators 91m
jaeger Community Operators 91m
kubefed Community Operators 91m
...
Note the catalog for your desired Operator. Inspect your desired Operator to verify its supported install modes and available channels: $ oc describe packagemanifests <operator_name> -n openshift-marketplace An Operator group, defined by an OperatorGroup object, selects target namespaces in which to generate required RBAC access for all Operators in the same namespace as the Operator group. The namespace to which you subscribe the Operator must have an Operator group that matches the install mode of the Operator, either the AllNamespaces or SingleNamespace mode. If the Operator you intend to install uses the AllNamespaces mode, then the openshift-operators namespace already has an appropriate Operator group in place. However, if the Operator uses the SingleNamespace mode and you do not already have an appropriate Operator group in place, you must create one. Note The web console version of this procedure handles the creation of the OperatorGroup and Subscription objects automatically behind the scenes for you when choosing SingleNamespace mode.
Create an OperatorGroup object YAML file, for example operatorgroup.yaml : Example OperatorGroup object

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: <operatorgroup_name>
  namespace: <namespace>
spec:
  targetNamespaces:
  - <namespace>

Create the OperatorGroup object: $ oc apply -f operatorgroup.yaml Create a Subscription object YAML file to subscribe a namespace to an Operator, for example sub.yaml : Example Subscription object

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: <subscription_name>
  namespace: openshift-operators 1
spec:
  channel: <channel_name> 2
  name: <operator_name> 3
  source: redhat-operators 4
  sourceNamespace: openshift-marketplace 5
  config:
    env: 6
    - name: ARGS
      value: "-v=10"
    envFrom: 7
    - secretRef:
        name: license-secret
    volumes: 8
    - name: <volume_name>
      configMap:
        name: <configmap_name>
    volumeMounts: 9
    - mountPath: <directory_name>
      name: <volume_name>
    tolerations: 10
    - operator: "Exists"
    resources: 11
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    nodeSelector: 12
      foo: bar

1 For AllNamespaces install mode usage, specify the openshift-operators namespace. Otherwise, specify the relevant single namespace for SingleNamespace install mode usage.
2 Name of the channel to subscribe to.
3 Name of the Operator to subscribe to.
4 Name of the catalog source that provides the Operator.
5 Namespace of the catalog source. Use openshift-marketplace for the default OperatorHub catalog sources.
6 The env parameter defines a list of Environment Variables that must exist in all containers in the pod created by OLM.
7 The envFrom parameter defines a list of sources to populate Environment Variables in the container.
8 The volumes parameter defines a list of Volumes that must exist on the pod created by OLM.
9 The volumeMounts parameter defines a list of VolumeMounts that must exist in all containers in the pod created by OLM.
If a volumeMount references a volume that does not exist, OLM fails to deploy the Operator.
10 The tolerations parameter defines a list of Tolerations for the pod created by OLM.
11 The resources parameter defines resource constraints for all the containers in the pod created by OLM.
12 The nodeSelector parameter defines a NodeSelector for the pod created by OLM.
Create the Subscription object: $ oc apply -f sub.yaml At this point, OLM is aware of the selected Operator. A cluster service version (CSV) for the Operator should appear in the target namespace, and APIs provided by the Operator should be available for creation. Additional resources Operator groups Channel names 3.2.5. Installing a specific version of an Operator You can install a specific version of an Operator by setting the cluster service version (CSV) in a Subscription object. Prerequisites Access to an OpenShift Container Platform cluster using an account with Operator installation permissions OpenShift CLI ( oc ) installed Procedure Create a Subscription object YAML file that subscribes a namespace to an Operator with a specific version by setting the startingCSV field. Set the installPlanApproval field to Manual to prevent the Operator from automatically upgrading if a later version exists in the catalog. For example, the following sub.yaml file can be used to install the Red Hat Quay Operator specifically to version 3.4.0: Subscription with a specific starting Operator version

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: quay-operator
  namespace: quay
spec:
  channel: quay-v3.4
  installPlanApproval: Manual 1
  name: quay-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: quay-operator.v3.4.0 2

1 Set the approval strategy to Manual in case your specified version is superseded by a later version in the catalog.
This plan prevents an automatic upgrade to a later version and requires manual approval before the starting CSV can complete the installation. 2 Set a specific version of an Operator CSV. Create the Subscription object: $ oc apply -f sub.yaml Manually approve the pending install plan to complete the Operator installation. Additional resources Manually approving a pending Operator upgrade
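To choose valid channel and startingCSV values before writing the file, you can inspect the package manifest. This is a hedged sketch: the operator name reuses the example above, and the jsonpath fields are assumptions based on the PackageManifest API, so confirm them with oc explain or oc describe:

```shell
# List each channel of the package and the CSV it currently points to
oc get packagemanifest quay-operator -n openshift-marketplace \
  -o jsonpath='{range .status.channels[*]}{.name}{"\t"}{.currentCSV}{"\n"}{end}'
```

Any CSV name printed this way is a candidate for the startingCSV field of the Subscription object.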
[ "oc get csv", "oc policy add-role-to-user edit <user> -n <target_project>", "oc get packagemanifests -n openshift-marketplace", "NAME CATALOG AGE 3scale-operator Red Hat Operators 91m advanced-cluster-management Red Hat Operators 91m amq7-cert-manager Red Hat Operators 91m couchbase-enterprise-certified Certified Operators 91m crunchy-postgres-operator Certified Operators 91m mongodb-enterprise Certified Operators 91m etcd Community Operators 91m jaeger Community Operators 91m kubefed Community Operators 91m", "oc describe packagemanifests <operator_name> -n openshift-marketplace", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: <operatorgroup_name> namespace: <namespace> spec: targetNamespaces: - <namespace>", "oc apply -f operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: <subscription_name> namespace: openshift-operators 1 spec: channel: <channel_name> 2 name: <operator_name> 3 source: redhat-operators 4 sourceNamespace: openshift-marketplace 5 config: env: 6 - name: ARGS value: \"-v=10\" envFrom: 7 - secretRef: name: license-secret volumes: 8 - name: <volume_name> configMap: name: <configmap_name> volumeMounts: 9 - mountPath: <directory_name> name: <volume_name> tolerations: 10 - operator: \"Exists\" resources: 11 requests: memory: \"64Mi\" cpu: \"250m\" limits: memory: \"128Mi\" cpu: \"500m\" nodeSelector: 12 foo: bar", "oc apply -f sub.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay spec: channel: quay-v3.4 installPlanApproval: Manual 1 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: quay-operator.v3.4.0 2", "oc apply -f sub.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/operators/user-tasks
4.3.9. Splitting a Volume Group
4.3.9. Splitting a Volume Group To split the physical volumes of a volume group and create a new volume group, use the vgsplit command. Logical volumes cannot be split between volume groups. Each existing logical volume must be entirely on the physical volumes forming either the old or the new volume group. If necessary, however, you can use the pvmove command to force the split. The following example splits off the new volume group smallvg from the original volume group bigvg .
[ "vgsplit bigvg smallvg /dev/ram15 Volume group \"smallvg\" successfully split from \"bigvg\"" ]
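When a logical volume still has extents on the physical volume you want to split off, the pvmove step mentioned above must come first. A hedged sketch of the full sequence, with illustrative device names:

```shell
# Show which physical volumes back each logical volume in the group
lvs -o +devices bigvg

# Move any extents off the physical volume that will leave the group
pvmove /dev/sdc1 /dev/sdb1

# Split the now-unused physical volume into the new volume group
vgsplit bigvg smallvg /dev/sdc1
```

After the split, run vgs to confirm that both bigvg and smallvg are active and contain the expected physical volumes.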
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/VG_split
Chapter 1. Hosted control planes overview
Chapter 1. Hosted control planes overview You can deploy OpenShift Container Platform clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With hosted control planes for OpenShift Container Platform, you create control planes as pods on a hosting cluster without the need for dedicated virtual or physical machines for each control plane. 1.1. Glossary of common concepts and personas for hosted control planes When you use hosted control planes for OpenShift Container Platform, it is important to understand its key concepts and the personas that are involved. 1.1.1. Concepts hosted cluster An OpenShift Container Platform cluster with its control plane and API endpoint hosted on a management cluster. The hosted cluster includes the control plane and its corresponding data plane. hosted cluster infrastructure Network, compute, and storage resources that exist in the tenant or end-user cloud account. hosted control plane An OpenShift Container Platform control plane that runs on the management cluster, which is exposed by the API endpoint of a hosted cluster. The components of a control plane include etcd, the Kubernetes API server, the Kubernetes controller manager, and a VPN. hosting cluster See management cluster . managed cluster A cluster that the hub cluster manages. This term is specific to the cluster lifecycle that the multicluster engine for Kubernetes Operator manages in Red Hat Advanced Cluster Management. A managed cluster is not the same thing as a management cluster . For more information, see Managed cluster . management cluster An OpenShift Container Platform cluster where the HyperShift Operator is deployed and where the control planes for hosted clusters are hosted. The management cluster is synonymous with the hosting cluster . 
management cluster infrastructure Network, compute, and storage resources of the management cluster. node pool A resource that contains the compute nodes. The control plane contains node pools. The compute nodes run applications and workloads. 1.1.2. Personas cluster instance administrator Users who assume this role are the equivalent of administrators in standalone OpenShift Container Platform. This user has the cluster-admin role in the provisioned cluster, but might not have power over when or how the cluster is updated or configured. This user might have read-only access to see some configuration projected into the cluster. cluster instance user Users who assume this role are the equivalent of developers in standalone OpenShift Container Platform. This user does not have a view into OperatorHub or machines. cluster service consumer Users who assume this role can request control planes and worker nodes, drive updates, or modify externalized configurations. Typically, this user does not manage or access cloud credentials or infrastructure encryption keys. The cluster service consumer persona can request hosted clusters and interact with node pools. Users who assume this role have RBAC to create, read, update, or delete hosted clusters and node pools within a logical boundary. cluster service provider Users who assume this role typically have the cluster-admin role on the management cluster and have RBAC to monitor and own the availability of the HyperShift Operator as well as the control planes for the tenant's hosted clusters. The cluster service provider persona is responsible for several activities, including the following examples: Owning service-level objects for control plane availability, uptime, and stability Configuring the cloud account for the management cluster to host control planes Configuring the user-provisioned infrastructure, which includes the host awareness of available compute resources 1.2. 
Introduction to hosted control planes You can use hosted control planes for Red Hat OpenShift Container Platform to reduce management costs, optimize cluster deployment time, and separate management and workload concerns so that you can focus on your applications. Hosted control planes is available by using the multicluster engine for Kubernetes Operator version 2.0 or later on the following platforms: Bare metal by using the Agent provider OpenShift Virtualization, as a Generally Available feature in connected environments and a Technology Preview feature in disconnected environments Amazon Web Services (AWS), as a Technology Preview feature IBM Z, as a Technology Preview feature IBM Power, as a Technology Preview feature 1.2.1. Architecture of hosted control planes OpenShift Container Platform is often deployed in a coupled, or standalone, model, where a cluster consists of a control plane and a data plane. The control plane includes an API endpoint, a storage endpoint, a workload scheduler, and an actuator that ensures state. The data plane includes compute, storage, and networking where workloads and applications run. The standalone control plane is hosted by a dedicated group of nodes, which can be physical or virtual, with a minimum number to ensure quorum. The network stack is shared. Administrator access to a cluster offers visibility into the cluster's control plane, machine management APIs, and other components that contribute to the state of a cluster. Although the standalone model works well, some situations require an architecture where the control plane and data plane are decoupled. In those cases, the data plane is on a separate network domain with a dedicated physical hosting environment. The control plane is hosted by using high-level primitives such as deployments and stateful sets that are native to Kubernetes. The control plane is treated as any other workload. 1.2.2. 
Benefits of hosted control planes With hosted control planes for OpenShift Container Platform, you can pave the way for a true hybrid-cloud approach and enjoy several other benefits. The security boundaries between management and workloads are stronger because the control plane is decoupled and hosted on a dedicated hosting service cluster. As a result, you are less likely to leak credentials for clusters to other users. Because infrastructure secret account management is also decoupled, cluster infrastructure administrators cannot accidentally delete control plane infrastructure. With hosted control planes, you can run many control planes on fewer nodes. As a result, clusters are more affordable. Because the control planes consist of pods that are launched on OpenShift Container Platform, control planes start quickly. The same principles apply to control planes and workloads, such as monitoring, logging, and auto-scaling. From an infrastructure perspective, you can push registries, HAProxy, cluster monitoring, storage nodes, and other infrastructure components to the tenant's cloud provider account, isolating usage to the tenant. From an operational perspective, multicluster management is more centralized, which results in fewer external factors that affect the cluster status and consistency. Site reliability engineers have a central place to debug issues and navigate to the cluster data plane, which can lead to shorter Time to Resolution (TTR) and greater productivity. 1.3. Differences between hosted control planes and OpenShift Container Platform Hosted control planes is a form factor of OpenShift Container Platform. Hosted clusters and the stand-alone OpenShift Container Platform clusters are configured and managed differently. See the following tables to understand the differences between OpenShift Container Platform and hosted control planes: 1.3.1. 
Cluster creation and lifecycle

OpenShift Container Platform: You install a standalone OpenShift Container Platform cluster by using the openshift-install binary or the Assisted Installer.
Hosted control planes: You install a hosted cluster by using the hypershift.openshift.io API resources, such as HostedCluster and NodePool, on an existing OpenShift Container Platform cluster.

1.3.2. Cluster configuration

OpenShift Container Platform: You configure cluster-scoped resources such as authentication, API server, and proxy by using the config.openshift.io API group.
Hosted control planes: You configure resources that impact the control plane in the HostedCluster resource.

1.3.3. etcd encryption

OpenShift Container Platform: You configure etcd encryption by using the APIServer resource with AES-GCM or AES-CBC. For more information, see "Enabling etcd encryption".
Hosted control planes: You configure etcd encryption by using the HostedCluster resource in the SecretEncryption field with AES-CBC or KMS for Amazon Web Services.

1.3.4. Operators and control plane

OpenShift Container Platform: A standalone OpenShift Container Platform cluster contains separate Operators for each control plane component.
Hosted control planes: A hosted cluster contains a single Operator named Control Plane Operator that runs in the hosted control plane namespace on the management cluster.

OpenShift Container Platform: etcd uses storage that is mounted on the control plane nodes. The etcd cluster Operator manages etcd.
Hosted control planes: etcd uses a persistent volume claim for storage and is managed by the Control Plane Operator.

OpenShift Container Platform: The Ingress Operator, network related Operators, and Operator Lifecycle Manager (OLM) run on the cluster.
Hosted control planes: The Ingress Operator, network related Operators, and Operator Lifecycle Manager (OLM) run in the hosted control plane namespace on the management cluster.

OpenShift Container Platform: The OAuth server runs inside the cluster and is exposed through a route in the cluster.
Hosted control planes: The OAuth server runs inside the control plane and is exposed through a route, node port, or load balancer on the management cluster.

1.3.5. Updates

OpenShift Container Platform: The Cluster Version Operator (CVO) orchestrates the update process and monitors the ClusterVersion resource. Administrators and OpenShift components can interact with the CVO through the ClusterVersion resource. The oc adm upgrade command results in a change to the ClusterVersion.Spec.DesiredUpdate field in the ClusterVersion resource.
Hosted control planes: The hosted control planes update results in a change to the .spec.release.image field in the HostedCluster and NodePools resources. Any changes to the ClusterVersion resource are ignored.

OpenShift Container Platform: After you update an OpenShift Container Platform cluster, both the control plane and compute machines are updated.
Hosted control planes: After you update the hosted cluster, only the control plane is updated. You perform node pool updates separately.

1.3.6. Machine configuration and management

OpenShift Container Platform: The MachineSets resource manages machines in the openshift-machine-api namespace.
Hosted control planes: The NodePool resource manages machines on the management cluster.

OpenShift Container Platform: A set of control plane machines is available.
Hosted control planes: A set of control plane machines does not exist.

OpenShift Container Platform: You enable a machine health check by using the MachineHealthCheck resource.
Hosted control planes: You enable a machine health check through the .spec.management.autoRepair field in the NodePool resource.

OpenShift Container Platform: You enable autoscaling by using the ClusterAutoscaler and MachineAutoscaler resources.
Hosted control planes: You enable autoscaling through the spec.autoScaling field in the NodePool resource.

OpenShift Container Platform: Machines and machine sets are exposed in the cluster.
Hosted control planes: Machines, machine sets, and machine deployments from the upstream Cluster CAPI Operator are used to manage machines but are not exposed to the user.

OpenShift Container Platform: All machine sets are upgraded automatically when you update the cluster.
Hosted control planes: You update your node pools independently from the hosted cluster updates.
OpenShift Container Platform: Only an in-place upgrade is supported in the cluster.
Hosted control planes: Both replace and in-place upgrades are supported in the hosted cluster.

OpenShift Container Platform: The Machine Config Operator manages configurations for machines.
Hosted control planes: The Machine Config Operator does not exist in hosted control planes.

OpenShift Container Platform: You configure machine Ignition by using the MachineConfig , KubeletConfig , and ContainerRuntimeConfig resources that are selected from a MachineConfigPool selector.
Hosted control planes: You configure the MachineConfig , KubeletConfig , and ContainerRuntimeConfig resources through the config map referenced in the spec.config field of the NodePool resource.

OpenShift Container Platform: The Machine Config Daemon (MCD) manages configuration changes and updates on each of the nodes.
Hosted control planes: For an in-place upgrade, the node pool controller creates a run-once pod that updates a machine based on your configuration.

OpenShift Container Platform: You can modify the machine configuration resources such as the SR-IOV Operator.
Hosted control planes: You cannot modify the machine configuration resources.

1.3.7. Networking

OpenShift Container Platform: The Kube API server communicates with nodes directly, because the Kube API server and nodes exist in the same Virtual Private Cloud (VPC).
Hosted control planes: The Kube API server communicates with nodes through Konnectivity. The Kube API server and nodes exist in a different Virtual Private Cloud (VPC).

OpenShift Container Platform: Nodes communicate with the Kube API server through the internal load balancer.
Hosted control planes: Nodes communicate with the Kube API server through an external load balancer or a node port.

1.3.8. Web console

OpenShift Container Platform: The web console shows the status of a control plane.
Hosted control planes: The web console does not show the status of a control plane.

OpenShift Container Platform: You can update your cluster by using the web console.
Hosted control planes: You cannot update the hosted cluster by using the web console.

OpenShift Container Platform: The web console displays the infrastructure resources such as machines.
Hosted control planes: The web console does not display the infrastructure resources.
OpenShift Container Platform: You can configure machines through the MachineConfig resource by using the web console.
Hosted control planes: You cannot configure machines by using the web console.

Additional resources Enabling etcd encryption 1.4. Relationship between hosted control planes, multicluster engine Operator, and RHACM You can configure hosted control planes by using the multicluster engine for Kubernetes Operator. The multicluster engine is an integral part of Red Hat Advanced Cluster Management (RHACM) and is enabled by default with RHACM. The multicluster engine Operator cluster lifecycle defines the process of creating, importing, managing, and destroying Kubernetes clusters across various infrastructure cloud providers, private clouds, and on-premises data centers. The multicluster engine Operator is the cluster lifecycle Operator that provides cluster management capabilities for OpenShift Container Platform and RHACM hub clusters. The multicluster engine Operator enhances cluster fleet management and supports OpenShift Container Platform cluster lifecycle management across clouds and data centers. Figure 1.1. Cluster life cycle and foundation You can use the multicluster engine Operator with OpenShift Container Platform as a standalone cluster manager or as part of a RHACM hub cluster. Tip A management cluster is also known as the hosting cluster. You can deploy OpenShift Container Platform clusters by using two different control plane configurations: standalone or hosted control planes. The standalone configuration uses dedicated virtual machines or physical machines to host the control plane. With hosted control planes for OpenShift Container Platform, you create control planes as pods on a management cluster without the need for dedicated virtual or physical machines for each control plane. Figure 1.2. RHACM and the multicluster engine Operator introduction diagram 1.5.
Versioning for hosted control planes With each major, minor, or patch version release of OpenShift Container Platform, two components of hosted control planes are released: The HyperShift Operator The hcp command-line interface (CLI) The HyperShift Operator manages the lifecycle of hosted clusters that are represented by the HostedCluster API resources. The HyperShift Operator is released with each OpenShift Container Platform release. The HyperShift Operator creates the supported-versions config map in the hypershift namespace. The config map contains the supported hosted cluster versions. You can host different versions of control planes on the same management cluster. Example supported-versions config map object

apiVersion: v1
data:
  supported-versions: '{"versions":["4.16"]}'
kind: ConfigMap
metadata:
  labels:
    hypershift.openshift.io/supported-versions: "true"
  name: supported-versions
  namespace: hypershift

You can use the hcp CLI to create hosted clusters. You can use the hypershift.openshift.io API resources, such as HostedCluster and NodePool , to create and manage OpenShift Container Platform clusters at scale. A HostedCluster resource contains the control plane and common data plane configuration. When you create a HostedCluster resource, you have a fully functional control plane with no attached nodes. A NodePool resource is a scalable set of worker nodes that is attached to a HostedCluster resource. The API version policy generally aligns with the policy for Kubernetes API versioning . Additional resources Configuring node tuning in a hosted cluster Advanced node tuning for hosted clusters by setting kernel boot parameters
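Putting the NodePool fields mentioned in this chapter together (autoRepair, the release image, replicas, upgrade type), a minimal manifest might look like the following sketch. Treat the exact apiVersion, names, and field layout as assumptions, and consult the hypershift.openshift.io API reference for the authoritative schema:

```yaml
apiVersion: hypershift.openshift.io/v1beta1
kind: NodePool
metadata:
  name: example-workers
  namespace: clusters
spec:
  clusterName: example           # the HostedCluster this pool attaches to
  replicas: 2                    # worker node count (omit if autoScaling is set instead)
  management:
    autoRepair: true             # machine health checking for the pool
    upgradeType: Replace         # hosted clusters support replace or in-place upgrades
  release:
    image: quay.io/openshift-release-dev/ocp-release:4.16.0-x86_64
  platform:
    type: AWS
```

Because node pools are versioned independently, the release image here can lag behind the HostedCluster control plane version within the supported skew.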
[ "apiVersion: v1 data: supported-versions: '{\"versions\":[\"4.16\"]}' kind: ConfigMap metadata: labels: hypershift.openshift.io/supported-versions: \"true\" name: supported-versions namespace: hypershift" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/hosted_control_planes/hosted-control-planes-overview
Chapter 7. Migration from previous versions of .NET
Chapter 7. Migration from previous versions of .NET 7.1. Migration from previous versions of .NET Microsoft provides instructions for migrating from most versions of .NET Core. If you are using a version of .NET that is no longer supported or want to migrate to a newer .NET version to expand functionality, see the following articles: Migrate from ASP.NET Core 7.0 to 8.0 Migrate from ASP.NET Core 6.0 to 7.0 Migrate from ASP.NET Core 5.0 to 6.0 Migrate from ASP.NET Core 3.1 to 5.0 Migrate from ASP.NET Core 3.0 to 3.1 Migrate from ASP.NET Core 2.2 to 3.0 Migrate from ASP.NET Core 2.1 to 2.2 Migrate from .NET Core 2.0 to 2.1 Migrate from ASP.NET to ASP.NET Core Migrating .NET Core projects from project.json Migrate from project.json to .csproj format Note If migrating from .NET Core 1.x to 2.0, see the first few related sections in Migrate from ASP.NET Core 1.x to 2.0 . These sections provide guidance that is appropriate for a .NET Core 1.x to 2.0 migration path. 7.2. Porting from .NET Framework Refer to the following Microsoft articles when migrating from .NET Framework: For general guidelines, see Porting to .NET Core from .NET Framework . For porting libraries, see Porting to .NET Core - Libraries . For migrating to ASP.NET Core, see Migrating to ASP.NET Core . Several technologies and APIs present in the .NET Framework are not available in .NET Core and .NET. If your application or library requires these APIs, consider finding alternatives or continue using the .NET Framework. .NET Core and .NET do not support the following technologies and APIs: Desktop applications, for example, Windows Forms and Windows Presentation Foundation (WPF) Windows Communication Foundation (WCF) servers (WCF clients are supported) .NET remoting Additionally, several .NET APIs can only be used in Microsoft Windows environments. 
The following list shows examples of these Windows-specific APIs: Microsoft.Win32.Registry System.AppDomains System.Security.Principal.Windows Important Several APIs that are not supported in the default version of .NET may be available from the Microsoft.Windows.Compatibility NuGet package. Be careful when using this NuGet package. Some of the APIs provided (such as Microsoft.Win32.Registry ) only work on Windows, making your application incompatible with Red Hat Enterprise Linux.
null
https://docs.redhat.com/en/documentation/net/8.0/html/getting_started_with_.net_on_rhel_8/assembly_dotnet-migration_getting-started-with-dotnet-on-rhel-8
function::is_sig_blocked
function::is_sig_blocked Name function::is_sig_blocked - Returns 1 if the signal is currently blocked, or 0 if it is not Synopsis Arguments task address of the task_struct to query. sig the signal number to test.
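A hedged usage sketch follows. It assumes the signal.send probe point, which exposes the destination task_struct as task and the signal number as sig (together with pid_name and sig_pid for the receiving process); the printed message is illustrative only:

```
# Report signals that are sent to tasks currently blocking them.
probe signal.send {
  if (is_sig_blocked(task, sig))
    printf("%s (pid %d) currently blocks signal %d\n", pid_name, sig_pid, sig)
}
```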
[ "is_sig_blocked:long(task:long,sig:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-is-sig-blocked
Chapter 2. The Go compiler
Chapter 2. The Go compiler The Go compiler is a build tool and dependency manager for the Go programming language. It offers error checking and optimization of your code. 2.1. Prerequisites Go Toolset is installed. For more information, see Installing Go Toolset . 2.2. Setting up a Go workspace To compile a Go program, you need to set up a Go workspace. Procedure Create a workspace directory as a subdirectory of $GOPATH/src . A common choice is $HOME/go . Place your source files into your workspace directory. Set the location of your workspace directory as an environment variable in the $HOME/.bashrc file by running: Replace < workspace_dir > with the name of your workspace directory. Additional resources The official Go workspaces documentation . 2.3. Compiling a Go program You can compile your Go program using the Go compiler. The Go compiler creates an executable binary file as a result of compiling. Prerequisites A Go workspace set up with configured modules. For information on how to set up a workspace, see Setting up a Go workspace . Procedure In your project directory, run: On Red Hat Enterprise Linux 8: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. On Red Hat Enterprise Linux 9: Replace < output_file > with the desired name of your output file and < go_main_package > with the name of your main package. 2.4. Running a Go program The Go compiler creates an executable binary file as a result of compiling. Complete the following steps to execute this file and run your program. Prerequisites Your program is compiled. For more information on how to compile your program, see Compiling a Go program . Procedure To run your program, run in the directory containing the executable file: Replace < file_name > with the name of your executable file. 2.5. 
Installing compiled Go projects You can install already compiled Go projects to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to the corresponding directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace with configured modules. For more information, see Setting up a Go workspace . Procedure To install a Go project, run: On Red Hat Enterprise Linux 8: Replace < go_project > with the name of the Go project you want to install. On Red Hat Enterprise Linux 9: Replace < go_project > with the name of the Go project you want to install. 2.6. Downloading and installing Go projects You can download and install third-party Go projects from online resources to use their executable files and libraries in further Go projects. After installation, the executable files and libraries of the project are copied to the corresponding directories in the Go workspace. Its dependencies are installed as well. Prerequisites A Go workspace. For more information, see Setting up a Go workspace . Procedure To download and install a Go project, run: On Red Hat Enterprise Linux 8: Replace < third_party_go_project > with the name of the project you want to download. On Red Hat Enterprise Linux 9: Replace < third_party_go_project > with the name of the project you want to download. For information on possible values of third-party projects, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: 2.7. Additional resources For more information on the Go compiler, see the official Go documentation . To display the help index included in Go Toolset, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: To display documentation for specific Go packages, run: On Red Hat Enterprise Linux 8: On Red Hat Enterprise Linux 9: See Go packages for an overview of Go packages.
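The compile-and-run steps described above can be exercised with a minimal main package. The following sketch is illustrative (the package contents and message are not part of the Go Toolset documentation); it is suitable as < go_main_package > for go build and produces an executable that prints one line:

```go
// main.go — a minimal main package, usable as <go_main_package> with
// `go build -o <output_file> .` and then run as ./<output_file>.
package main

import "fmt"

// greeting returns the message the program prints; kept in a separate
// function so the behavior is easy to test.
func greeting() string {
	return "Hello from Go Toolset"
}

func main() {
	fmt.Println(greeting())
}
```

Running the built binary prints the greeting once and exits with status 0.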
[ "echo 'export GOPATH=< workspace_dir >' >> $HOME/.bashrc source $HOME/.bashrc", "go build -o < output_file > < go_main_package >", "go build -o < output_file > < go_main_package >", "./< file_name >", "go install < go_project >", "go install < go_project >", "go install < third_party_go_project >", "go install < third_party_go_project >", "go help importpath", "go help importpath", "go help", "go help", "go doc < package_name >", "go doc < package_name >" ]
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_go_1.22_toolset/assembly_the-go-compiler_using-go-toolset
Chapter 6. Content distribution with Red Hat Quay
Chapter 6. Content distribution with Red Hat Quay Content distribution features in Red Hat Quay include: Repository mirroring Geo-replication Deployment in air-gapped environments 6.1. Repository mirroring Red Hat Quay repository mirroring lets you mirror images from external container registries, or another local registry, into your Red Hat Quay cluster. Using repository mirroring, you can synchronize images to Red Hat Quay based on repository names and tags. From your Red Hat Quay cluster with repository mirroring enabled, you can perform the following: Choose a repository from an external registry to mirror Add credentials to access the external registry Identify specific container image repository names and tags to sync Set intervals at which a repository is synced Check the current state of synchronization To use the mirroring functionality, you need to perform the following actions: Enable repository mirroring in the Red Hat Quay configuration file Run a repository mirroring worker Create mirrored repositories All repository mirroring configurations can be performed using the configuration tool UI or by the Red Hat Quay API. 6.1.1. Using repository mirroring The following list shows features and limitations of Red Hat Quay repository mirroring: With repository mirroring, you can mirror an entire repository or selectively limit which images are synced. Filters can be based on a comma-separated list of tags, a range of tags, or other means of identifying tags through Unix shell-style wildcards. For more information, see the documentation for wildcards . When a repository is set as mirrored, you cannot manually add other images to that repository. Because the mirrored repository is based on the repository and tags you set, it will hold only the content represented by the repository and tag pair. For example if you change the tag so that some images in the repository no longer match, those images will be deleted. 
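As a rough illustration of how Unix shell-style wildcard tag filters behave, the following Go sketch matches tags against glob patterns. This only demonstrates the wildcard semantics; the helper names are hypothetical and Quay's own filter implementation may differ:

```go
package main

import (
	"fmt"
	"path"
)

// matchTags returns the tags matched by any of the shell-style patterns,
// for example "v3.8.*" selects every v3.8.x tag. Hypothetical helper,
// shown only to illustrate wildcard matching semantics.
func matchTags(patterns, tags []string) []string {
	var matched []string
	for _, tag := range tags {
		for _, pat := range patterns {
			if ok, err := path.Match(pat, tag); err == nil && ok {
				matched = append(matched, tag)
				break
			}
		}
	}
	return matched
}

func main() {
	tags := []string{"v3.8.0", "v3.8.1", "v3.9.0", "latest"}
	// Select the v3.8.x series plus the literal "latest" tag.
	fmt.Println(matchTags([]string{"v3.8.*", "latest"}, tags))
}
```

A pattern with no wildcard characters, such as "latest", matches only the identical tag name.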
Only the designated robot can push images to a mirrored repository, superseding any role-based access control permissions set on the repository. Mirroring can be configured to roll back on failure, or to run on a best-effort basis. With a mirrored repository, a user with read permissions can pull images from the repository but cannot push images to the repository. Changing settings on your mirrored repository can be performed in the Red Hat Quay user interface, using the Repositories Mirrors tab for the mirrored repository you create. Images are synced at set intervals, but can also be synced on demand. 6.1.2. Repository mirroring recommendations Best practices for repository mirroring include the following: Repository mirroring pods can run on any node. This means that you can run mirroring on nodes where Red Hat Quay is already running. Repository mirroring is scheduled in the database and runs in batches. As a result, repository workers check each repository mirror configuration file and read when the next sync needs to occur. More mirror workers means more repositories can be mirrored at the same time. For example, running 10 mirror workers means that a user can run 10 mirroring operations in parallel. If a user only has 2 workers with 10 mirror configurations, only 2 operations can be performed. The optimal number of mirroring pods depends on the following conditions: The total number of repositories to be mirrored The number of images and tags in the repositories and the frequency of changes Parallel batching For example, if a user is mirroring a repository that has 100 tags, the mirror will be completed by one worker. Users must consider how many repositories they want to mirror in parallel, and base the number of workers around that. Multiple tags in the same repository cannot be mirrored in parallel. 6.1.3. 
Event notifications for mirroring There are three notification events for repository mirroring: Repository Mirror Started Repository Mirror Success Repository Mirror Unsuccessful The events can be configured inside of the Settings tab for each repository, and all existing notification methods such as email, Slack, Quay UI, and webhooks are supported. 6.1.4. Mirroring API You can use the Red Hat Quay API to configure repository mirroring: Mirroring API More information is available in the Red Hat Quay API Guide 6.2. Geo-replication Geo-replication allows multiple, geographically distributed Red Hat Quay deployments to work as a single registry from the perspective of a client or user. It significantly improves push and pull performance in a globally-distributed Red Hat Quay setup. Image data is asynchronously replicated in the background with transparent failover and redirect for clients. Deployments of Red Hat Quay with geo-replication are supported on standalone and Operator deployments. 6.2.1. Geo-replication features When geo-replication is configured, container image pushes will be written to the preferred storage engine for that Red Hat Quay instance. This is typically the nearest storage backend within the region. After the initial push, image data will be replicated in the background to other storage engines. The list of replication locations is configurable and those can be different storage backends. An image pull will always use the closest available storage engine, to maximize pull performance. If replication has not been completed yet, the pull will use the source storage backend instead. 6.2.2. Geo-replication requirements and constraints In geo-replicated setups, Red Hat Quay requires that all regions are able to read and write to all other regions' object storage. Object storage must be geographically accessible by all other regions. 
If the object storage system of one geo-replicating site fails, that site's Red Hat Quay deployment must be shut down so that clients are redirected to the remaining site with intact storage systems by a global load balancer. Otherwise, clients will experience pull and push failures. Red Hat Quay has no internal awareness of the health or availability of the connected object storage system. Users must configure a global load balancer (LB) to monitor the health of the distributed system and to route traffic to different sites based on their storage status. To check the status of your geo-replication deployment, you must use the /health/endtoend endpoint, which is used for global health monitoring. You must configure the redirect manually using the /health/endtoend endpoint. The /health/instance endpoint only checks local instance health. If the object storage system of one site becomes unavailable, there will be no automatic redirect to the remaining storage system, or systems, of the remaining site, or sites. Geo-replication is asynchronous. The permanent loss of a site incurs the loss of the data that has been saved in that site's object storage system but has not yet been replicated to the remaining sites at the time of failure. A single database, and therefore all metadata and Red Hat Quay configuration, is shared across all regions. Geo-replication does not replicate the database. In the event of an outage, Red Hat Quay with geo-replication enabled will not fail over to another database. A single Redis cache is shared across the entire Red Hat Quay setup and needs to be accessible by all Red Hat Quay pods. The exact same configuration should be used across all regions, with the exception of the storage backend, which can be configured explicitly using the QUAY_DISTRIBUTED_STORAGE_PREFERENCE environment variable. Geo-replication requires object storage in each region. It does not work with local storage. 
Each region must be able to access every storage engine in each region, which requires a network path. Alternatively, the storage proxy option can be used. The entire storage backend, for example, all blobs, is replicated. Repository mirroring, by contrast, can be limited to a repository, or an image. All Red Hat Quay instances must share the same entrypoint, typically through a load balancer. All Red Hat Quay instances must have the same set of superusers, as they are defined inside the common configuration file. Geo-replication requires your Clair configuration to be set to unmanaged . An unmanaged Clair database allows the Red Hat Quay Operator to work in a geo-replicated environment, where multiple instances of the Red Hat Quay Operator must communicate with the same database. For more information, see Advanced Clair configuration . Geo-replication requires SSL/TLS certificates and keys. For more information, see Proof of concept deployment using SSL/TLS certificates . If the above requirements cannot be met, you should instead use two or more distinct Red Hat Quay deployments and take advantage of repository mirroring functions. 6.2.3. Geo-replication using standalone Red Hat Quay In the following image, Red Hat Quay is running standalone in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Red Hat Quay instance, and will then be replicated, in the background, to the other storage engines. Note If Clair fails in one cluster, for example, the US cluster, US users would not see vulnerability reports in Red Hat Quay for the second cluster (EU). This is because all Clair instances have the same state. 
When Clair fails, it is usually because of a problem within the cluster. Geo-replication architecture 6.2.4. Geo-replication using the Red Hat Quay Operator In the example shown above, the Red Hat Quay Operator is deployed in two separate regions, with a common database and a common Redis instance. Localized image storage is provided in each region and image pulls are served from the closest available storage engine. Container image pushes are written to the preferred storage engine for the Quay instance, and will then be replicated, in the background, to the other storage engines. Because the Operator now manages the Clair security scanner and its database separately, geo-replication setups can be leveraged so that they do not manage the Clair database. Instead, an external shared database would be used. Red Hat Quay and Clair support several providers and vendors of PostgreSQL, which can be found in the Red Hat Quay 3.x test matrix . Additionally, the Operator also supports custom Clair configurations that can be injected into the deployment, which allows users to configure Clair with the connection credentials for the external database. 6.2.5. Mixed storage for geo-replication Red Hat Quay geo-replication supports the use of different and multiple replication targets, for example, using AWS S3 storage on public cloud and using Ceph storage on premise. This complicates the key requirement of granting access to all storage backends from all Red Hat Quay pods and cluster nodes. As a result, it is recommended that you use the following: A VPN to prevent visibility of the internal storage, or A token pair that only allows access to the specified bucket used by Red Hat Quay This results in the public cloud instance of Red Hat Quay having access to on-premise storage, but the network will be encrypted, protected, and will use ACLs, thereby meeting security requirements. 
If you cannot implement these security measures, it might be preferable to deploy two distinct Red Hat Quay registries and to use repository mirroring as an alternative to geo-replication. 6.3. Repository mirroring compared to geo-replication Red Hat Quay geo-replication mirrors the entire image storage backend data between 2 or more different storage backends while the database is shared, for example, one Red Hat Quay registry with two different blob storage endpoints. The primary use cases for geo-replication include the following: Speeding up access to the binary blobs for geographically dispersed setups Guaranteeing that the image content is the same across regions Repository mirroring synchronizes selected repositories, or subsets of repositories, from one registry to another. The registries are distinct, with each registry having a separate database and separate image storage. The primary use cases for mirroring include the following: Independent registry deployments in different data centers or regions, where a certain subset of the overall content is supposed to be shared across the data centers and regions Automatic synchronization or mirroring of selected (allowlisted) upstream repositories from external registries into a local Red Hat Quay deployment Note Repository mirroring and geo-replication can be used simultaneously. Table 6.1. Red Hat Quay Repository mirroring and geo-replication comparison Feature / Capability Geo-replication Repository mirroring What is the feature designed to do? A shared, global registry Distinct, different registries What happens if replication or mirroring has not been completed yet? The remote copy is used (slower) No image is served Is access to all storage backends in both regions required? Yes (all Red Hat Quay nodes) No (distinct storage) Can users push images from both sites to the same repository? Yes No Is all registry content and configuration identical across all regions (shared database)? 
Yes No Can users select individual namespaces or repositories to be mirrored? No Yes Can users apply filters to synchronization rules? No Yes Are individual / different role-based access control configurations allowed in each region? No Yes 6.4. Air-gapped or disconnected deployments In the following diagram, the upper deployment shows Red Hat Quay and Clair connected to the internet, with an air-gapped OpenShift Container Platform cluster accessing the Red Hat Quay registry through an explicit, allowlisted hole in the firewall. The lower deployment shows Red Hat Quay and Clair running inside of the firewall, with image and CVE data transferred to the target system using offline media. The data is exported from a separate Red Hat Quay and Clair deployment that is connected to the internet. The following diagram shows how Red Hat Quay and Clair can be deployed in air-gapped or disconnected environments: Red Hat Quay and Clair in disconnected, or air-gapped, environments
null
https://docs.redhat.com/en/documentation/red_hat_quay/3/html/red_hat_quay_architecture/content-distrib-intro
B.3.2. Web Server Configuration
B.3.2. Web Server Configuration The following procedure configures an Apache HTTP server. Ensure that the Apache HTTP server is installed on each node in the cluster. You also need the wget tool installed on the cluster to be able to check the status of Apache. On each node, execute the following command. In order for the Apache resource agent to get the status of Apache, ensure that the following text is present in the /etc/httpd/conf/httpd.conf file on each node in the cluster, and ensure that it has not been commented out. If this text is not already present, add the text to the end of the file. Create a web page for Apache to serve up. On one node in the cluster, mount the file system you created in Section B.3.1, "Configuring an LVM Volume with an ext4 File System" , create the file index.html on that file system, then unmount the file system.
[ "yum install -y httpd wget", "<Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from 127.0.0.1 </Location>", "mount /dev/my_vg/my_lv /var/www/ mkdir /var/www/html mkdir /var/www/cgi-bin mkdir /var/www/error restorecon -R /var/www cat <<-END >/var/www/html/index.html <html> <body>Hello</body> </html> END umount /var/www" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-webserversetup-HAAA
Release notes
Release notes OpenShift Container Platform 4.18 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team
[ "conditionalGathererEndpoint: https://console.redhat.com/api/gathering/v2/%s/gathering_rules", "oc patch featuregates cluster -p '{\"spec\": {\"featureSet\": \"TechPreviewNoUpgrade\"}}' --type=merge", "Warning: unknown field \"metadata\" You don't have any projects. You can try to create a new project, by running oc new-project <projectname>", "oc delete pod -l app=ovnkube-node -n openshift-ovn-kubernetes", "oc adm release info 4.18.4 --pullspecs", "oc adm release info 4.18.3 --pullspecs", "oc adm release info 4.18.2 --pullspecs", "oc adm release info 4.18.1 --pullspecs" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/release_notes/index
Chapter 8. Monitoring application health by using health checks
Chapter 8. Monitoring application health by using health checks In software systems, components can become unhealthy due to transient issues such as temporary connectivity loss, configuration errors, or problems with external dependencies. OpenShift Container Platform applications have a number of options to detect and handle unhealthy containers. 8.1. Understanding health checks A health check periodically performs diagnostics on a running container using any combination of the readiness, liveness, and startup health checks. You can include one or more probes in the specification for the pod that contains the container which you want to perform the health checks. Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. Readiness probe A readiness probe determines if a container is ready to accept service requests. If the readiness probe fails for a container, the kubelet removes the pod from the list of available service endpoints. After a failure, the probe continues to examine the pod. If the pod becomes available, the kubelet adds the pod to the list of available service endpoints. Liveness health check A liveness probe determines if a container is still running. If the liveness probe fails due to a condition such as a deadlock, the kubelet kills the container. The pod then responds based on its restart policy. For example, a liveness probe on a pod with a restartPolicy of Always or OnFailure kills and restarts the container. Startup probe A startup probe indicates whether the application within a container is started. All other probes are disabled until the startup succeeds. If the startup probe does not succeed within a specified time period, the kubelet kills the container, and the container is subject to the pod restartPolicy . 
Some applications can require additional startup time on their first initialization. You can use a startup probe with a liveness or readiness probe to delay that probe long enough to handle lengthy start-up time using the failureThreshold and periodSeconds parameters. For example, you can add a startup probe, with a failureThreshold of 30 failures and a periodSeconds of 10 seconds (30 * 10s = 300s) for a maximum of 5 minutes, to a liveness probe. After the startup probe succeeds the first time, the liveness probe takes over. You can configure liveness, readiness, and startup probes with any of the following types of tests: HTTP GET : When using an HTTP GET test, the test determines the healthiness of the container by using a web hook. The test is successful if the HTTP response code is between 200 and 399 . You can use an HTTP GET test with applications that return HTTP status codes when completely initialized. Container Command: When using a container command test, the probe executes a command inside the container. The probe is successful if the test exits with a 0 status. TCP socket: When using a TCP socket test, the probe attempts to open a socket to the container. The container is only considered healthy if the probe can establish a connection. You can use a TCP socket test with applications that do not start listening until initialization is complete. You can configure several fields to control the behavior of a probe: initialDelaySeconds : The time, in seconds, after the container starts before the probe can be scheduled. The default is 0. periodSeconds : The delay, in seconds, between performing probes. The default is 10 . This value must be greater than timeoutSeconds . timeoutSeconds : The number of seconds of inactivity after which the probe times out and the container is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 
successThreshold : The number of times that the probe must report success after a failure to reset the container status to successful. The value must be 1 for a liveness probe. The default is 1 . failureThreshold : The number of times that the probe is allowed to fail. The default is 3. After the specified attempts: for a liveness probe, the container is restarted for a readiness probe, the pod is marked Unready for a startup probe, the container is killed and is subject to the pod's restartPolicy Example probes The following are samples of different probes as they would appear in an object specification. Sample readiness probe with a container command readiness probe in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy ... 1 The container name. 2 The container image to deploy. 3 A readiness probe. 4 A container command test. 5 The commands to execute on the container. Sample startup probe and liveness probe with HTTP GET tests in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11 ... 1 The container name. 2 Specify the container image to deploy. 3 A liveness probe. 4 An HTTP GET test. 5 The internet scheme: HTTP or HTTPS . The default value is HTTP . 6 The port on which the container is listening. 7 A startup probe. 8 An HTTP GET test. 9 The port on which the container is listening. 10 The number of times to try the probe after a failure. 11 The number of seconds to perform the probe. 
Sample liveness probe with a container command test that uses a timeout in a pod spec apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application ... spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8 ... 1 The container name. 2 Specify the container image to deploy. 3 The liveness probe. 4 The type of probe, here a container command probe. 5 The command line to execute inside the container. 6 How often in seconds to perform the probe. 7 The number of consecutive successes needed to show success after a failure. 8 The number of times to try the probe after a failure. Sample readiness probe and liveness probe with a TCP socket test in a deployment kind: Deployment apiVersion: apps/v1 ... spec: ... template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 ... 1 The readiness probe. 2 The liveness probe. 8.2. Configuring health checks using the CLI To configure readiness, liveness, and startup probes, add one or more probes to the specification for the pod that contains the container which you want to perform the health checks Note If you want to add or edit health checks in an existing pod, you must edit the pod DeploymentConfig object or use the Developer perspective in the web console. You cannot use the CLI to add or edit health checks for an existing pod. 
Procedure To add probes for a container: Create a Pod object to add one or more probes: apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19 1 Specify the container name. 2 Specify the container image to deploy. 3 Optional: Create a Liveness probe. 4 Specify a test to perform, here a TCP Socket test. 5 Specify the port on which the container is listening. 6 Specify the time, in seconds, after the container starts before the probe can be scheduled. 7 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 8 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 9 Optional: Create a Readiness probe. 10 Specify the type of test to perform, here an HTTP test. 11 Specify a host IP address. When host is not defined, the PodIP is used. 12 Specify HTTP or HTTPS . When scheme is not defined, the HTTP scheme is used. 13 Specify the port on which the container is listening. 14 Optional: Create a Startup probe. 15 Specify the type of test to perform, here a Container Execution probe. 16 Specify the commands to execute on the container. 17 Specify the number of times to try the probe after a failure. 18 Specify the number of seconds to perform the probe. The default is 10 . This value must be greater than timeoutSeconds . 19 Specify the number of seconds of inactivity after which the probe is assumed to have failed. The default is 1 . This value must be lower than periodSeconds . 
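The timing constraints called out in the callouts above can be checked mechanically. The following Go sketch is hypothetical (the Probe type simply mirrors the pod-spec field names); it validates that timeoutSeconds stays below periodSeconds and computes the longest window a probe keeps retrying, failureThreshold * periodSeconds:

```go
package main

import "fmt"

// Probe mirrors the timing fields from the pod spec above.
type Probe struct {
	InitialDelaySeconds int
	PeriodSeconds       int
	TimeoutSeconds      int
	FailureThreshold    int
}

// validate enforces the documented constraint that timeoutSeconds must be
// lower than periodSeconds.
func (p Probe) validate() error {
	if p.TimeoutSeconds >= p.PeriodSeconds {
		return fmt.Errorf("timeoutSeconds (%d) must be lower than periodSeconds (%d)",
			p.TimeoutSeconds, p.PeriodSeconds)
	}
	return nil
}

// maxFailureWindow returns how long, in seconds, a probe keeps retrying
// before failureThreshold is exhausted: failureThreshold * periodSeconds.
func (p Probe) maxFailureWindow() int {
	return p.FailureThreshold * p.PeriodSeconds
}

func main() {
	// The startup probe from the sample spec: 30 failures x 20s period.
	startup := Probe{PeriodSeconds: 20, TimeoutSeconds: 10, FailureThreshold: 30}
	fmt.Println(startup.validate(), startup.maxFailureWindow())
}
```

With the sample values, the startup probe allows the container up to 600 seconds (10 minutes) to start before it is killed.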
Note If the initialDelaySeconds value is lower than the periodSeconds value, the first Readiness probe occurs at some point between the two periods due to an issue with timers. The timeoutSeconds value must be lower than the periodSeconds value. Create the Pod object: USD oc create -f <file-name>.yaml Verify the state of the health check pod: USD oc describe pod health-check Example output Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image "k8s.gcr.io/liveness" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image "k8s.gcr.io/liveness" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container The following is the output of a failed probe that restarted a container: Sample Liveness check output with unhealthy container USD oc describe pod pod1 Example output .... 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image "k8s.gcr.io/liveness" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image "k8s.gcr.io/liveness" in 244.116568ms 8.3. Monitoring application health using the Developer perspective You can use the Developer perspective to add three types of health probes to your container to ensure that your application is healthy: Use the Readiness probe to check if the container is ready to handle requests. Use the Liveness probe to check if the container is running. Use the Startup probe to check if the application within the container has started. You can add health checks either while creating and deploying an application, or after you have deployed an application. 8.4. Adding health checks using the Developer perspective You can use the Topology view to add health checks to your deployed application. 
Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. Procedure In the Topology view, click on the application node to see the side panel. If the container does not have health checks added to ensure the smooth running of your application, a Health Checks notification is displayed with a link to add health checks. In the displayed notification, click the Add Health Checks link. Alternatively, you can also click the Actions drop-down list and select Add Health Checks . Note that if the container already has health checks, you will see the Edit Health Checks option instead of the add option. In the Add Health Checks form, if you have deployed multiple containers, use the Container drop-down list to ensure that the appropriate container is selected. Click the required health probe links to add them to the container. Default data for the health checks is prepopulated. You can add the probes with the default data or further customize the values and then add them. For example, to add a Readiness probe that checks if your container is ready to handle requests: Click Add Readiness Probe to see a form containing the parameters for the probe. Click the Type drop-down list to select the request type you want to add. For example, in this case, select Container Command to select the command that will be executed inside the container. In the Command field, add an argument cat . Similarly, you can add multiple arguments for the check; for example, add another argument /tmp/healthy . Retain or modify the default values for the other parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Readiness Probe Added message is displayed. Click Add to add the health check.
You are redirected to the Topology view and the container is restarted. In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Readiness probe - Exec Command cat /tmp/healthy has been added to the container. 8.5. Editing health checks using the Developer perspective You can use the Topology view to edit health checks added to your application, modify them, or add more health checks. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, right-click your application and select Edit Health Checks . Alternatively, in the side panel, click the Actions drop-down list and select Edit Health Checks . In the Edit Health Checks page: To remove a previously added health probe, click the minus sign adjoining it. To edit the parameters of an existing probe: Click the Edit Probe link next to a previously added probe to see the parameters for the probe. Modify the parameters as required, and click the check mark to save your changes. To add a new health probe, in addition to existing health checks, click the add probe links. For example, to add a Liveness probe that checks if your container is running: Click Add Liveness Probe to see a form containing the parameters for the probe. Edit the probe parameters as required. Note The Timeout value must be lower than the Period value. The Timeout default value is 1 . The Period default value is 10 . Click the check mark at the bottom of the form. The Liveness Probe Added message is displayed. Click Save to save your modifications and add the additional probes to your container. You are redirected to the Topology view.
In the side panel, verify that the probes have been added by clicking on the deployed pod under the Pods section. In the Pod Details page, click the listed container in the Containers section. In the Container Details page, verify that the Liveness probe - HTTP Get 10.129.4.65:8080/ has been added to the container, in addition to the earlier existing probes. 8.6. Monitoring health check failures using the Developer perspective In case an application health check fails, you can use the Topology view to monitor these health check violations. Prerequisites: You have switched to the Developer perspective in the web console. You have created and deployed an application on OpenShift Container Platform using the Developer perspective. You have added health checks to your application. Procedure In the Topology view, click on the application node to see the side panel. Click the Monitoring tab to see the health check failures in the Events (Warning) section. Click the down arrow adjoining Events (Warning) to see the details of the health check failure. Additional resources For details on switching to the Developer perspective in the web console, see About the Developer perspective . For details on adding health checks while creating and deploying an application, see Advanced Options in the Creating applications using the Developer perspective section.
[ "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 readinessProbe: 3 exec: 4 command: 5 - cat - /tmp/healthy", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 httpGet: 4 scheme: HTTPS 5 path: /healthz port: 8080 6 httpHeaders: - name: X-Custom-Header value: Awesome startupProbe: 7 httpGet: 8 path: /healthz port: 8080 9 failureThreshold: 30 10 periodSeconds: 10 11", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: goproxy-app 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 exec: 4 command: 5 - /bin/bash - '-c' - timeout 60 /opt/eap/bin/livenessProbe.sh periodSeconds: 10 6 successThreshold: 1 7 failureThreshold: 3 8", "kind: Deployment apiVersion: apps/v1 spec: template: spec: containers: - resources: {} readinessProbe: 1 tcpSocket: port: 8080 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3 terminationMessagePath: /dev/termination-log name: ruby-ex livenessProbe: 2 tcpSocket: port: 8080 initialDelaySeconds: 15 timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: k8s.gcr.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod health-check", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s 
default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"k8s.gcr.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"k8s.gcr.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"k8s.gcr.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"k8s.gcr.io/liveness\" in 244.116568ms" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/applications/application-health
Chapter 4. Disabling the resource optimization service
Chapter 4. Disabling the resource optimization service 4.1. Removing resource optimization files and data Using Ansible to disable the resource optimization service Perform the following steps on each system to disable and uninstall the resource optimization service. Procedure Download the Ansible Playbook with the following command: Run the Ansible Playbook using the following command: Running the disable playbook does not stop or remove the Performance Co-Pilot (PCP) toolkit. Note that PCP may support multiple applications. If you are using PCP exclusively for the resource optimization service, and want to remove PCP as well, there are a couple of options. You can stop and disable the pmlogger and pmcd services, or remove PCP completely by uninstalling the pcp package from the system. Manually disabling the resource optimization service without the use of Ansible The use of Ansible is recommended to expedite the uninstallation process. If you choose not to use Ansible, use the manual procedure that follows: Procedure Disable resource optimization service metrics collection by removing this line from /etc/pcp/pmlogger/control.d/local Restart PCP so that resource optimization service metrics collection is effectively stopped: Remove the resource optimization service configuration file Remove the resource optimization data from the system If you are not using PCP for anything else, you can remove it from your system 4.2. Disabling kernel pressure stall information (PSI) Procedure Edit the /etc/default/grub file and remove psi=1 from the GRUB_CMDLINE_LINUX line. Regenerate the grub configuration file. [user]USD sudo grub2-mkconfig -o /boot/grub2/grub.cfg Reboot the system. Verification step When PSI is disabled, /proc/pressure does not exist.
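The PSI verification step above can be scripted; a minimal sketch using only the standard library (the check simply mirrors the "/proc/pressure does not exist" statement, and is not part of the Red Hat tooling):

```python
import os

def psi_enabled():
    # When PSI is active (psi=1 on the kernel command line), the kernel
    # exposes pressure stall files under /proc/pressure; after removing
    # psi=1 and rebooting, the directory is absent.
    return os.path.exists("/proc/pressure")

print("PSI enabled" if psi_enabled() else "PSI disabled")
```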
[ "curl -O https://raw.githubusercontent.com/RedHatInsights/ros-backend/v1.0/ansible-playbooks/ros_disable.yml", "ansible-playbook -c local ros_disable.yml", "LOCALHOSTNAME n y PCP_LOG_DIR/pmlogger/ros -r -T24h10m -c config.ros -v 100Mb", "sudo systemctl restart pmcd pmlogger", "sudo rm /var/lib/pcp/config/pmlogger/config.ros", "sudo rm -rf /var/log/pcp/pmlogger/ros", "sudo yum remove pcp", "[user]USD sudo grub2-mkconfig -o /boot/grub2/grub.cfg" ]
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_monitoring_rhel_resource_optimization_with_insights_for_red_hat_enterprise_linux_with_fedramp/assembly-ros-disable
Chapter 1. Architecture of OpenShift AI
Chapter 1. Architecture of OpenShift AI Red Hat OpenShift AI is a fully Red Hat managed cloud service that is available as an add-on to Red Hat OpenShift Dedicated and to Red Hat OpenShift Service on Amazon Web Services (ROSA Classic). OpenShift AI integrates the following components and services: At the service layer: OpenShift AI dashboard A customer-facing dashboard that shows available and installed applications for the OpenShift AI environment as well as learning resources such as tutorials, quick start examples, and documentation. You can also access administrative functionality from the dashboard, such as user management, cluster settings, accelerator profiles, and notebook image settings. In addition, data scientists can create their own projects from the dashboard. This enables them to organize their data science work into a single project. Model serving Data scientists can deploy trained machine-learning models to serve intelligent applications in production. After deployment, applications can send requests to the model using its deployed API endpoint. Data science pipelines Data scientists can build portable machine learning (ML) workflows with data science pipelines 2.0, using Docker containers. With data science pipelines, data scientists can automate workflows as they develop their data science models. Jupyter (Red Hat managed) A Red Hat managed application that allows data scientists to configure their own notebook server environment and develop machine learning models in JupyterLab. Distributed workloads Data scientists can use multiple nodes in parallel to train machine-learning models or process data more quickly. This approach significantly reduces the task completion time, and enables the use of larger datasets and more complex models. At the management layer: The Red Hat OpenShift AI Operator A meta-operator that deploys and maintains all components and sub-operators that are part of OpenShift AI. 
Monitoring services Alertmanager, OpenShift Telemetry, and Prometheus work together to gather metrics from OpenShift AI and organize and display those metrics in useful ways for monitoring and billing purposes. Alerts from Alertmanager are sent to PagerDuty, responsible for notifying Red Hat of any issues with your managed cloud service. When you install the Red Hat OpenShift AI Add-on in the Cluster Manager, the following new projects are created: The redhat-ods-operator project contains the Red Hat OpenShift AI Operator. The redhat-ods-applications project installs the dashboard and other required components of OpenShift AI. The redhat-ods-monitoring project contains services for monitoring and billing. The rhods-notebooks project is where notebook environments are deployed by default. You or your data scientists must create additional projects for the applications that will use your machine learning models. Do not install independent software vendor (ISV) applications in namespaces associated with OpenShift AI add-ons unless you are specifically directed to do so on the application tile on the dashboard.
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_and_uninstalling_openshift_ai_cloud_service/architecture-of-openshift-ai_install
Chapter 27. Analyzing system performance with BPF Compiler Collection
Chapter 27. Analyzing system performance with BPF Compiler Collection As a system administrator, you can use the BPF Compiler Collection (BCC) library to create tools for analyzing the performance of your Linux operating system and gathering information, which could be difficult to obtain through other interfaces. 27.1. Installing the bcc-tools package Install the bcc-tools package, which also installs the BPF Compiler Collection (BCC) library as a dependency. Procedure Install bcc-tools . The BCC tools are installed in the /usr/share/bcc/tools/ directory. Verification Inspect the installed tools: The doc directory in the listing provides documentation for each tool. 27.2. Using selected bcc-tools for performance analyses Use certain pre-created programs from the BPF Compiler Collection (BCC) library to efficiently and securely analyze the system performance on the per-event basis. The set of pre-created programs in the BCC library can serve as examples for creation of additional programs. Prerequisites Installed bcc-tools package Root permissions Procedure Using execsnoop to examine the system processes Run the execsnoop program in one terminal: To create a short-lived process of the ls command, in another terminal, enter: The terminal running execsnoop shows the output similar to the following: The execsnoop program prints a line of output for each new process that consume system resources. It even detects processes of programs that run very shortly, such as ls , and most monitoring tools would not register them. The execsnoop output displays the following fields: PCOMM The parent process name. ( ls ) PID The process ID. ( 8382 ) PPID The parent process ID. ( 8287 ) RET The return value of the exec() system call ( 0 ), which loads program code into new processes. ARGS The location of the started program with arguments. To see more details, examples, and options for execsnoop , see /usr/share/bcc/tools/doc/execsnoop_example.txt file. 
For more information about exec() , see exec(3) manual pages. Using opensnoop to track what files a command opens In one terminal, run the opensnoop program to print the output for files opened only by the process of the uname command: In another terminal, enter the command to open certain files: The terminal running opensnoop shows the output similar to the following: The opensnoop program watches the open() system call across the whole system, and prints a line of output for each file that uname tried to open along the way. The opensnoop output displays the following fields: PID The process ID. ( 8596 ) COMM The process name. ( uname ) FD The file descriptor - a value that open() returns to refer to the open file. ( 3 ) ERR Any errors. PATH The location of files that open() tried to open. If a command tries to read a non-existent file, then the FD column returns -1 and the ERR column prints a value corresponding to the relevant error. As a result, opensnoop can help you identify an application that does not behave properly. To see more details, examples, and options for opensnoop , see the /usr/share/bcc/tools/doc/opensnoop_example.txt file. For more information about open() , see open(2) manual pages. Using biotop to monitor the top processes performing I/O operations on the disk Run the biotop program in one terminal with the argument 30 to produce a 30-second summary: Note When no argument is provided, the output screen refreshes every 1 second by default. In another terminal, enter the command to read the content from the local hard disk device and write the output to the /dev/zero file: This step generates certain I/O traffic to illustrate biotop . The terminal running biotop shows the output similar to the following: The biotop output displays the following fields: PID The process ID. ( 9568 ) COMM The process name. ( dd ) DISK The disk performing the read operations. ( vda ) I/O The number of read operations performed.
(16294) Kbytes The amount of Kbytes reached by the read operations. (14,440,636) AVGms The average I/O time of read operations. (3.69) For more details, examples, and options for biotop , see the /usr/share/bcc/tools/doc/biotop_example.txt file. For more information about dd , see dd(1) manual pages. Using xfsslower to expose unexpectedly slow file system operations The xfsslower program measures the time spent by the XFS file system in performing read, write, open, or sync ( fsync ) operations. The 1 argument ensures that the program shows only the operations that are slower than 1 ms. Run the xfsslower program in one terminal: Note When no arguments are provided, xfsslower by default displays operations slower than 10 ms. In another terminal, enter the command to create a text file in the vim editor to start interaction with the XFS file system: The terminal running xfsslower shows output similar to the following upon saving the file from the previous step: Each line represents a file system operation that took more time than the specified threshold. xfsslower detects possible file system problems, which can take the form of unexpectedly slow operations. The xfsslower output displays the following fields: COMM The process name. ( b'bash' ) T The operation type. ( R ) R = read, W = write, S = sync OFF_KB The file offset in KB. (0) FILENAME The file that is read, written, or synced. To see more details, examples, and options for xfsslower , see the /usr/share/bcc/tools/doc/xfsslower_example.txt file. For more information about fsync , see fsync(2) manual pages.
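At its core, xfsslower applies a latency threshold to traced operations and prints only the ones that exceed it. A toy Python sketch of that filtering step (an illustration of the idea only; the real tool traces the kernel with eBPF, and these event dictionaries are invented for the example):

```python
def slower_than(events, threshold_ms):
    # Keep only file system operations whose latency exceeds the threshold,
    # mirroring 'xfsslower 1' (1 ms) or the tool's default 10 ms cutoff.
    return [e for e in events if e["lat_ms"] > threshold_ms]

events = [
    {"comm": "bash", "op": "R", "lat_ms": 7.11, "file": "vim"},
    {"comm": "vim", "op": "S", "lat_ms": 6.71, "file": "text"},
    {"comm": "vim", "op": "R", "lat_ms": 0.4, "file": "vimrc"},
]
# With a 1 ms threshold, the fast vimrc read is filtered out,
# just as sub-threshold operations never appear in xfsslower output.
for e in slower_than(events, 1.0):
    print(e["comm"], e["op"], e["lat_ms"], e["file"])
```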
[ "dnf install bcc-tools", "ls -l /usr/share/bcc/tools/ -rwxr-xr-x. 1 root root 4198 Dec 14 17:53 dcsnoop -rwxr-xr-x. 1 root root 3931 Dec 14 17:53 dcstat -rwxr-xr-x. 1 root root 20040 Dec 14 17:53 deadlock_detector -rw-r--r--. 1 root root 7105 Dec 14 17:53 deadlock_detector.c drwxr-xr-x. 3 root root 8192 Mar 11 10:28 doc -rwxr-xr-x. 1 root root 7588 Dec 14 17:53 execsnoop -rwxr-xr-x. 1 root root 6373 Dec 14 17:53 ext4dist -rwxr-xr-x. 1 root root 10401 Dec 14 17:53 ext4slower", "/usr/share/bcc/tools/execsnoop", "ls /usr/share/bcc/tools/doc/", "PCOMM PID PPID RET ARGS ls 8382 8287 0 /usr/bin/ls --color=auto /usr/share/bcc/tools/doc/", "/usr/share/bcc/tools/opensnoop -n uname", "uname", "PID COMM FD ERR PATH 8596 uname 3 0 /etc/ld.so.cache 8596 uname 3 0 /lib64/libc.so.6 8596 uname 3 0 /usr/lib/locale/locale-archive", "/usr/share/bcc/tools/biotop 30", "dd if=/dev/vda of=/dev/zero", "PID COMM D MAJ MIN DISK I/O Kbytes AVGms 9568 dd R 252 0 vda 16294 14440636.0 3.69 48 kswapd0 W 252 0 vda 1763 120696.0 1.65 7571 gnome-shell R 252 0 vda 834 83612.0 0.33 1891 gnome-shell R 252 0 vda 1379 19792.0 0.15 7515 Xorg R 252 0 vda 280 9940.0 0.28 7579 llvmpipe-1 R 252 0 vda 228 6928.0 0.19 9515 gnome-control-c R 252 0 vda 62 6444.0 0.43 8112 gnome-terminal- R 252 0 vda 67 2572.0 1.54 7807 gnome-software R 252 0 vda 31 2336.0 0.73 9578 awk R 252 0 vda 17 2228.0 0.66 7578 llvmpipe-0 R 252 0 vda 156 2204.0 0.07 9581 pgrep R 252 0 vda 58 1748.0 0.42 7531 InputThread R 252 0 vda 30 1200.0 0.48 7504 gdbus R 252 0 vda 3 1164.0 0.30 1983 llvmpipe-1 R 252 0 vda 39 724.0 0.08 1982 llvmpipe-0 R 252 0 vda 36 652.0 0.06", "/usr/share/bcc/tools/xfsslower 1", "vim text", "TIME COMM PID T BYTES OFF_KB LAT(ms) FILENAME 13:07:14 b'bash' 4754 R 256 0 7.11 b'vim' 13:07:14 b'vim' 4754 R 832 0 4.03 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 32 20 1.04 b'libgpm.so.2.1.0' 13:07:14 b'vim' 4754 R 1982 0 2.30 b'vimrc' 13:07:14 b'vim' 4754 R 1393 0 2.52 b'getscriptPlugin.vim' 13:07:45 b'vim' 4754 S 0 0 6.71 
b'text' 13:07:45 b'pool' 2588 R 16 0 5.58 b'text'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/analyzing-system-performance-with-bpf-compiler_collection_managing-monitoring-and-updating-the-kernel
4.5.5. Quorum Disk Configuration
4.5.5. Quorum Disk Configuration Clicking on the QDisk tab displays the Quorum Disk Configuration page, which provides an interface for configuring quorum disk parameters if you need to use a quorum disk. Note Quorum disk parameters and heuristics depend on the site environment and the special requirements needed. To understand the use of quorum disk parameters and heuristics, see the qdisk (5) man page. If you require assistance understanding and using quorum disk, contact an authorized Red Hat support representative. The Do Not Use a Quorum Disk parameter is enabled by default. If you need to use a quorum disk, click Use a Quorum Disk , enter the quorum disk parameters, and click Apply . You must restart the cluster for the changes to take effect. Table 4.1, "Quorum-Disk Parameters" describes the quorum disk parameters. Table 4.1. Quorum-Disk Parameters Parameter Description Specify Physical Device: By Device Label Specifies the quorum disk label created by the mkqdisk utility. If this field is used, the quorum daemon reads the /proc/partitions file and checks for qdisk signatures on every block device found, comparing the label against the specified label. This is useful in configurations where the quorum device name differs among nodes. Heuristics Path to Program - The program used to determine if this heuristic is available. This can be anything that can be executed by /bin/sh -c . A return value of 0 indicates success; anything else indicates failure. This field is required. Interval - The frequency (in seconds) at which the heuristic is polled. The default interval for every heuristic is 2 seconds. Score - The weight of this heuristic. The default score for each heuristic is 1. TKO - The number of consecutive failures required before this heuristic is declared unavailable. Minimum Total Score The minimum score for a node to be considered "alive". 
If omitted or set to 0, the default function, floor((n+1)/2), is used, where n is the sum of the heuristic scores. The Minimum Total Score value must never exceed the sum of the heuristic scores; otherwise, the quorum disk cannot be available. Note Clicking Apply on the QDisk Configuration tab propagates changes to the cluster configuration file ( /etc/cluster/cluster.conf ) in each cluster node. However, for the quorum disk to operate or for any modifications you have made to the quorum disk parameters to take effect, you must restart the cluster (see Section 5.4, "Starting, Stopping, Restarting, and Deleting Clusters" ), ensuring that you have restarted the qdiskd daemon on each node.
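The default minimum total score formula is easy to sanity-check; a short Python sketch (illustration only, not cluster code):

```python
import math

def default_min_score(heuristic_scores):
    # Default minimum total score: floor((n + 1) / 2), where n is the
    # sum of the configured heuristic scores.
    n = sum(heuristic_scores)
    return math.floor((n + 1) / 2)

# Three heuristics with the default score of 1 each: n = 3, minimum = 2,
# so a node needs at least two passing heuristics to be considered "alive".
print(default_min_score([1, 1, 1]))  # 2
```

This also shows why the configured Minimum Total Score must not exceed the sum of the heuristic scores: a node can never accumulate more than that sum even when every heuristic passes.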
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-qdisk-conga-CA
Appendix E. Securing Red Hat Virtualization
Appendix E. Securing Red Hat Virtualization This information is specific to Red Hat Virtualization. It does not cover fundamental security practices related to any of the following: Disabling unnecessary services Authentication Authorization Accounting Penetration testing and hardening of non-RHV services Encryption of sensitive application data Prerequisites You should be proficient in your organization's security standards and practices. If possible, consult with your organization's Security Officer. Consult the Red Hat Enterprise Linux Security hardening before deploying RHEL hosts. E.1. Applying the DISA STIG profile in RHEL based hosts and the standalone Manager When installing RHV, you can select the DISA STIG profile with the UI installer, which is the profile provided by RHEL 8. Important The DISA STIG profile is not supported for Red Hat Virtualization Host (RHVH). Procedure In the Installation Summary screen, select Security Policy . In the Security Policy screen, set the Apply security policy to On . Select DISA STIG for Red Hat Enterprise Linux 8 . Click Select profile . This action adds a green checkmark to the profile and adds packages to the list of Changes that were done or need to be done . Follow the onscreen instructions if they direct you to make any changes. Click Done . On the Installation Summary screen, verify that the status of Security Policy is Everything okay . Reboot the host. E.1.1. Enabling DISA STIG in a self-hosted engine You can enable DISA STIG in a self-hosted engine during deployment when using the command-line. Procedure Start the self-hosted engine deployment script. See Installing Red Hat Virtualization as a self-hosted engine using the command line . When the deployment script prompts Do you want to apply an OpenSCAP security profile? , enter Yes . When the deployment script prompts Please provide the security profile you would like to use? , enter stig . E.2. 
Applying the PCI-DSS profile in RHV hosts and the standalone Manager When installing RHVH, you can select the PCI-DSS profile with the UI installer, which is the profile provided by RHEL 8. Procedure In the Installation Summary screen, select Security Policy . In the Security Policy screen, set the Apply security policy to On . Select PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 8 . Click Select profile . This action adds a green checkmark to the profile and adds packages to the list of Changes that were done or need to be done . Follow the onscreen instructions if they direct you to make any changes. Click Done . In the Installation Summary screen, verify that the status of Security Policy is Everything okay . Reboot the host. E.2.1. Enabling PCI-DSS in a self-hosted engine You can enable PCI-DSS in a self-hosted engine during deployment when using the command-line. Procedure Start the self-hosted engine deployment script. See Installing Red Hat Virtualization as a self-hosted engine using the command line . When the deployment script prompts Do you want to apply an OpenSCAP security profile? , enter Yes . When the deployment script prompts Please provide the security profile you would like to use? , enter pci-dss .
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/security
Part V. Known Issues
Part V. Known Issues This part documents known problems in Red Hat Enterprise Linux 7.6.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known-issues
Chapter 4. API index
Chapter 4. API index API API group AdminNetworkPolicy policy.networking.k8s.io/v1alpha1 AdminPolicyBasedExternalRoute k8s.ovn.org/v1 AlertingRule monitoring.openshift.io/v1 Alertmanager monitoring.coreos.com/v1 AlertmanagerConfig monitoring.coreos.com/v1beta1 AlertRelabelConfig monitoring.openshift.io/v1 APIRequestCount apiserver.openshift.io/v1 APIServer config.openshift.io/v1 APIService apiregistration.k8s.io/v1 AppliedClusterResourceQuota quota.openshift.io/v1 Authentication config.openshift.io/v1 Authentication operator.openshift.io/v1 BareMetalHost metal3.io/v1alpha1 BaselineAdminNetworkPolicy policy.networking.k8s.io/v1alpha1 Binding v1 BMCEventSubscription metal3.io/v1alpha1 BrokerTemplateInstance template.openshift.io/v1 Build build.openshift.io/v1 Build config.openshift.io/v1 BuildConfig build.openshift.io/v1 BuildLog build.openshift.io/v1 BuildRequest build.openshift.io/v1 CatalogSource operators.coreos.com/v1alpha1 CertificateSigningRequest certificates.k8s.io/v1 CloudCredential operator.openshift.io/v1 CloudPrivateIPConfig cloud.network.openshift.io/v1 ClusterAutoscaler autoscaling.openshift.io/v1 ClusterCSIDriver operator.openshift.io/v1 ClusterOperator config.openshift.io/v1 ClusterResourceQuota quota.openshift.io/v1 ClusterRole authorization.openshift.io/v1 ClusterRole rbac.authorization.k8s.io/v1 ClusterRoleBinding authorization.openshift.io/v1 ClusterRoleBinding rbac.authorization.k8s.io/v1 ClusterServiceVersion operators.coreos.com/v1alpha1 ClusterVersion config.openshift.io/v1 ComponentStatus v1 Config imageregistry.operator.openshift.io/v1 Config operator.openshift.io/v1 Config samples.operator.openshift.io/v1 ConfigMap v1 Console config.openshift.io/v1 Console operator.openshift.io/v1 ConsoleCLIDownload console.openshift.io/v1 ConsoleExternalLogLink console.openshift.io/v1 ConsoleLink console.openshift.io/v1 ConsoleNotification console.openshift.io/v1 ConsolePlugin console.openshift.io/v1 ConsoleQuickStart console.openshift.io/v1 ConsoleSample 
console.openshift.io/v1 ConsoleYAMLSample console.openshift.io/v1 ContainerRuntimeConfig machineconfiguration.openshift.io/v1 ControllerConfig machineconfiguration.openshift.io/v1 ControllerRevision apps/v1 ControlPlaneMachineSet machine.openshift.io/v1 CredentialsRequest cloudcredential.openshift.io/v1 CronJob batch/v1 CSIDriver storage.k8s.io/v1 CSINode storage.k8s.io/v1 CSISnapshotController operator.openshift.io/v1 CSIStorageCapacity storage.k8s.io/v1 CustomResourceDefinition apiextensions.k8s.io/v1 DaemonSet apps/v1 DataImage metal3.io/v1alpha1 Deployment apps/v1 DeploymentConfig apps.openshift.io/v1 DeploymentConfigRollback apps.openshift.io/v1 DeploymentLog apps.openshift.io/v1 DeploymentRequest apps.openshift.io/v1 DNS config.openshift.io/v1 DNS operator.openshift.io/v1 DNSRecord ingress.operator.openshift.io/v1 EgressFirewall k8s.ovn.org/v1 EgressIP k8s.ovn.org/v1 EgressQoS k8s.ovn.org/v1 EgressRouter network.operator.openshift.io/v1 EgressService k8s.ovn.org/v1 Endpoints v1 EndpointSlice discovery.k8s.io/v1 Etcd operator.openshift.io/v1 Event v1 Event events.k8s.io/v1 Eviction policy/v1 FeatureGate config.openshift.io/v1 FirmwareSchema metal3.io/v1alpha1 FlowSchema flowcontrol.apiserver.k8s.io/v1 Group user.openshift.io/v1 HardwareData metal3.io/v1alpha1 HelmChartRepository helm.openshift.io/v1beta1 HorizontalPodAutoscaler autoscaling/v2 HostFirmwareComponents metal3.io/v1alpha1 HostFirmwareSettings metal3.io/v1alpha1 Identity user.openshift.io/v1 Image config.openshift.io/v1 Image image.openshift.io/v1 ImageContentPolicy config.openshift.io/v1 ImageContentSourcePolicy operator.openshift.io/v1alpha1 ImageDigestMirrorSet config.openshift.io/v1 ImagePruner imageregistry.operator.openshift.io/v1 ImageSignature image.openshift.io/v1 ImageStream image.openshift.io/v1 ImageStreamImage image.openshift.io/v1 ImageStreamImport image.openshift.io/v1 ImageStreamLayers image.openshift.io/v1 ImageStreamMapping image.openshift.io/v1 ImageStreamTag image.openshift.io/v1 
ImageTag image.openshift.io/v1 ImageTagMirrorSet config.openshift.io/v1 Infrastructure config.openshift.io/v1 Ingress config.openshift.io/v1 Ingress networking.k8s.io/v1 IngressClass networking.k8s.io/v1 IngressController operator.openshift.io/v1 InsightsOperator operator.openshift.io/v1 InstallPlan operators.coreos.com/v1alpha1 IPAddress ipam.cluster.x-k8s.io/v1beta1 IPAddressClaim ipam.cluster.x-k8s.io/v1beta1 IPPool whereabouts.cni.cncf.io/v1alpha1 Job batch/v1 KubeAPIServer operator.openshift.io/v1 KubeControllerManager operator.openshift.io/v1 KubeletConfig machineconfiguration.openshift.io/v1 KubeScheduler operator.openshift.io/v1 KubeStorageVersionMigrator operator.openshift.io/v1 Lease coordination.k8s.io/v1 LimitRange v1 LocalResourceAccessReview authorization.openshift.io/v1 LocalSubjectAccessReview authorization.k8s.io/v1 LocalSubjectAccessReview authorization.openshift.io/v1 Machine machine.openshift.io/v1beta1 MachineAutoscaler autoscaling.openshift.io/v1beta1 MachineConfig machineconfiguration.openshift.io/v1 MachineConfigPool machineconfiguration.openshift.io/v1 MachineConfiguration operator.openshift.io/v1 MachineHealthCheck machine.openshift.io/v1beta1 MachineSet machine.openshift.io/v1beta1 Metal3Remediation infrastructure.cluster.x-k8s.io/v1beta1 Metal3RemediationTemplate infrastructure.cluster.x-k8s.io/v1beta1 MultiNetworkPolicy k8s.cni.cncf.io/v1beta1 MutatingWebhookConfiguration admissionregistration.k8s.io/v1 Namespace v1 Network config.openshift.io/v1 Network operator.openshift.io/v1 NetworkAttachmentDefinition k8s.cni.cncf.io/v1 NetworkPolicy networking.k8s.io/v1 Node v1 Node config.openshift.io/v1 NodeMetrics metrics.k8s.io/v1beta1 OAuth config.openshift.io/v1 OAuthAccessToken oauth.openshift.io/v1 OAuthAuthorizeToken oauth.openshift.io/v1 OAuthClient oauth.openshift.io/v1 OAuthClientAuthorization oauth.openshift.io/v1 OLMConfig operators.coreos.com/v1 OpenShiftAPIServer operator.openshift.io/v1 OpenShiftControllerManager 
operator.openshift.io/v1 Operator operators.coreos.com/v1 OperatorCondition operators.coreos.com/v2 OperatorGroup operators.coreos.com/v1 OperatorHub config.openshift.io/v1 OperatorPKI network.operator.openshift.io/v1 OverlappingRangeIPReservation whereabouts.cni.cncf.io/v1alpha1 PackageManifest packages.operators.coreos.com/v1 PerformanceProfile performance.openshift.io/v2 PersistentVolume v1 PersistentVolumeClaim v1 Pod v1 PodDisruptionBudget policy/v1 PodMetrics metrics.k8s.io/v1beta1 PodMonitor monitoring.coreos.com/v1 PodNetworkConnectivityCheck controlplane.operator.openshift.io/v1alpha1 PodSecurityPolicyReview security.openshift.io/v1 PodSecurityPolicySelfSubjectReview security.openshift.io/v1 PodSecurityPolicySubjectReview security.openshift.io/v1 PodTemplate v1 PreprovisioningImage metal3.io/v1alpha1 PriorityClass scheduling.k8s.io/v1 PriorityLevelConfiguration flowcontrol.apiserver.k8s.io/v1 Probe monitoring.coreos.com/v1 Profile tuned.openshift.io/v1 Project config.openshift.io/v1 Project project.openshift.io/v1 ProjectHelmChartRepository helm.openshift.io/v1beta1 ProjectRequest project.openshift.io/v1 Prometheus monitoring.coreos.com/v1 PrometheusRule monitoring.coreos.com/v1 Provisioning metal3.io/v1alpha1 Proxy config.openshift.io/v1 RangeAllocation security.openshift.io/v1 ReplicaSet apps/v1 ReplicationController v1 ResourceAccessReview authorization.openshift.io/v1 ResourceQuota v1 Role authorization.openshift.io/v1 Role rbac.authorization.k8s.io/v1 RoleBinding authorization.openshift.io/v1 RoleBinding rbac.authorization.k8s.io/v1 RoleBindingRestriction authorization.openshift.io/v1 Route route.openshift.io/v1 RuntimeClass node.k8s.io/v1 Scale autoscaling/v1 Scheduler config.openshift.io/v1 Secret v1 SecretList image.openshift.io/v1 SecurityContextConstraints security.openshift.io/v1 SelfSubjectAccessReview authorization.k8s.io/v1 SelfSubjectReview authentication.k8s.io/v1 SelfSubjectRulesReview authorization.k8s.io/v1 SelfSubjectRulesReview 
authorization.openshift.io/v1 Service v1 ServiceAccount v1 ServiceCA operator.openshift.io/v1 ServiceMonitor monitoring.coreos.com/v1 StatefulSet apps/v1 Storage operator.openshift.io/v1 StorageClass storage.k8s.io/v1 StorageState migration.k8s.io/v1alpha1 StorageVersionMigration migration.k8s.io/v1alpha1 SubjectAccessReview authorization.k8s.io/v1 SubjectAccessReview authorization.openshift.io/v1 SubjectRulesReview authorization.openshift.io/v1 Subscription operators.coreos.com/v1alpha1 Template template.openshift.io/v1 TemplateInstance template.openshift.io/v1 ThanosRuler monitoring.coreos.com/v1 TokenRequest authentication.k8s.io/v1 TokenReview authentication.k8s.io/v1 Tuned tuned.openshift.io/v1 User user.openshift.io/v1 UserIdentityMapping user.openshift.io/v1 UserOAuthAccessToken oauth.openshift.io/v1 ValidatingAdmissionPolicy admissionregistration.k8s.io/v1 ValidatingAdmissionPolicyBinding admissionregistration.k8s.io/v1 ValidatingWebhookConfiguration admissionregistration.k8s.io/v1 VolumeAttachment storage.k8s.io/v1 VolumeSnapshot snapshot.storage.k8s.io/v1 VolumeSnapshotClass snapshot.storage.k8s.io/v1 VolumeSnapshotContent snapshot.storage.k8s.io/v1
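On a running cluster, the kind-to-group mapping in this index can be queried directly with the oc client. A minimal sketch, assuming you are logged in to a cluster of the matching version:

```shell
# List every available API kind with its group, version, and scope
oc api-resources

# Restrict the listing to one group from the index, e.g. monitoring.coreos.com
oc api-resources --api-group=monitoring.coreos.com

# Print the schema documentation for a specific kind/group pair
oc explain prometheus --api-version=monitoring.coreos.com/v1
```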
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/api_overview/api-index
Chapter 2. Host Security
Chapter 2. Host Security 2.1. Why Host Security Matters When deploying virtualization technologies, you must ensure that the host physical machine and its operating system cannot be compromised. In this case, the host is a Red Hat Enterprise Linux system that manages the system, devices, memory, and networks, as well as all guest virtual machines. If the host physical machine is insecure, all guest virtual machines in the system are vulnerable. There are several ways to enhance security on systems using virtualization. You or your organization should create a Deployment Plan . This plan needs to contain the following: Operating specifications Specifies which services are needed on your guest virtual machines Specifies the host physical servers as well as what support is required for these services Here are a few security issues to consider while developing a deployment plan: Run only necessary services on host physical machines. The fewer processes and services running on the host physical machine, the higher the level of security and performance. Enable SELinux on the hypervisor. Read Section 2.1.2, "SELinux and Virtualization" for more information on using SELinux and virtualization. Use a firewall to restrict traffic to the host physical machine. You can set up a firewall with default-reject rules that will help secure the host physical machine from attacks. It is also important to limit network-facing services. Do not allow normal users to access the host operating system. Because the host operating system is privileged, granting access to unprivileged accounts may compromise the level of security. 2.1.1. Security Concerns when Adding Block Devices to a Guest When using host block devices, partitions, and logical volumes (LVMs), it is important to follow these guidelines: The host physical machine should not use filesystem labels to identify file systems in the fstab file, the initrd file, or on the kernel command line.
Doing so presents a security risk if guest virtual machines have write access to whole partitions or LVM volumes, because a guest virtual machine could potentially write a filesystem label belonging to the host physical machine to its own block device storage. Upon reboot of the host physical machine, the host physical machine could then mistakenly use the guest virtual machine's disk as a system disk, which would compromise the host physical machine system. It is preferable to use the UUID of a device to identify it in the fstab file, the initrd file, or on the kernel command line. While using UUIDs is still not completely secure on certain file systems, a similar compromise with UUID is significantly less feasible. Guest virtual machines should not be given write access to whole disks or block devices (for example, /dev/sdb ). Guest virtual machines with access to whole block devices may be able to modify volume labels, which can be used to compromise the host physical machine system. Use partitions (for example, /dev/sdb1 ) or LVM volumes to prevent this problem. If you are using raw access to partitions (for example, /dev/sdb1 ) or raw disks (such as /dev/sdb ), you should configure LVM to only scan disks that are safe, using the global_filter setting. Note When the guest virtual machine only has access to image files, these issues are not relevant. 2.1.2. SELinux and Virtualization Security-Enhanced Linux (SELinux) was developed by the NSA with assistance from the Linux community to provide stronger security for Linux. SELinux limits an attacker's abilities and works to prevent many common security exploits such as buffer overflow attacks and privilege escalation. It is because of these benefits that all Red Hat Enterprise Linux systems should run with SELinux enabled and in enforcing mode. Procedure 2.1. Creating and mounting a logical volume on a guest virtual machine with SELinux enabled Create a logical volume.
This example creates a 5 gigabyte logical volume named NewVolumeName on the volume group named volumegroup . This example also assumes that there is enough disk space. You may have to create additional storage on a network device and give the guest access to it. This information is discussed in more detail in the Red Hat Enterprise Linux Virtualization Administration Guide . Format the NewVolumeName logical volume with a file system that supports extended attributes, such as ext3. Create a new directory for mounting the new logical volume. This directory can be anywhere on your file system. It is advised not to put it in important system directories ( /etc , /var , /sys ) or in home directories ( /home or /root ). This example uses a directory called /virtstorage . Mount the logical volume. Set the SELinux type for the folder you just created. If the targeted policy is used (targeted is the default policy), the command appends a line to the /etc/selinux/targeted/contexts/files/file_contexts.local file, which makes the change persistent. The appended line may resemble this: Run the restorecon command to change the type of the mount point ( /virtstorage ) and all files under it to virt_image_t (the restorecon and setfiles commands read the files in /etc/selinux/targeted/contexts/files/ ). Note Create a new file (using the touch command) on the file system. Verify that the file has been relabeled by using the following command: The output shows that the new file has the correct attribute, virt_image_t . 2.1.3. SELinux This section contains topics to consider when using SELinux with your virtualization deployment. When you deploy system changes or add devices, you must update your SELinux policy accordingly. To configure an LVM volume for a guest virtual machine, you must modify the SELinux context for the respective underlying block device and volume group. Make sure that you have installed the policycoreutils-python package ( yum install policycoreutils-python ) before running the command.
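The command referred to above appears in the command listing at the end of this chapter; collected into one sequence, it looks as follows. The device /dev/sda2 is the example's placeholder for the block device backing your LVM volume:

```shell
# Install the semanage utility if it is not already present
yum install policycoreutils-python

# Persistently label the underlying block device for guest image use
semanage fcontext -a -t virt_image_t -f -b /dev/sda2

# Apply the new label to the device
restorecon /dev/sda2
```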
KVM and SELinux The following table shows the SELinux Booleans which affect KVM when launched by libvirt. KVM SELinux Booleans SELinux Boolean Description virt_use_comm Allow virt to use serial/parallel communication ports. virt_use_fusefs Allow virt to read fuse files. virt_use_nfs Allow virt to manage NFS files. virt_use_samba Allow virt to manage CIFS files. virt_use_sanlock Allow sanlock to manage virt lib files. virt_use_sysfs Allow virt to manage device configuration (PCI). virt_use_xserver Allow virtual machine to interact with the xserver. virt_use_usb Allow virt to use USB devices. 2.1.4. Virtualization Firewall Information Various ports are used for communication between guest virtual machines and corresponding management utilities. Note Any network service on a guest virtual machine must have the applicable ports open on the guest virtual machine to allow external access. If a network service on a guest virtual machine is firewalled, it will be inaccessible. Always verify the guest virtual machine's network configuration first. ICMP requests must be accepted. ICMP packets are used for network testing. You cannot ping guest virtual machines if the ICMP packets are blocked. Port 22 should be open for SSH access and the initial installation. Ports 80 or 443 (depending on the security settings on the RHEV Manager) are used by the vdsm-reg service to communicate information about the host physical machine. Ports 5634 to 6166 are used for guest virtual machine console access with the SPICE protocol. Ports 49152 to 49216 are used for migrations with KVM. Migration may use any port in this range depending on the number of concurrent migrations occurring. Enabling IP forwarding ( net.ipv4.ip_forward = 1 ) is also required for shared bridges and the default bridge. Note that installing libvirt enables this variable, so it will be enabled when the virtualization packages are installed unless it was manually disabled.
Note Note that enabling IP forwarding is not required for physical bridge devices. When a guest virtual machine is connected through a physical bridge, traffic only operates at a level that does not require IP configuration such as IP forwarding.
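On a Red Hat Enterprise Linux 6 host, the port list above maps to iptables rules along these lines. This is a sketch rather than a complete policy; fold the rules into your existing default-reject rule set:

```shell
# ICMP for network testing, and SSH for access and initial installation
iptables -A INPUT -p icmp -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# vdsm-reg communication with the RHEV Manager (80 or 443, per its settings)
iptables -A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT

# SPICE console access to guest virtual machines
iptables -A INPUT -p tcp --dport 5634:6166 -j ACCEPT

# Port range used for KVM migrations
iptables -A INPUT -p tcp --dport 49152:49216 -j ACCEPT

# Persist the rules, then confirm IP forwarding for bridged networking
service iptables save
sysctl net.ipv4.ip_forward
```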
[ "lvcreate -n NewVolumeName -L 5G volumegroup", "mke2fs -j /dev/volumegroup/NewVolumeName", "mkdir /virtstorage", "mount /dev/volumegroup/NewVolumeName /virtstorage", "semanage fcontext -a -t virt_image_t \"/virtstorage(/.*)?\"", "/virtstorage(/.*)? system_u:object_r:virt_image_t:s0", "restorecon -R -v /virtstorage", "touch /virtstorage/newfile", "sudo ls -Z /virtstorage -rw-------. root root system_u:object_r:virt_image_t:s0 newfile", "semanage fcontext -a -t virt_image_t -f -b /dev/sda2 restorecon /dev/sda2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_security_guide/chap-virtualization_security_guide-host_security
Appendix B. Revision history
Appendix B. Revision history 0.0-9 Tue March 18 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHEL-82566 (Installer) 0.0-8 Tue March 11 2025, Gabriela Fialova ( [email protected] ) Updated a Deprecated functionality in RHEL-30730 (Filesystems and storage) 0.0-7 Thu March 6 2025, Gabriela Fialova ( [email protected] ) Updated a Technology Preview in RHELPLAN-145900 (IdM) 0.0-6 Thu February 27 2025, Marc Muehlfeld ( [email protected] ) Added a Technology Preview in RHELDOCS-19773 (Networking) Added a Deprecated Functionality in RHELDOCS-19774 (Networking) Removed an enhancement about composefs, as it remains a Technology Preview (Containers) 0.0-5 Mon February 24 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue in RHELDOCS-19626 (Security) Updated a Feature in RHELDOCS-18125 (RHEL in cloud environments) 0.0-4 Wed February 19 2025, Gabriela Fialova ( [email protected] ) Added a New Feature in RHELDOCS-18391 (Infrastructure services) 0.0-6 Thu Feb 06 2025, Gabriela Fialova ( [email protected] ) Added an Enhancement RHELDOCS-18451 (Filesystems) 0.0-5 Thu Jan 30 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue RHELDOCS-19603 (IdM SSSD) 0.0-4 Wed Jan 22 2025, Gabriela Fialova ( [email protected] ) Updated links in a Technology Preview RHELDOCS-19061 (IdM DS) Added a Known Issue RHELDOCS-18863 (Virtualization) Updated an Enhancement RHEL-45620 (Security) Corrected typos throughout the document.
0.0-3 Mon Jan 20 2025, Gabriela Fialova ( [email protected] ) Added a Known Issue RHEL-13837 (Installer) 0.0-2 Thu January 16 2025, Marc Muehlfeld ( [email protected] ), Gabriela Fialova ( [email protected] ) Added a Bug Fix RHEL-73167 (Networking) Added a Removed Functionality RHELDOCS-19141 (Desktop) Added a Removed Functionality RHELDOCS-19156 (Desktop) 0.0-1 Thu January 9 2025, Gabriela Fialova ( [email protected] ) Updated an Enhancement RHEL-7768 (Filesystems and storage) 0.0-0 Wed November 13 2024, Gabriela Fialova ( [email protected] ) Release of the Red Hat Enterprise Linux 9.5 Release Notes.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.5_release_notes/revision_history
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_openshift_data_foundation_using_ibm_power/making-open-source-more-inclusive
2.4. Security
2.4. Security KVM virtual machines use the following features to improve their security: SELinux Security-Enhanced Linux, or SELinux, provides Mandatory Access Control (MAC) for all Linux systems, and thus also benefits Linux guests. Under the control of SELinux, all processes and files are given a type , and their access on the system is limited by fine-grained controls of various types. SELinux limits the abilities of an attacker and works to prevent many common security exploits such as buffer overflow attacks and privilege escalation. sVirt sVirt is a technology included in Red Hat Enterprise Linux 7 that integrates SELinux and virtualization. It applies Mandatory Access Control (MAC) to improve security when using virtual machines, and hardens the system against hypervisor bugs that might be used to attack the host or another virtual machine. Note For more information on security in virtualization, see the Red Hat Enterprise Linux 7 Virtualization Security Guide .
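sVirt's isolation is visible on a running RHEL 7 host. A sketch, assuming a libvirt-managed KVM guest is active; the MCS category pair in the labels is assigned randomly per guest, so your output will differ:

```shell
# Each qemu-kvm process runs under the svirt_t type with a unique
# MCS category pair, e.g. system_u:system_r:svirt_t:s0:c123,c456
ps -eZ | grep qemu-kvm

# The guest's disk image carries the matching svirt_image_t label,
# so one guest's process cannot open another guest's image
ls -Z /var/lib/libvirt/images
```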
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/sec-virtualization_getting_started-advantages-security
Chapter 60. project
Chapter 60. project This chapter describes the commands under the project command. 60.1. project create Create new project Usage: Table 60.1. Positional Arguments Value Summary <project-name> New project name Table 60.2. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning the project (name or id) --parent <project> Parent of the project (name or id) --description <description> Project description --enable Enable project --disable Disable project --property <key=value> Add a property to <name> (repeat option to set multiple properties) --or-show Return existing project --tag <tag> Tag to be added to the project (repeat option to set multiple tags) Table 60.3. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 60.4. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 60.5. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.6. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.2. project delete Delete project(s) Usage: Table 60.7. Positional Arguments Value Summary <project> Project(s) to delete (name or id) Table 60.8. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) 60.3. project list List projects Usage: Table 60.9.
Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Filter projects by <domain> (name or id) --user <user> Filter projects by <user> (name or id) --my-projects List projects for the authenticated user. Supersedes other filters. --long List additional fields in output --sort <key>[:<direction>] Sort output by selected keys and directions (asc or desc) (default: asc), repeat this option to specify multiple keys and directions. --tags <tag>[,<tag>,... ] List projects which have all given tag(s) (comma-separated list of tags) --tags-any <tag>[,<tag>,... ] List projects which have any given tag(s) (comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude projects which have all given tag(s) (comma-separated list of tags) --not-tags-any <tag>[,<tag>,... ] Exclude projects which have any given tag(s) (comma-separated list of tags) Table 60.10. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 60.11. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 60.12. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 60.13. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.4. project purge Clean resources associated with a project Usage: Table 60.14.
Optional Arguments Value Summary -h, --help Show this help message and exit --dry-run List a project's resources --keep-project Clean project resources, but don't delete the project --auth-project Delete resources of the project used to authenticate --project <project> Project to clean (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). This can be used in case collisions between project names exist. 60.5. project set Set project properties Usage: Table 60.15. Positional Arguments Value Summary <project> Project to modify (name or id) Table 60.16. Optional Arguments Value Summary -h, --help Show this help message and exit --name <name> Set project name --domain <domain> Domain owning <project> (name or id) --description <description> Set project description --enable Enable project --disable Disable project --property <key=value> Set a property on <project> (repeat option to set multiple properties) --tag <tag> Tag to be added to the project (repeat option to set multiple tags) --clear-tags Clear tags associated with the project. Specify both --tag and --clear-tags to overwrite current tags --remove-tag <tag> Tag to be deleted from the project (repeat option to delete multiple tags) 60.6. project show Display project details Usage: Table 60.17. Positional Arguments Value Summary <project> Project to display (name or id) Table 60.18. Optional Arguments Value Summary -h, --help Show this help message and exit --domain <domain> Domain owning <project> (name or id) --parents Show the project's parents as a list --children Show project's subtree (children) as a list Table 60.19. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 60.20. JSON Formatter Value Summary --noindent Whether to disable indenting the JSON Table 60.21.
Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.22. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
[ "openstack project create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parent <project>] [--description <description>] [--enable | --disable] [--property <key=value>] [--or-show] [--tag <tag>] <project-name>", "openstack project delete [-h] [--domain <domain>] <project> [<project> ...]", "openstack project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--domain <domain>] [--user <user>] [--my-projects] [--long] [--sort <key>[:<direction>]] [--tags <tag>[,<tag>,...]] [--tags-any <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-tags-any <tag>[,<tag>,...]]", "openstack project purge [-h] [--dry-run] [--keep-project] (--auth-project | --project <project>) [--project-domain <project-domain>]", "openstack project set [-h] [--name <name>] [--domain <domain>] [--description <description>] [--enable | --disable] [--property <key=value>] [--tag <tag>] [--clear-tags] [--remove-tag <tag>] <project>", "openstack project show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--domain <domain>] [--parents] [--children] <project>" ]
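Put together, the subcommands above support a session like the following. The project name demo-project and the domain default are illustrative placeholders:

```shell
# Create an enabled project with a description and a tag
openstack project create --domain default \
  --description "Demo project" --tag demo demo-project

# List projects with additional fields, sorted by name
openstack project list --long --sort name:asc

# Show details, then disable the project
openstack project show demo-project
openstack project set --disable demo-project

# Delete it when no longer needed
openstack project delete demo-project
```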
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/project
Chapter 18. Improving cluster stability in high latency environments using worker latency profiles
Chapter 18. Improving cluster stability in high latency environments using worker latency profiles If the cluster administrator has performed latency tests for platform verification, they can discover the need to adjust the operation of the cluster to ensure stability in cases of high latency. The cluster administrator needs to change only one parameter, recorded in a file, which controls four parameters affecting how supervisory processes read status and interpret the health of the cluster. Changing only the one parameter provides cluster tuning in an easy, supportable manner. The Kubelet process provides the starting point for monitoring cluster health. The Kubelet sets status values for all nodes in the OpenShift Container Platform cluster. The Kubernetes Controller Manager ( kube controller ) reads the status values every 10 seconds, by default. If the kube controller cannot read a node status value, it loses contact with that node after a configured period. The default behavior is: The node controller on the control plane updates the node health to Unhealthy and marks the node's Ready condition Unknown . In response, the scheduler stops scheduling pods to that node. The Node Lifecycle Controller adds a node.kubernetes.io/unreachable taint with a NoExecute effect to the node and schedules any pods on the node for eviction after five minutes, by default. This behavior can cause problems if your network is prone to latency issues, especially if you have nodes at the network edge. In some cases, the Kubernetes Controller Manager might not receive an update from a healthy node due to network latency. The Kubelet evicts pods from the node even though the node is healthy. To avoid this problem, you can use worker latency profiles to adjust how long the Kubelet and the Kubernetes Controller Manager wait for status updates before taking action.
These adjustments help to ensure that your cluster runs properly if network latency between the control plane and the worker nodes is not optimal. These worker latency profiles contain three sets of parameters that are predefined with carefully tuned values to control the reaction of the cluster to increased latency. There is no need to experimentally find the best values manually. You can configure worker latency profiles when installing a cluster or at any time you notice increased latency in your cluster network. 18.1. Understanding worker latency profiles Worker latency profiles are predefined sets of four carefully tuned parameters: node-status-update-frequency , node-monitor-grace-period , default-not-ready-toleration-seconds , and default-unreachable-toleration-seconds . These parameters can use values which allow you to control the reaction of the cluster to latency issues without needing to determine the best values by using manual methods. Important Setting these parameters manually is not supported. Incorrect parameter settings adversely affect cluster stability. All worker latency profiles configure the following parameters: node-status-update-frequency Specifies how often the kubelet posts node status to the API server. node-monitor-grace-period Specifies the amount of time in seconds that the Kubernetes Controller Manager waits for an update from a kubelet before marking the node unhealthy and adding the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint to the node. default-not-ready-toleration-seconds Specifies the amount of time in seconds after marking a node unhealthy that the Kube API Server Operator waits before evicting pods from that node. default-unreachable-toleration-seconds Specifies the amount of time in seconds after marking a node unreachable that the Kube API Server Operator waits before evicting pods from that node. 
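Taken together, these parameters bound how long a pod can remain on a node that has stopped reporting. A minimal sketch, using the per-profile values listed in the profile tables in this chapter; the `eviction_window` helper is illustrative, not part of any OpenShift API:

```shell
# Worst-case seconds from the last kubelet status update to pod eviction:
# node-monitor-grace-period + default-not-ready/unreachable-toleration-seconds.
# Values are copied from the profile tables in this chapter.
eviction_window() {
  case "$1" in
    Default)                     echo $(( 40 + 300 )) ;;  # 40s grace + 300s toleration
    MediumUpdateAverageReaction) echo $(( 120 + 60 )) ;;  # 2m grace + 60s toleration
    LowUpdateSlowReaction)       echo $(( 300 + 60 )) ;;  # 5m grace + 60s toleration
    *)                           return 1 ;;             # unknown profile name
  esac
}

eviction_window Default   # → 340
```

Note how the non-default profiles trade slower detection (a longer grace period) for a much shorter toleration, so the total window stays comparable while tolerating longer gaps between status updates.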
The following Operators monitor the changes to the worker latency profiles and respond accordingly: The Machine Config Operator (MCO) updates the node-status-update-frequency parameter on the worker nodes. The Kubernetes Controller Manager updates the node-monitor-grace-period parameter on the control plane nodes. The Kubernetes API Server Operator updates the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds parameters on the control plane nodes. Although the default configuration works in most cases, OpenShift Container Platform offers two other worker latency profiles for situations where the network is experiencing higher latency than usual. The three worker latency profiles are described in the following sections: Default worker latency profile With the Default profile, each Kubelet updates its status every 10 seconds ( node-status-update-frequency ). The Kube Controller Manager checks the statuses of Kubelet every 5 seconds. The Kubernetes Controller Manager waits 40 seconds ( node-monitor-grace-period ) for a status update from Kubelet before considering the Kubelet unhealthy. If no status is made available to the Kubernetes Controller Manager, it then marks the node with the node.kubernetes.io/not-ready or node.kubernetes.io/unreachable taint and evicts the pods on that node. If a pod is on a node that has the NoExecute taint, the pod runs according to tolerationSeconds . If the node has no taint, it will be evicted in 300 seconds ( default-not-ready-toleration-seconds and default-unreachable-toleration-seconds settings of the Kube API Server ). 
Profile Component Parameter Value Default kubelet node-status-update-frequency 10s Kubernetes Controller Manager node-monitor-grace-period 40s Kubernetes API Server Operator default-not-ready-toleration-seconds 300s Kubernetes API Server Operator default-unreachable-toleration-seconds 300s Medium worker latency profile Use the MediumUpdateAverageReaction profile if the network latency is slightly higher than usual. The MediumUpdateAverageReaction profile reduces the frequency of kubelet updates to 20 seconds and changes the period that the Kubernetes Controller Manager waits for those updates to 2 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 2 minutes to consider a node unhealthy. In another minute, the eviction process starts. Profile Component Parameter Value MediumUpdateAverageReaction kubelet node-status-update-frequency 20s Kubernetes Controller Manager node-monitor-grace-period 2m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s Low worker latency profile Use the LowUpdateSlowReaction profile if the network latency is extremely high. The LowUpdateSlowReaction profile reduces the frequency of kubelet updates to 1 minute and changes the period that the Kubernetes Controller Manager waits for those updates to 5 minutes. The pod eviction period for a pod on that node is reduced to 60 seconds. If the pod has the tolerationSeconds parameter, the eviction waits for the period specified by that parameter. The Kubernetes Controller Manager waits for 5 minutes to consider a node unhealthy. In another minute, the eviction process starts. 
Profile Component Parameter Value LowUpdateSlowReaction kubelet node-status-update-frequency 1m Kubernetes Controller Manager node-monitor-grace-period 5m Kubernetes API Server Operator default-not-ready-toleration-seconds 60s Kubernetes API Server Operator default-unreachable-toleration-seconds 60s 18.2. Implementing worker latency profiles at cluster creation Important To edit the configuration of the installation program, first use the command openshift-install create manifests to create the default node manifest and other manifest YAML files. This file structure must exist before you can add workerLatencyProfile . The platform on which you are installing might have varying requirements. Refer to the Installing section of the documentation for your specific platform. The workerLatencyProfile must be added to the manifest in the following sequence: Create the manifest needed to build the cluster, using a folder name appropriate for your installation. Create a YAML file to define config.node . The file must be in the manifests directory. When defining workerLatencyProfile in the manifest for the first time, specify any of the profiles at cluster creation time: Default , MediumUpdateAverageReaction , or LowUpdateSlowReaction . Verification Here is an example manifest creation showing the spec.workerLatencyProfile Default value in the manifest file: USD openshift-install create manifests --dir=<cluster-install-dir> Edit the manifest and add the value. In this example, we use vi to show an example manifest file with the "Default" workerLatencyProfile value added: USD vi <cluster-install-dir>/manifests/config-node-default-profile.yaml Example output apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: "Default" 18.3. Using and changing worker latency profiles To change a worker latency profile to deal with network latency, edit the node.config object to add the name of the profile. 
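The manifest-creation sequence above can be rehearsed end to end without a live installer: write the config.node manifest into a manifests directory and confirm the profile value landed. The temporary directory below stands in for the real --dir value passed to openshift-install (an assumption for illustration):

```shell
# Create a stand-in for the installer's manifests directory and add the
# Node manifest that pins workerLatencyProfile at cluster creation time.
install_dir=$(mktemp -d)
mkdir -p "$install_dir/manifests"

cat > "$install_dir/manifests/config-node-default-profile.yaml" <<'EOF'
apiVersion: config.openshift.io/v1
kind: Node
metadata:
  name: cluster
spec:
  workerLatencyProfile: "Default"
EOF

# Sanity check: the profile value is present before the installer consumes it.
grep -c 'workerLatencyProfile: "Default"' \
  "$install_dir/manifests/config-node-default-profile.yaml"   # → 1
```

On a real installation, the same file would be created under the directory produced by openshift-install create manifests --dir=<cluster-install-dir>.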
You can change the profile at any time as latency increases or decreases. You must move one worker latency profile at a time. For example, you cannot move directly from the Default profile to the LowUpdateSlowReaction worker latency profile. You must move from the Default worker latency profile to the MediumUpdateAverageReaction profile first, then to LowUpdateSlowReaction . Similarly, when returning to the Default profile, you must move from the low profile to the medium profile first, then to Default . Note You can also configure worker latency profiles upon installing an OpenShift Container Platform cluster. Procedure To move from the default worker latency profile: Move to the medium worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Add spec.workerLatencyProfile: MediumUpdateAverageReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1 # ... 1 Specifies the medium worker latency policy. Scheduling on each worker node is disabled as the change is being applied. 
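Because profiles must be changed one step at a time, a small pre-flight check before running oc edit can catch an invalid jump. A sketch, assuming only the three profile names shown above; the valid_transition helper is hypothetical, not an OpenShift command:

```shell
# Allow only moves between adjacent profiles in the
# Default -> MediumUpdateAverageReaction -> LowUpdateSlowReaction ordering.
valid_transition() {
  local order="Default MediumUpdateAverageReaction LowUpdateSlowReaction"
  local i=0 from=-1 to=-1 p
  for p in $order; do
    [ "$p" = "$1" ] && from=$i
    [ "$p" = "$2" ] && to=$i
    i=$((i + 1))
  done
  [ "$from" -ge 0 ] && [ "$to" -ge 0 ] || return 1   # unknown profile name
  local step=$((to - from))
  [ "$step" -eq 1 ] || [ "$step" -eq -1 ]            # exactly one step either way
}

valid_transition Default MediumUpdateAverageReaction && echo "one step: allowed"
valid_transition Default LowUpdateSlowReaction || echo "two steps: go via Medium first"
```

A wrapper script could run this check and only then invoke oc edit nodes.config/cluster.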
Optional: Move to the low worker latency profile: Edit the node.config object: USD oc edit nodes.config/cluster Change the spec.workerLatencyProfile value to LowUpdateSlowReaction : Example node.config object apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: "true" include.release.openshift.io/self-managed-high-availability: "true" include.release.openshift.io/single-node-developer: "true" release.openshift.io/create-only: "true" creationTimestamp: "2022-07-08T16:02:51Z" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: "1865" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1 # ... 1 Specifies use of the low worker latency policy. Scheduling on each worker node is disabled as the change is being applied. Verification When all nodes return to the Ready condition, you can use the following command to look in the Kubernetes Controller Manager to ensure it was applied: USD oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5 Example output # ... - lastTransitionTime: "2022-07-11T19:47:10Z" reason: ProfileUpdated status: "False" type: WorkerLatencyProfileProgressing - lastTransitionTime: "2022-07-11T19:47:10Z" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: "True" type: WorkerLatencyProfileComplete - lastTransitionTime: "2022-07-11T19:20:11Z" reason: AsExpected status: "False" type: WorkerLatencyProfileDegraded - lastTransitionTime: "2022-07-11T19:20:36Z" status: "False" # ... 1 Specifies that the profile is applied and active. To change the medium profile to default or change the default to medium, edit the node.config object and set the spec.workerLatencyProfile parameter to the appropriate value. 18.4. 
Example steps for displaying resulting values of workerLatencyProfile You can display the values in the workerLatencyProfile with the following commands. Verification Check the default-not-ready-toleration-seconds and default-unreachable-toleration-seconds fields output by the Kube API Server: USD oc get KubeAPIServer -o yaml | grep -A 1 default- Example output default-not-ready-toleration-seconds: - "300" default-unreachable-toleration-seconds: - "300" Check the values of the node-monitor-grace-period field from the Kube Controller Manager: USD oc get KubeControllerManager -o yaml | grep -A 1 node-monitor Example output node-monitor-grace-period: - 40s Check the nodeStatusUpdateFrequency value from the Kubelet. Set the directory /host as the root directory within the debug shell. By changing the root directory to /host , you can run binaries contained in the host's executable paths: USD oc debug node/<worker-node-name> USD chroot /host # cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency Example output "nodeStatusUpdateFrequency": "10s" These outputs validate the set of timing variables for the Worker Latency Profile.
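The kubelet check can be rehearsed without a debug node by running the same filter against a sample kubelet.conf fragment. The JSON below is a stand-in for /etc/kubernetes/kubelet.conf on a worker, trimmed to the field of interest:

```shell
# Stand-in for /etc/kubernetes/kubelet.conf; only the relevant field is shown.
kubelet_conf=$(mktemp)
cat > "$kubelet_conf" <<'EOF'
{
  "kind": "KubeletConfiguration",
  "nodeStatusUpdateFrequency": "10s"
}
EOF

# Same grep as the in-cluster check, then strip the JSON quoting.
freq=$(grep -o '"nodeStatusUpdateFrequency": "[^"]*"' "$kubelet_conf" | cut -d'"' -f4)
echo "$freq"   # → 10s
```

On a live cluster, the grep would instead run inside `oc debug node/<worker-node-name>` after `chroot /host`, as shown above.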
[ "openshift-install create manifests --dir=<cluster-install-dir>", "vi <cluster-install-dir>/manifests/config-node-default-profile.yaml", "apiVersion: config.openshift.io/v1 kind: Node metadata: name: cluster spec: workerLatencyProfile: \"Default\"", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: 
\"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "oc get KubeAPIServer -o yaml | grep -A 1 default-", "default-not-ready-toleration-seconds: - \"300\" default-unreachable-toleration-seconds: - \"300\"", "oc get KubeControllerManager -o yaml | grep -A 1 node-monitor", "node-monitor-grace-period: - 40s", "oc debug node/<worker-node-name> chroot /host cat /etc/kubernetes/kubelet.conf|grep nodeStatusUpdateFrequency", "\"nodeStatusUpdateFrequency\": \"10s\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/scalability_and_performance/scaling-worker-latency-profiles
Chapter 5. Testing the configured back ends
Chapter 5. Testing the configured back ends After you deploy the back ends to the overcloud, test that you can successfully create volumes on them. Procedure Run the following command as the stack user to load the environment variables defined in /home/stack/overcloudrc : Note For more information, see Accessing the overcloud in the Director Installation and Usage guide. Create a volume type for each back end. Log in to the Controller node of the overcloud as the stack user and run the following command: These commands create the volume types backend1 and backend2 , one for each back end that is defined with the cinder::config::cinder_config class of the environment file that you created. Map each volume type to the volume_backend_name of a back end that is enabled with the cinder_user_enabled_backends class of the environment file that you created. The following commands map the volume type backend1 to netapp1 and backend2 to netapp2 : Run the following command to test that it is possible to create a volume on the netapp1 back end by invoking the backend1 volume type: Create a similar volume on the netapp2 back end by invoking the backend2 volume type:
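The whole procedure condenses into one short script. Because the cinder commands need a live overcloud, this sketch wraps them in a DRY_RUN guard that only prints what would run; the run wrapper and the DRY_RUN flag are illustrative, not part of the OpenStack CLI:

```shell
# Print each cinder command instead of executing it; unset DRY_RUN on a
# deployed overcloud (after sourcing /home/stack/overcloudrc).
DRY_RUN=1
run() {
  if [ -n "${DRY_RUN:-}" ]; then echo "cinder $*"; else cinder "$@"; fi
}

run type-create backend1
run type-create backend2
run type-key backend1 set volume_backend_name=netapp1
run type-key backend2 set volume_backend_name=netapp2
run create --volume-type backend1 --display_name netappvolume_1 1
run create --volume-type backend2 --display_name netappvolume_2 1
```

With DRY_RUN unset, each line executes the corresponding cinder command from the procedure above in order.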
[ "source /home/stack/overcloudrc", "cinder type-create backend1 cinder type-create backend2", "cinder type-key backend1 set volume_backend_name=netapp1 cinder type-key backend2 set volume_backend_name=netapp2", "cinder create --volume-type backend1 --display_name netappvolume_1 1", "cinder create --volume-type backend2 --display_name netappvolume_2 1" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/custom_block_storage_back_end_deployment_guide/proc_testing-configured-back-ends_custom-cinder-back-end
Providing feedback on JBoss EAP documentation
Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Include the document URL , the section number, and a description of the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_4.0.0_release_notes/proc_providing-feedback-on-red-hat-documentation_default
Chapter 2. Configuring the Cluster Samples Operator
Chapter 2. Configuring the Cluster Samples Operator The Cluster Samples Operator, which operates in the openshift namespace, installs and updates the Red Hat Enterprise Linux (RHEL)-based OpenShift Container Platform image streams and OpenShift Container Platform templates. Cluster Samples Operator is being downsized Starting from OpenShift Container Platform 4.13, Cluster Samples Operator is downsized. Cluster Samples Operator will stop providing the following updates for non-Source-to-Image (Non-S2I) image streams and templates: new image streams and templates updates to the existing image streams and templates unless it is a CVE update Cluster Samples Operator will provide support for Non-S2I image streams and templates as per the OpenShift Container Platform lifecycle policy dates and support guidelines . Cluster Samples Operator will continue to support the S2I builder image streams and templates and accept the updates. S2I image streams and templates include: Ruby Python Node.js Perl PHP HTTPD Nginx EAP Java Webserver .NET Go Starting from OpenShift Container Platform 4.16, Cluster Samples Operator will stop managing non-S2I image streams and templates. You can contact the image stream or template owner for any requirements and future plans. In addition, refer to the list of the repositories hosting the image stream or templates . 2.1. Understanding the Cluster Samples Operator During installation, the Operator creates the default configuration object for itself and then creates the sample image streams and templates, including quick start templates. Note To facilitate image stream imports from other registries that require credentials, a cluster administrator can create any additional secrets that contain the content of a Docker config.json file in the openshift namespace needed for image import. The Cluster Samples Operator configuration is a cluster-wide resource, and the deployment is contained within the openshift-cluster-samples-operator namespace. 
The image for the Cluster Samples Operator contains image stream and template definitions for the associated OpenShift Container Platform release. When each sample is created or updated, the Cluster Samples Operator includes an annotation that denotes the version of OpenShift Container Platform. The Operator uses this annotation to ensure that each sample matches the release version. Samples outside of its inventory are ignored, as are skipped samples. Modifications to any samples that are managed by the Operator, where that version annotation is modified or deleted, are reverted automatically. Note The Jenkins images are part of the image payload from installation and are tagged into the image streams directly. The Cluster Samples Operator configuration resource includes a finalizer which cleans up the following upon deletion: Operator managed image streams. Operator managed templates. Operator generated configuration resources. Cluster status resources. Upon deletion of the samples resource, the Cluster Samples Operator recreates the resource using the default configuration. 2.1.1. Cluster Samples Operator's use of management state The Cluster Samples Operator is bootstrapped as Managed by default or if global proxy is configured. In the Managed state, the Cluster Samples Operator is actively managing its resources and keeping the component active in order to pull sample image streams and images from the registry and ensure that the requisite sample templates are installed. Certain circumstances result in the Cluster Samples Operator bootstrapping itself as Removed including: If the Cluster Samples Operator cannot reach registry.redhat.io after three minutes on initial startup after a clean installation. If the Cluster Samples Operator detects it is on an IPv6 network. If the image controller configuration parameters prevent the creation of image streams by using the default image registry, or by using the image registry specified by the samplesRegistry setting . 
Note For OpenShift Container Platform, the default image registry is registry.redhat.io . However, if the Cluster Samples Operator detects that it is on an IPv6 network and an OpenShift Container Platform global proxy is configured, then the IPv6 check supersedes all the checks. As a result, the Cluster Samples Operator bootstraps itself as Removed . Important IPv6 installations are not currently supported by registry.redhat.io . The Cluster Samples Operator pulls most of the sample image streams and images from registry.redhat.io . 2.1.1.1. Restricted network installation Bootstrapping as Removed when unable to access registry.redhat.io facilitates restricted network installations when the network restriction is already in place. Bootstrapping as Removed when network access is restricted allows the cluster administrator more time to decide if samples are desired, because the Cluster Samples Operator does not submit alerts that sample image stream imports are failing when the management state is set to Removed . When the Cluster Samples Operator comes up as Managed and attempts to install sample image streams, it starts alerting two hours after initial installation if there are failing imports. 2.1.1.2. Restricted network installation with initial network access Conversely, if a cluster that is intended to be a restricted network or disconnected cluster is first installed while network access exists, the Cluster Samples Operator installs the content from registry.redhat.io since it can access it. If you want the Cluster Samples Operator to still bootstrap as Removed in order to defer samples installation until you have decided which samples are desired, set up image mirrors, and so on, then follow the instructions for using the Samples Operator with an alternate registry and customizing nodes, both linked in the additional resources section, to override the Cluster Samples Operator default configuration and initially come up as Removed . 
You must put the following additional YAML file in the openshift directory created by openshift-install create manifests : Example Cluster Samples Operator YAML file with managementState: Removed apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed 2.1.2. Cluster Samples Operator's tracking and error recovery of image stream imports After creation or update of a samples image stream, the Cluster Samples Operator monitors the progress of each image stream tag's image import. If an import fails, the Cluster Samples Operator retries the import through the image stream image import API, which is the same API used by the oc import-image command, approximately every 15 minutes until it sees the import succeed, or if the Cluster Samples Operator's configuration is changed such that either the image stream is added to the skippedImagestreams list, or the management state is changed to Removed . Additional resources If the Cluster Samples Operator is removed during installation, you can use the Cluster Samples Operator with an alternate registry so content can be imported, and then set the Cluster Samples Operator to Managed to get the samples. To ensure the Cluster Samples Operator bootstraps as Removed in a restricted network installation with initial network access to defer samples installation until you have decided which samples are desired, follow the instructions for customizing nodes to override the Cluster Samples Operator default configuration and initially come up as Removed . To host samples in your disconnected environment, follow the instructions for using the Cluster Samples Operator with an alternate registry . 2.1.3. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. 
The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or disconnected environment may require access to services external to your network. Some example services include: GitHub, Maven Central, npm, RubyGems, PyPI, and others. There might be additional steps to take that allow the Cluster Samples Operator's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. See Using Cluster Samples Operator image streams with alternate or mirrored registries for a detailed procedure. 2.2. Cluster Samples Operator configuration parameters The samples resource offers the following configuration fields: Parameter Description managementState Managed : The Cluster Samples Operator updates the samples as the configuration dictates. Unmanaged : The Cluster Samples Operator ignores updates to its configuration resource object and any image streams or templates in the openshift namespace. 
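A sketch of turning the config map's keys into a mirror worklist. The sample keys below are illustrative; on a cluster the data would come from `oc get configmap imagestreamtag-to-image -n openshift-cluster-samples-operator -o json`, and the split assumes image stream names contain no underscore:

```shell
# Split a <image_stream_name>_<image_stream_tag_name> key at its first underscore.
split_key() {
  echo "image stream: ${1%%_*}, tag: ${1#*_}"
}

# Illustrative keys in the format used by the config map's data field.
for key in httpd_2.4-ubi8 ruby_3.1-ubi8; do
  split_key "$key"
done
# → image stream: httpd, tag: 2.4-ubi8
# → image stream: ruby, tag: 3.1-ubi8
```

The recovered stream and tag names can then drive whatever mirroring tooling you use, and any stream you choose not to mirror goes into the skippedImagestreams list as described above.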
Removed : The Cluster Samples Operator removes the set of Managed image streams and templates in the openshift namespace. It ignores new samples created by the cluster administrator or any samples in the skipped lists. After the removals are complete, the Cluster Samples Operator works like it is in the Unmanaged state and ignores any watch events on the sample resources, image streams, or templates. samplesRegistry Allows you to specify which registry is accessed by image streams for their image content. samplesRegistry defaults to registry.redhat.io for OpenShift Container Platform. Note Creation or update of RHEL content does not commence if the secret for pull access is not in place when either Samples Registry is not explicitly set, leaving an empty string, or when it is set to registry.redhat.io. In both cases, image imports work off of registry.redhat.io, which requires credentials. Creation or update of RHEL content is not gated by the existence of the pull secret if the Samples Registry is overridden to a value other than the empty string or registry.redhat.io. architectures Placeholder to choose an architecture type. skippedImagestreams Image streams that are in the Cluster Samples Operator's inventory but that the cluster administrator wants the Operator to ignore or not manage. You can add a list of image stream names to this parameter. For example, ["httpd","perl"] . skippedTemplates Templates that are in the Cluster Samples Operator's inventory, but that the cluster administrator wants the Operator to ignore or not manage. Secret, image stream, and template watch events can come in before the initial samples resource object is created; when they do, the Cluster Samples Operator detects and re-queues the event. 2.2.1. Configuration restrictions When the Cluster Samples Operator starts supporting multiple architectures, the architecture list is not allowed to be changed while in the Managed state. 
To change the architectures values, a cluster administrator must: Mark the Management State as Removed , saving the change. In a subsequent change, edit the architecture and change the Management State back to Managed . The Cluster Samples Operator still processes secrets while in Removed state. You can create the secret before switching to Removed , while in Removed before switching to Managed , or after switching to Managed state. There are delays in creating the samples until the secret event is processed if you create the secret after switching to Managed . This helps facilitate the changing of the registry, where you choose to remove all the samples before switching to ensure a clean slate. Removing all samples before switching is not required. 2.2.2. Conditions The samples resource maintains the following conditions in its status: Condition Description SamplesExists Indicates the samples are created in the openshift namespace. ImageChangesInProgress True when image streams are created or updated, but not all of the tag spec generations and tag status generations match. False when all of the generations match, or unrecoverable errors occurred during import, the last seen error is in the message field. The list of pending image streams is in the reason field. This condition is deprecated in OpenShift Container Platform. ConfigurationValid True or False based on whether any of the restricted changes noted previously are submitted. RemovePending Indicator that there is a Management State: Removed setting pending, but the Cluster Samples Operator is waiting for the deletions to complete. ImportImageErrorsExist Indicator of which image streams had errors during the image import phase for one of their tags. True when an error has occurred. The list of image streams with an error is in the reason field. The details of each error reported are in the message field. 
MigrationInProgress True when the Cluster Samples Operator detects that the version is different than the Cluster Samples Operator version with which the current samples set are installed. This condition is deprecated in OpenShift Container Platform. 2.3. Accessing the Cluster Samples Operator configuration You can configure the Cluster Samples Operator by editing the file with the provided parameters. Prerequisites Install the OpenShift CLI ( oc ). Procedure Access the Cluster Samples Operator configuration: USD oc edit configs.samples.operator.openshift.io/cluster -o yaml The Cluster Samples Operator configuration resembles the following example: apiVersion: samples.operator.openshift.io/v1 kind: Config # ... 2.4. Removing deprecated image stream tags from the Cluster Samples Operator The Cluster Samples Operator leaves deprecated image stream tags in an image stream because users can have deployments that use the deprecated image stream tags. You can remove deprecated image stream tags by editing the image stream with the oc tag command. Note Deprecated image stream tags that the samples providers have removed from their image streams are not included on initial installations. Prerequisites You installed the oc CLI. Procedure Remove deprecated image stream tags by editing the image stream with the oc tag command. USD oc tag -d <image_stream_name:tag> Example output Deleted tag default/<image_stream_name:tag>. Additional resources For more information about configuring credentials, see Using image pull secrets .
[ "apiVersion: samples.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: architectures: - x86_64 managementState: Removed", "oc edit configs.samples.operator.openshift.io/cluster -o yaml", "apiVersion: samples.operator.openshift.io/v1 kind: Config", "oc tag -d <image_stream_name:tag>", "Deleted tag default/<image_stream_name:tag>." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/images/configuring-samples-operator
Chapter 13. Configuring IP failover
Chapter 13. Configuring IP failover This topic describes configuring IP failover for pods and services on your OpenShift Container Platform cluster. IP failover uses Keepalived to host a set of externally accessible Virtual IP (VIP) addresses on a set of hosts. Each VIP address is only serviced by a single host at a time. Keepalived uses the Virtual Router Redundancy Protocol (VRRP) to determine which host, from the set of hosts, services which VIP. If a host becomes unavailable, or if the service that Keepalived is watching does not respond, the VIP is switched to another host from the set. This means a VIP is always serviced as long as a host is available. Every VIP in the set is serviced by a node selected from the set. If a single node is available, the VIPs are served. There is no way to explicitly distribute the VIPs over the nodes, so there can be nodes with no VIPs and other nodes with many VIPs. If there is only one node, all VIPs are on it. The administrator must ensure that all of the VIP addresses meet the following requirements: Accessible on the configured hosts from outside the cluster. Not used for any other purpose within the cluster. Keepalived on each node determines whether the needed service is running. If it is, VIPs are supported and Keepalived participates in the negotiation to determine which node serves the VIP. For a node to participate, the service must be listening on the watch port on a VIP or the check must be disabled. Note Each VIP in the set might be served by a different node. IP failover monitors a port on each VIP to determine whether the port is reachable on the node. If the port is not reachable, the VIP is not assigned to the node. If the port is set to 0 , this check is suppressed. The check script does the needed testing. 
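As a rough sketch of what this port check amounts to, the logic below attempts a TCP connection to the monitor port on a VIP and treats port 0 as "always pass". This is an illustration, not the Keepalived implementation, and the host and port values are placeholders:

```shell
# Illustration of the monitoring check: attempt a TCP connection to the
# monitor port on a VIP. A port of 0 suppresses the check entirely.
check_vip_port() {
  host=$1
  port=$2
  if [ "$port" -eq 0 ]; then
    return 0    # check suppressed; always passes
  fi
  # timeout bounds the attempt; /dev/tcp is a bash pseudo-device
  timeout 2 bash -c "exec 3<>/dev/tcp/$host/$port" 2>/dev/null
}

check_vip_port 192.0.2.10 0 && echo "port 0: check suppressed"
```

A custom check script replaces this connection test with whatever probe the application needs.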
When a node running Keepalived passes the check script, the VIP on that node can enter the master state based on its priority, the priority of the current master, and the preemption strategy. A cluster administrator can provide a script through the OPENSHIFT_HA_NOTIFY_SCRIPT variable, and this script is called whenever the state of the VIP on the node changes. Keepalived uses the master state when it is servicing the VIP, the backup state when another node is servicing the VIP, or the fault state when the check script fails. The notify script is called with the new state whenever the state changes. You can create an IP failover deployment configuration on OpenShift Container Platform. The IP failover deployment configuration specifies the set of VIP addresses, and the set of nodes on which to service them. A cluster can have multiple IP failover deployment configurations, with each managing its own set of unique VIP addresses. Each node in the IP failover configuration runs an IP failover pod, and this pod runs Keepalived. When using VIPs to access a pod with host networking, the application pod runs on all nodes that are running the IP failover pods. This enables any of the IP failover nodes to become the master and service the VIPs when needed. If application pods are not running on all nodes with IP failover, either some IP failover nodes never service the VIPs or some application pods never receive any traffic. Use the same selector and replication count for both IP failover and the application pods to avoid this mismatch. When using VIPs to access a service, any of the nodes can be in the IP failover set of nodes, since the service is reachable on all nodes, no matter where the application pod is running. Any of the IP failover nodes can become master at any time. The service can either use external IPs and a service port or it can use a NodePort . Setting up a NodePort is a privileged operation.
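As an illustration of the external-IP variant (all names and addresses here are hypothetical), a service fronted by an IP failover VIP might look like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-openshift        # hypothetical service name
spec:
  selector:
    app: hello-openshift
  ports:
  - port: 80                   # OPENSHIFT_HA_MONITOR_PORT would match this
    targetPort: 8080
  externalIPs:
  - 1.2.3.4                    # one of the VIPs in OPENSHIFT_HA_VIRTUAL_IPS
```

With a service like this, the VIP 1.2.3.4 would be listed in OPENSHIFT_HA_VIRTUAL_IPS and the monitoring port would be set to the service port, 80.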
When using external IPs in the service definition, the VIPs are set to the external IPs, and the IP failover monitoring port is set to the service port. When using a node port, the port is open on every node in the cluster, and the service load-balances traffic from whatever node currently services the VIP. In this case, the IP failover monitoring port is set to the NodePort in the service definition. Important Even though a service VIP is highly available, performance can still be affected. Keepalived makes sure that each of the VIPs is serviced by some node in the configuration, and several VIPs can end up on the same node even when other nodes have none. Strategies that externally load-balance across a set of VIPs can be thwarted when IP failover puts multiple VIPs on the same node. When you use ExternalIP , you can set up IP failover to have the same VIP range as the ExternalIP range. You can also disable the monitoring port. In this case, all of the VIPs appear on the same node in the cluster. Any user can set up a service with an ExternalIP and make it highly available. Important There are a maximum of 254 VIPs in the cluster. 13.1. IP failover environment variables The following table contains the variables used to configure IP failover. Table 13.1. IP failover environment variables Variable Name Default Description OPENSHIFT_HA_MONITOR_PORT 80 The IP failover pod tries to open a TCP connection to this port on each Virtual IP (VIP). If a connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. OPENSHIFT_HA_NETWORK_INTERFACE The interface name that IP failover uses to send Virtual Router Redundancy Protocol (VRRP) traffic. The default value is eth0 . If your cluster uses the OVN-Kubernetes network plugin, set this value to br-ex to avoid packet loss.
For a cluster that uses the OVN-Kubernetes network plugin, none of the listening interfaces serve VRRP; inbound traffic is expected over the br-ex bridge instead. OPENSHIFT_HA_REPLICA_COUNT 2 The number of replicas to create. This must match the spec.replicas value in the IP failover deployment configuration. OPENSHIFT_HA_VIRTUAL_IPS The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . OPENSHIFT_HA_VRRP_ID_OFFSET 0 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. The default offset is 0 , and the allowed range is 0 through 255 . OPENSHIFT_HA_VIP_GROUPS The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIRTUAL_IPS variable. OPENSHIFT_HA_IPTABLES_CHAIN INPUT The name of the iptables chain in which an iptables rule is automatically added to allow the VRRP traffic. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created. OPENSHIFT_HA_CHECK_SCRIPT The full path name in the pod file system of a script that is periodically run to verify the application is operating. OPENSHIFT_HA_CHECK_INTERVAL 2 The period, in seconds, that the check script is run. OPENSHIFT_HA_NOTIFY_SCRIPT The full path name in the pod file system of a script that is run whenever the state changes. OPENSHIFT_HA_PREEMPTION preempt_delay 300 The strategy for handling a new higher priority host. The nopreempt strategy does not move master from the lower priority host to the higher priority host. 13.2. Configuring IP failover in your cluster As a cluster administrator, you can configure IP failover on an entire cluster, or on a subset of nodes, as defined by the label selector. You can also configure multiple IP failover deployments in your cluster, where each one is independent of the others.
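The OPENSHIFT_HA_VIRTUAL_IPS notation shown in Table 13.1 is compact. The sketch below shows how a value such as 1.2.3.4-6,1.2.3.9 expands into individual VIPs, under the assumption that a range only varies the last octet:

```shell
# Expand an OPENSHIFT_HA_VIRTUAL_IPS-style value into individual VIPs.
# Assumption: a range such as 1.2.3.4-6 only varies the last octet.
expand_vips() {
  saved_ifs=$IFS
  IFS=','
  for item in $1; do
    case $item in
    *-*)
      base=${item%.*}          # e.g. 1.2.3
      range=${item##*.}        # e.g. 4-6
      first=${range%-*}
      last=${range#*-}
      i=$first
      while [ "$i" -le "$last" ]; do
        echo "$base.$i"
        i=$((i + 1))
      done
      ;;
    *)
      echo "$item"
      ;;
    esac
  done
  IFS=$saved_ifs
}

expand_vips "1.2.3.4-6,1.2.3.9"
```

Run against the example value from the table, this prints 1.2.3.4, 1.2.3.5, 1.2.3.6, and 1.2.3.9, one per line.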
The IP failover deployment ensures that a failover pod runs on each of the nodes matching the constraints or the label used. This pod runs Keepalived, which can monitor an endpoint and use Virtual Router Redundancy Protocol (VRRP) to fail over the virtual IP (VIP) from one node to another if the first node cannot reach the service or endpoint. For production use, set a selector that selects at least two nodes, and set replicas equal to the number of selected nodes. Prerequisites You are logged in to the cluster as a user with cluster-admin privileges. You created a pull secret. Red Hat OpenStack Platform (RHOSP) only: You installed an RHOSP client (RHCOS documentation) on the target environment. You also downloaded the RHOSP openrc.sh rc file (RHCOS documentation) . Procedure Create an IP failover service account: USD oc create sa ipfailover Update security context constraints (SCC) for hostNetwork : USD oc adm policy add-scc-to-user privileged -z ipfailover USD oc adm policy add-scc-to-user hostnetwork -z ipfailover Red Hat OpenStack Platform (RHOSP) only: Complete the following steps to make a failover VIP address reachable on RHOSP ports. Use the RHOSP CLI to show the default RHOSP API and VIP addresses in the allowed_address_pairs parameter of your RHOSP cluster: USD openstack port show <cluster_name> -c allowed_address_pairs Output example *Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb' Set a different VIP address for the IP failover deployment and make the address reachable on RHOSP ports by entering the following command in the RHOSP CLI. Do not set any default RHOSP API and VIP addresses as the failover VIP address for the IP failover deployment. Example of adding the 1.1.1.1 failover IP address as an allowed address on RHOSP ports. 
USD openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:fa:16:3e:31:f9:cb Create a deployment YAML file to configure IP failover for your deployment. See "Example deployment YAML for IP failover configuration" in a later step. Specify the following specification in the IP failover deployment so that you pass the failover VIP address to the OPENSHIFT_HA_VIRTUAL_IPS environment variable: Example of adding the 1.1.1.1 VIP address to OPENSHIFT_HA_VIRTUAL_IPS apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived # ... spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: "1.1.1.1" # ... Create a deployment YAML file to configure IP failover. Note For Red Hat OpenStack Platform (RHOSP), you do not need to re-create the deployment YAML file. You already created this file as part of the earlier instructions. Example deployment YAML for IP failover configuration apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: "" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.12 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: "ipfailover" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: "1.1.1.1-2" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: "10" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: 
"ens3" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: "30060" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: "0" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: "2" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: "false" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: "10.0.148.40,10.0.160.234,10.0.199.110" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: "INPUT" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: "/etc/keepalive/mycheckscript.sh" - name: OPENSHIFT_HA_PREEMPTION 11 value: "preempt_delay 300" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: "2" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13 1 The name of the IP failover deployment. 2 The list of IP address ranges to replicate. This must be provided. For example, 1.2.3.4-6,1.2.3.9 . 3 The number of groups to create for VRRP. If not set, a group is created for each virtual IP range specified with the OPENSHIFT_HA_VIP_GROUPS variable. 4 The interface name that IP failover uses to send VRRP traffic. By default, eth0 is used. 5 The IP failover pod tries to open a TCP connection to this port on each VIP. If connection is established, the service is considered to be running. If this port is set to 0 , the test always passes. The default value is 80 . 6 The offset value used to set the virtual router IDs. Using different offset values allows multiple IP failover configurations to exist within the same cluster. 
The default offset is 0 , and the allowed range is 0 through 255 . 7 The number of replicas to create. This must match spec.replicas value in IP failover deployment configuration. The default value is 2 . 8 The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule is not added. If the chain does not exist, it is not created, and Keepalived operates in unicast mode. The default is INPUT . 9 The full path name in the pod file system of a script that is run whenever the state changes. 10 The full path name in the pod file system of a script that is periodically run to verify the application is operating. 11 The strategy for handling a new higher priority host. The default value is preempt_delay 300 , which causes a Keepalived instance to take over a VIP after 5 minutes if a lower-priority master is holding the VIP. 12 The period, in seconds, that the check script is run. The default value is 2 . 13 Create the pull secret before creating the deployment, otherwise you will get an error when creating the deployment. 13.3. Configuring check and notify scripts Keepalived monitors the health of the application by periodically running an optional user-supplied check script. For example, the script can test a web server by issuing a request and verifying the response. As cluster administrator, you can provide an optional notify script, which is called whenever the state changes. The check and notify scripts run in the IP failover pod and use the pod file system, not the host file system. However, the IP failover pod makes the host file system available under the /hosts mount path. When configuring a check or notify script, you must provide the full path to the script. The recommended approach for providing the scripts is to use a ConfigMap object. 
The full path names of the check and notify scripts are added to the Keepalived configuration file, /etc/keepalived/keepalived.conf , which is loaded every time Keepalived starts. The scripts can be added to the pod with a ConfigMap object as described in the following methods. Check script When a check script is not provided, a simple default script is run that tests the TCP connection. This default test is suppressed when the monitor port is 0 . Each IP failover pod manages a Keepalived daemon that manages one or more virtual IP (VIP) addresses on the node where the pod is running. The Keepalived daemon keeps the state of each VIP for that node. A particular VIP on a particular node might be in master , backup , or fault state. If the check script returns non-zero, the node enters the backup state, and any VIPs it holds are reassigned. Notify script Keepalived passes the following three parameters to the notify script: USD1 - group or instance USD2 - Name of the group or instance USD3 - The new state: master , backup , or fault Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. Procedure Create the desired script and create a ConfigMap object to hold it. The script has no input arguments and must return 0 for OK and 1 for fail . The check script, mycheckscript.sh : #!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0 Create the ConfigMap object : USD oc create configmap mycustomcheck --from-file=mycheckscript.sh Add the script to the pod. The defaultMode for the mounted ConfigMap object files must allow the script to run. You can set it by using oc commands or by editing the deployment configuration.
A value of 0755 , 493 decimal, is typical: USD oc set env deploy/ipfailover-keepalived \ OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh USD oc set volume deploy/ipfailover-keepalived --add --overwrite \ --name=config-volume \ --mount-path=/etc/keepalive \ --source='{"configMap": { "name": "mycustomcheck", "defaultMode": 493}}' Note The oc set env command is whitespace sensitive. There must be no whitespace on either side of the = sign. Tip You can alternatively edit the ipfailover-keepalived deployment configuration: USD oc edit deploy ipfailover-keepalived spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh ... volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst ... volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume ... 1 In the spec.container.env field, add the OPENSHIFT_HA_CHECK_SCRIPT environment variable to point to the mounted script file. 2 Add the spec.container.volumeMounts field to create the mount point. 3 Add a new spec.volumes field to mention the config map. 4 This sets run permission on the files. When read back, it is displayed in decimal, 493 . Save the changes and exit the editor. This restarts ipfailover-keepalived . 13.4. Configuring VRRP preemption When a Virtual IP (VIP) on a node leaves the fault state by passing the check script, the VIP on the node enters the backup state if it has lower priority than the VIP on the node that is currently in the master state. The nopreempt strategy does not move master from the lower priority VIP on the host to the higher priority VIP on the host. With preempt_delay 300 , the default, Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. Procedure To specify preemption enter oc edit deploy ipfailover-keepalived to edit the router deployment configuration: USD oc edit deploy ipfailover-keepalived ... 
spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300 ... 1 Set the OPENSHIFT_HA_PREEMPTION value: preempt_delay 300 : Keepalived waits the specified 300 seconds and moves master to the higher priority VIP on the host. This is the default value. nopreempt : does not move master from the lower priority VIP on the host to the higher priority VIP on the host. 13.5. Deploying multiple IP failover instances Each IP failover pod managed by the IP failover deployment configuration, 1 pod per node or replica, runs a Keepalived daemon. As more IP failover deployment configurations are configured, more pods are created and more daemons join into the common Virtual Router Redundancy Protocol (VRRP) negotiation. This negotiation is done by all the Keepalived daemons and it determines which nodes service which virtual IPs (VIP). Internally, Keepalived assigns a unique vrrp-id to each VIP. The negotiation uses this set of vrrp-ids ; when a decision is made, the VIP corresponding to the winning vrrp-id is serviced on the winning node. Therefore, for every VIP defined in the IP failover deployment configuration, the IP failover pod must assign a corresponding vrrp-id . This is done by starting at OPENSHIFT_HA_VRRP_ID_OFFSET and sequentially assigning the vrrp-ids to the list of VIPs. The vrrp-ids can have values in the range 1..255 . When there are multiple IP failover deployment configurations, you must specify OPENSHIFT_HA_VRRP_ID_OFFSET so that there is room to increase the number of VIPs in the deployment configuration and none of the vrrp-id ranges overlap. 13.6. Configuring IP failover for more than 254 addresses IP failover management is limited to 254 groups of Virtual IP (VIP) addresses. By default, OpenShift Container Platform assigns one IP address to each group.
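The vrrp-id allocation described in "Deploying multiple IP failover instances" can be sketched as follows. The arithmetic is an illustration under the assumption that ids are assigned starting at the offset plus one, one per VIP; it is useful for checking that the ranges of two deployments do not overlap:

```shell
# Sketch of vrrp-id consumption per IP failover deployment.
# Assumption: ids start at OPENSHIFT_HA_VRRP_ID_OFFSET + 1 and are
# assigned sequentially, one per VIP, staying within 1..255.
vrrp_range() {
  offset=$1
  vip_count=$2
  first=$((offset + 1))
  last=$((offset + vip_count))
  if [ "$last" -gt 255 ]; then
    echo "error: vrrp-ids would exceed 255" >&2
    return 1
  fi
  echo "$first-$last"
}

vrrp_range 0 7    # first deployment, 7 VIPs: ids 1-7
vrrp_range 10 5   # second deployment; the offset keeps the ranges disjoint
```

Choosing offsets so the printed ranges never overlap is exactly the planning the OPENSHIFT_HA_VRRP_ID_OFFSET variable exists for.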
You can use the OPENSHIFT_HA_VIP_GROUPS variable to change this so multiple IP addresses are in each group and define the number of VIP groups available for each Virtual Router Redundancy Protocol (VRRP) instance when configuring IP failover. Grouping VIPs creates a wider range of allocation of VIPs per VRRP in the case of VRRP failover events, and is useful when all hosts in the cluster have access to a service locally. For example, when a service is being exposed with an ExternalIP . Note As a rule for failover, do not limit services, such as the router, to one specific host. Instead, services should be replicated to each host so that in the case of IP failover, the services do not have to be recreated on the new host. Note If you are using OpenShift Container Platform health checks, the nature of IP failover and groups means that not all instances in the group are checked. For that reason, the Kubernetes health checks must be used to ensure that services are live. Prerequisites You are logged in to the cluster with a user with cluster-admin privileges. Procedure To change the number of IP addresses assigned to each group, change the value for the OPENSHIFT_HA_VIP_GROUPS variable, for example: Example Deployment YAML for IP failover configuration ... spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: "3" ... 1 If OPENSHIFT_HA_VIP_GROUPS is set to 3 in an environment with seven VIPs, it creates three groups, assigning three VIPs to the first group, and two VIPs to the two remaining groups. Note If the number of groups set by OPENSHIFT_HA_VIP_GROUPS is fewer than the number of IP addresses set to fail over, the group contains more than one IP address, and all of the addresses move as a single unit. 13.7. High availability for ExternalIP In non-cloud clusters, IP failover and ExternalIP to a service can be combined. The result is high availability services for users that create services using ExternalIP .
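The grouping rule in "Configuring IP failover for more than 254 addresses" can be sketched as follows. The even-split-with-remainder behavior is inferred from the seven-VIP, three-group example in the text, so treat it as an illustration rather than the implementation:

```shell
# Distribute a VIP count across OPENSHIFT_HA_VIP_GROUPS groups.
# Inferred rule: each group gets vips/groups VIPs, and the remainder
# goes to the earliest groups (7 VIPs, 3 groups -> 3, 2, 2).
distribute_vips() {
  vips=$1
  groups=$2
  base=$((vips / groups))
  rem=$((vips % groups))
  g=1
  while [ "$g" -le "$groups" ]; do
    if [ "$g" -le "$rem" ]; then
      echo "group $g: $((base + 1)) VIPs"
    else
      echo "group $g: $base VIPs"
    fi
    g=$((g + 1))
  done
}

distribute_vips 7 3
```

For the seven-VIP, three-group case this prints group 1 with 3 VIPs and groups 2 and 3 with 2 VIPs each, matching the example above.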
The approach is to specify a spec.ExternalIP.autoAssignCIDRs range in the cluster network configuration, and then use the same range in creating the IP failover configuration. Because IP failover can support up to a maximum of 255 VIPs for the entire cluster, the spec.ExternalIP.autoAssignCIDRs must be /24 or smaller. Additional resources Configuration for ExternalIP Kubernetes documentation on ExternalIP 13.8. Removing IP failover When IP failover is initially configured, the worker nodes in the cluster are modified with an iptables rule that explicitly allows multicast packets on 224.0.0.18 for Keepalived. Because of the change to the nodes, removing IP failover requires running a job to remove the iptables rule and removing the virtual IP addresses used by Keepalived. Procedure Optional: Identify and delete any check and notify scripts that are stored as config maps: Identify whether any pods for IP failover use a config map as a volume: USD oc get pod -l ipfailover \ -o jsonpath="\ {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: '}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\n'}{end} {end}" Example output Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck If the preceding step provided the names of config maps that are used as volumes, delete the config maps: USD oc delete configmap <configmap_name> Identify an existing deployment for IP failover: USD oc get deployment -l ipfailover Example output NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d Delete the deployment: USD oc delete deployment <ipfailover_deployment_name> Remove the ipfailover service account: USD oc delete sa ipfailover Run a job that removes the IP tables rule that was added when IP failover was initially configured: Create a file such as remove-ipfailover-job.yaml with contents that are similar to the following example: apiVersion: batch/v1 kind: Job metadata: generateName:
remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.12 command: ["/var/lib/ipfailover/keepalived/remove-failover.sh"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never 1 The nodeSelector is likely the same as the selector used in the old IP failover deployment. 2 Run the job for each node in your cluster that was configured for IP failover and replace the hostname each time. Run the job: USD oc create -f remove-ipfailover-job.yaml Example output job.batch/remove-ipfailover-2h8dm created Verification Confirm that the job removed the initial configuration for IP failover. USD oc logs job/remove-ipfailover-2h8dm Example output remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module ... - Cleaning up ... - Releasing VIPs (interface eth0) ...
[ "oc create sa ipfailover", "oc adm policy add-scc-to-user privileged -z ipfailover", "oc adm policy add-scc-to-user hostnetwork -z ipfailover", "openstack port show <cluster_name> -c allowed_address_pairs", "*Field* *Value* allowed_address_pairs ip_address='192.168.0.5', mac_address='fa:16:3e:31:f9:cb' ip_address='192.168.0.7', mac_address='fa:16:3e:31:f9:cb'", "openstack port set <cluster_name> --allowed-address ip-address=1.1.1.1,mac-address=fa:fa:16:3e:31:f9:cb", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived spec: env: - name: OPENSHIFT_HA_VIRTUAL_IPS value: \"1.1.1.1\"", "apiVersion: apps/v1 kind: Deployment metadata: name: ipfailover-keepalived 1 labels: ipfailover: hello-openshift spec: strategy: type: Recreate replicas: 2 selector: matchLabels: ipfailover: hello-openshift template: metadata: labels: ipfailover: hello-openshift spec: serviceAccountName: ipfailover privileged: true hostNetwork: true nodeSelector: node-role.kubernetes.io/worker: \"\" containers: - name: openshift-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.12 ports: - containerPort: 63000 hostPort: 63000 imagePullPolicy: IfNotPresent securityContext: privileged: true volumeMounts: - name: lib-modules mountPath: /lib/modules readOnly: true - name: host-slash mountPath: /host readOnly: true mountPropagation: HostToContainer - name: etc-sysconfig mountPath: /etc/sysconfig readOnly: true - name: config-volume mountPath: /etc/keepalive env: - name: OPENSHIFT_HA_CONFIG_NAME value: \"ipfailover\" - name: OPENSHIFT_HA_VIRTUAL_IPS 2 value: \"1.1.1.1-2\" - name: OPENSHIFT_HA_VIP_GROUPS 3 value: \"10\" - name: OPENSHIFT_HA_NETWORK_INTERFACE 4 value: \"ens3\" #The host interface to assign the VIPs - name: OPENSHIFT_HA_MONITOR_PORT 5 value: \"30060\" - name: OPENSHIFT_HA_VRRP_ID_OFFSET 6 value: \"0\" - name: OPENSHIFT_HA_REPLICA_COUNT 7 value: \"2\" #Must match the number of replicas in the deployment - name: OPENSHIFT_HA_USE_UNICAST value: 
\"false\" #- name: OPENSHIFT_HA_UNICAST_PEERS #value: \"10.0.148.40,10.0.160.234,10.0.199.110\" - name: OPENSHIFT_HA_IPTABLES_CHAIN 8 value: \"INPUT\" #- name: OPENSHIFT_HA_NOTIFY_SCRIPT 9 # value: /etc/keepalive/mynotifyscript.sh - name: OPENSHIFT_HA_CHECK_SCRIPT 10 value: \"/etc/keepalive/mycheckscript.sh\" - name: OPENSHIFT_HA_PREEMPTION 11 value: \"preempt_delay 300\" - name: OPENSHIFT_HA_CHECK_INTERVAL 12 value: \"2\" livenessProbe: initialDelaySeconds: 10 exec: command: - pgrep - keepalived volumes: - name: lib-modules hostPath: path: /lib/modules - name: host-slash hostPath: path: / - name: etc-sysconfig hostPath: path: /etc/sysconfig # config-volume contains the check script # created with `oc create configmap keepalived-checkscript --from-file=mycheckscript.sh` - configMap: defaultMode: 0755 name: keepalived-checkscript name: config-volume imagePullSecrets: - name: openshift-pull-secret 13", "#!/bin/bash # Whatever tests are needed # E.g., send request and verify response exit 0", "oc create configmap mycustomcheck --from-file=mycheckscript.sh", "oc set env deploy/ipfailover-keepalived OPENSHIFT_HA_CHECK_SCRIPT=/etc/keepalive/mycheckscript.sh", "oc set volume deploy/ipfailover-keepalived --add --overwrite --name=config-volume --mount-path=/etc/keepalive --source='{\"configMap\": { \"name\": \"mycustomcheck\", \"defaultMode\": 493}}'", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_CHECK_SCRIPT 1 value: /etc/keepalive/mycheckscript.sh volumeMounts: 2 - mountPath: /etc/keepalive name: config-volume dnsPolicy: ClusterFirst volumes: 3 - configMap: defaultMode: 0755 4 name: customrouter name: config-volume", "oc edit deploy ipfailover-keepalived", "spec: containers: - env: - name: OPENSHIFT_HA_PREEMPTION 1 value: preempt_delay 300", "spec: env: - name: OPENSHIFT_HA_VIP_GROUPS 1 value: \"3\"", "oc get pod -l ipfailover -o jsonpath=\" {range .items[?(@.spec.volumes[*].configMap)]} {'Namespace: '}{.metadata.namespace} {'Pod: 
'}{.metadata.name} {'Volumes that use config maps:'} {range .spec.volumes[?(@.configMap)]} {'volume: '}{.name} {'configMap: '}{.configMap.name}{'\\n'}{end} {end}\"", "Namespace: default Pod: keepalived-worker-59df45db9c-2x9mn Volumes that use config maps: volume: config-volume configMap: mycustomcheck", "oc delete configmap <configmap_name>", "oc get deployment -l ipfailover", "NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE default ipfailover 2/2 2 2 105d", "oc delete deployment <ipfailover_deployment_name>", "oc delete sa ipfailover", "apiVersion: batch/v1 kind: Job metadata: generateName: remove-ipfailover- labels: app: remove-ipfailover spec: template: metadata: name: remove-ipfailover spec: containers: - name: remove-ipfailover image: registry.redhat.io/openshift4/ose-keepalived-ipfailover:v4.12 command: [\"/var/lib/ipfailover/keepalived/remove-failover.sh\"] nodeSelector: 1 kubernetes.io/hostname: <host_name> 2 restartPolicy: Never", "oc create -f remove-ipfailover-job.yaml", "job.batch/remove-ipfailover-2h8dm created", "oc logs job/remove-ipfailover-2h8dm", "remove-failover.sh: OpenShift IP Failover service terminating. - Removing ip_vs module - Cleaning up - Releasing VIPs (interface eth0)" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/configuring-ipfailover
Chapter 4. Enabling monitoring for user-defined projects
Chapter 4. Enabling monitoring for user-defined projects In OpenShift Container Platform 4.9, you can enable monitoring for user-defined projects in addition to the default platform monitoring. You can now monitor your own projects in OpenShift Container Platform without the need for an additional monitoring solution. Using this new feature centralizes monitoring for core platform components and user-defined projects. Note Versions of Prometheus Operator installed using Operator Lifecycle Manager (OLM) are not compatible with user-defined monitoring. Therefore, custom Prometheus instances installed as a Prometheus custom resource (CR) managed by the OLM Prometheus Operator are not supported in OpenShift Container Platform. 4.1. Enabling monitoring for user-defined projects Cluster administrators can enable monitoring for user-defined projects by setting the enableUserWorkload: true field in the cluster monitoring ConfigMap object. Important In OpenShift Container Platform 4.9 you must remove any custom Prometheus instances before enabling monitoring for user-defined projects. Note You must have access to the cluster as a user with the cluster-admin role to enable monitoring for user-defined projects in OpenShift Container Platform. Cluster administrators can then optionally grant users permission to configure the components that are responsible for monitoring user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). You have created the cluster-monitoring-config ConfigMap object. You have optionally created and configured the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project. You can add configuration options to this ConfigMap object for the components that monitor user-defined projects. 
Note Every time you save configuration changes to the user-workload-monitoring-config ConfigMap object, the pods in the openshift-user-workload-monitoring project are redeployed. It can sometimes take a while for these components to redeploy. You can create and configure the ConfigMap object before you first enable monitoring for user-defined projects, to prevent having to redeploy the pods often. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add enableUserWorkload: true under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1 1 When set to true , the enableUserWorkload parameter enables monitoring for user-defined projects in a cluster. Save the file to apply the changes. Monitoring for user-defined projects is then enabled automatically. Warning When changes are saved to the cluster-monitoring-config ConfigMap object, the pods and other resources in the openshift-monitoring project might be redeployed. The running monitoring processes in that project might also be restarted. Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are running in the openshift-user-workload-monitoring project. It might take a short while for the pods to start: USD oc -n openshift-user-workload-monitoring get pod Example output NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h Additional resources Creating a cluster monitoring config map Configuring the monitoring stack Granting users permission to configure monitoring for user-defined projects 4.2. 
Granting users permission to monitor user-defined projects Cluster administrators can monitor all core OpenShift Container Platform and user-defined projects. Cluster administrators can grant developers and other users permission to monitor their own projects. Privileges are granted by assigning one of the following monitoring roles: The monitoring-rules-view role provides read access to PrometheusRule custom resources for a project. The monitoring-rules-edit role grants a user permission to create, modify, and delete PrometheusRule custom resources for a project. The monitoring-edit role grants the same privileges as the monitoring-rules-edit role. Additionally, it enables a user to create new scrape targets for services or pods. With this role, you can also create, modify, and delete ServiceMonitor and PodMonitor resources. You can also grant users permission to configure the components that are responsible for monitoring user-defined projects: The user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project enables you to edit the user-workload-monitoring-config ConfigMap object. With this role, you can edit the ConfigMap object to configure Prometheus, Prometheus Operator and Thanos Ruler for user-defined workload monitoring. This section provides details on how to assign these roles by using the OpenShift Container Platform web console or the CLI. 4.2.1. Granting user permissions by using the web console You can grant users permissions to monitor their own projects by using the OpenShift Container Platform web console. Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. Procedure In the Administrator perspective within the OpenShift Container Platform web console, navigate to User Management Role Bindings Create Binding . In the Binding Type section, select the "Namespace Role Binding" type. 
In the Name field, enter a name for the role binding. In the Namespace field, select the user-defined project where you want to grant the access. Important The monitoring role will be bound to the project that you apply in the Namespace field. The permissions that you grant to a user by using this procedure will apply only to the selected project. Select monitoring-rules-view , monitoring-rules-edit , or monitoring-edit in the Role Name list. In the Subject section, select User . In the Subject Name field, enter the name of the user. Select Create to apply the role binding. 4.2.2. Granting user permissions by using the CLI You can grant users permissions to monitor their own projects, by using the OpenShift CLI ( oc ). Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). Procedure Assign a monitoring role to a user for a project: USD oc policy add-role-to-user <role> <user> -n <namespace> 1 1 Substitute <role> with monitoring-rules-view , monitoring-rules-edit , or monitoring-edit . Important Whichever role you choose, you must bind it against a specific project as a cluster administrator. As an example, substitute <role> with monitoring-edit , <user> with johnsmith , and <namespace> with ns1 . This assigns the user johnsmith permission to set up metrics collection and to create alerting rules in the ns1 namespace. 4.3. Granting users permission to configure monitoring for user-defined projects You can grant users permission to configure monitoring for user-defined projects. Prerequisites You have access to the cluster as a user with the cluster-admin role. The user account that you are assigning the role to already exists. You have installed the OpenShift CLI ( oc ). 
Procedure Assign the user-workload-monitoring-config-edit role to a user in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring adm policy add-role-to-user \ user-workload-monitoring-config-edit <user> \ --role-namespace openshift-user-workload-monitoring 4.4. Accessing metrics from outside the cluster for custom applications Learn how to query Prometheus statistics from the command line when monitoring your own services. You can access monitoring data from outside the cluster with the thanos-querier route. Prerequisites You deployed your own service, following the Enabling monitoring for user-defined projects procedure. Procedure Extract a token to connect to Prometheus: USD SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'` USD TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d` Extract your route host: USD THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'` Query the metrics of your own services in the command line. For example: USD NAMESPACE=ns1 USD curl -X GET -kG "https://USDTHANOS_QUERIER_HOST/api/v1/query?" --data-urlencode "query=up{namespace='USDNAMESPACE'}" -H "Authorization: Bearer USDTOKEN" The output will show you the duration that your application pods have been up. Example output {"status":"success","data":{"resultType":"vector","result":[{"metric":{"__name__":"up","endpoint":"web","instance":"10.129.0.46:8080","job":"prometheus-example-app","namespace":"ns1","pod":"prometheus-example-app-68d47c4fb6-jztp2","service":"prometheus-example-app"},"value":[1591881154.748,"1"]}]}} 4.5. Excluding a user-defined project from monitoring Individual user-defined projects can be excluded from user workload monitoring. 
To do so, add the openshift.io/user-monitoring label to the project's namespace with a value of false . Procedure Add the label to the project namespace: USD oc label namespace my-project 'openshift.io/user-monitoring=false' To re-enable monitoring, remove the label from the namespace: USD oc label namespace my-project 'openshift.io/user-monitoring-' Note If there were any active monitoring targets for the project, it may take a few minutes for Prometheus to stop scraping them after adding the label. 4.6. Disabling monitoring for user-defined projects After enabling monitoring for user-defined projects, you can disable it again by setting enableUserWorkload: false in the cluster monitoring ConfigMap object. Note Alternatively, you can remove enableUserWorkload: true to disable monitoring for user-defined projects. Procedure Edit the cluster-monitoring-config ConfigMap object: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Set enableUserWorkload: to false under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: false Save the file to apply the changes. Monitoring for user-defined projects is then disabled automatically. Check that the prometheus-operator , prometheus-user-workload and thanos-ruler-user-workload pods are terminated in the openshift-user-workload-monitoring project. This might take a short while: USD oc -n openshift-user-workload-monitoring get pod Example output No resources found in openshift-user-workload-monitoring project. Note The user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project is not automatically deleted when monitoring for user-defined projects is disabled. This is to preserve any custom configurations that you may have created in the ConfigMap object. 4.7. Next steps Managing metrics
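Returning to the token-extraction pipeline in Section 4.4: its decode step is easier to verify against a local stand-in than against a live cluster. The sketch below uses a fabricated secret payload (python3 stands in for jq so the example has no extra dependencies):

```shell
# Fabricated secret payload for illustration; a real one comes from:
#   oc get secret $SECRET -n openshift-user-workload-monitoring -o json
TOKEN_PLAIN="sha256-example-token"
SECRET_JSON=$(printf '{"data":{"token":"%s"}}' "$(printf '%s' "$TOKEN_PLAIN" | base64)")
# Equivalent of the procedure's: jq -r '.data.token' | base64 -d
DECODED=$(printf '%s' "$SECRET_JSON" | python3 -c 'import sys, json, base64; print(base64.b64decode(json.load(sys.stdin)["data"]["token"]).decode())')
echo "$DECODED"
```

On a live cluster, substitute the oc and jq pipeline from the procedure; the decoding logic is identical.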
[ "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | enableUserWorkload: true 1", "oc -n openshift-user-workload-monitoring get pod", "NAME READY STATUS RESTARTS AGE prometheus-operator-6f7b748d5b-t7nbg 2/2 Running 0 3h prometheus-user-workload-0 4/4 Running 1 3h prometheus-user-workload-1 4/4 Running 1 3h thanos-ruler-user-workload-0 3/3 Running 0 3h thanos-ruler-user-workload-1 3/3 Running 0 3h", "oc policy add-role-to-user <role> <user> -n <namespace> 1", "oc -n openshift-user-workload-monitoring adm policy add-role-to-user user-workload-monitoring-config-edit <user> --role-namespace openshift-user-workload-monitoring", "SECRET=`oc get secret -n openshift-user-workload-monitoring | grep prometheus-user-workload-token | head -n 1 | awk '{print USD1 }'`", "TOKEN=`echo USD(oc get secret USDSECRET -n openshift-user-workload-monitoring -o json | jq -r '.data.token') | base64 -d`", "THANOS_QUERIER_HOST=`oc get route thanos-querier -n openshift-monitoring -o json | jq -r '.spec.host'`", "NAMESPACE=ns1", "curl -X GET -kG \"https://USDTHANOS_QUERIER_HOST/api/v1/query?\" --data-urlencode \"query=up{namespace='USDNAMESPACE'}\" -H \"Authorization: Bearer USDTOKEN\"", "{\"status\":\"success\",\"data\":{\"resultType\":\"vector\",\"result\":[{\"metric\":{\"__name__\":\"up\",\"endpoint\":\"web\",\"instance\":\"10.129.0.46:8080\",\"job\":\"prometheus-example-app\",\"namespace\":\"ns1\",\"pod\":\"prometheus-example-app-68d47c4fb6-jztp2\",\"service\":\"prometheus-example-app\"},\"value\":[1591881154.748,\"1\"]}]}}", "oc label namespace my-project 'openshift.io/user-monitoring=false'", "oc label namespace my-project 'openshift.io/user-monitoring-'", "oc -n openshift-monitoring edit configmap cluster-monitoring-config", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: 
config.yaml: | enableUserWorkload: false", "oc -n openshift-user-workload-monitoring get pod", "No resources found in openshift-user-workload-monitoring project." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/monitoring/enabling-monitoring-for-user-defined-projects
E.5. Additional Resources
E.5. Additional Resources Below are additional sources of information about the proc file system. Installable Documentation /usr/share/doc/kernel-doc- kernel_version /Documentation/ - This directory, which is provided by the kernel-doc package, contains documentation about the proc file system. Before accessing the kernel documentation, you must run the following command as root: /usr/share/doc/kernel-doc- kernel_version /Documentation/filesystems/proc.txt - Contains assorted, but limited, information about all aspects of the /proc/ directory. /usr/share/doc/kernel-doc- kernel_version /Documentation/sysrq.txt - An overview of System Request Key options. /usr/share/doc/kernel-doc- kernel_version /Documentation/sysctl/ - A directory containing a variety of sysctl tips, including modifying values that concern the kernel ( kernel.txt ), accessing file systems ( fs.txt ), and virtual memory use ( vm.txt ). /usr/share/doc/kernel-doc- kernel_version /Documentation/networking/ip-sysctl.txt - A detailed overview of IP networking options.
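As a quick orientation to how much ground those sysctl documents cover (assuming a Linux system with /proc mounted; this only lists directory contents and changes no values), you can count the tunables under each documented area:

```shell
# kernel.txt, fs.txt, and vm.txt document the tunables under these directories.
for area in kernel fs vm; do
  count=$(ls /proc/sys/$area | wc -l)
  echo "/proc/sys/$area: $count tunables"
done
```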
[ "~]# yum install kernel-doc" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s1-proc-additional-resources
probe::netdev.change_rx_flag
probe::netdev.change_rx_flag Name probe::netdev.change_rx_flag - Called when the device RX flag will be changed Synopsis netdev.change_rx_flag Values flags The new flags dev_name The device that will be changed
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-netdev-change-rx-flag
Chapter 1. Notification of name change to Streams for Apache Kafka
Chapter 1. Notification of name change to Streams for Apache Kafka AMQ Streams is being renamed as Streams for Apache Kafka as part of a branding effort. This change aims to increase awareness among customers of Red Hat's product for Apache Kafka. During this transition period, you may encounter references to the old name, AMQ Streams. We are actively working to update our documentation, resources, and media to reflect the new name.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/release_notes_for_streams_for_apache_kafka_2.9_on_rhel/ref-name-change-str
Chapter 5. Using the Red Hat Satellite API
Chapter 5. Using the Red Hat Satellite API This chapter provides a range of examples of how to use the Red Hat Satellite API to perform different tasks. You can use the API on Satellite Server via HTTPS on port 443, or on Capsule Server via HTTPS on port 8443. You can address these different port requirements within the script itself. For example, in Ruby, you can specify the Satellite and Capsule URLs as follows: For the host that is subscribed to Satellite Server or Capsule Server, you can determine the correct port required to access the API from the /etc/rhsm/rhsm.conf file, in the port entry of the [server] section. You can use these values to fully automate your scripts, removing any need to verify which ports to use. This chapter uses curl for sending API requests. For more information, see Section 4.1, "API requests with curl" . Examples in this chapter use the Python json.tool module to format the output. 5.1. Working with hosts Listing hosts This example returns a list of Satellite hosts. Example request: Example response: Requesting information for a host This request returns information for the host satellite.example.com . Example request: Example response: Listing host facts This request returns all facts for the host satellite.example.com . Example request: Example response: Searching for hosts with matching patterns This query returns all hosts that match the pattern "example". Example request: Example response: Searching for hosts in an environment This query returns all hosts in the production environment. Example request: Example response: Searching for hosts with a specific fact value This query returns all hosts with a model name RHEV Hypervisor . Example request: Example response: Deleting a host This request deletes a host with a name host1.example.com . Example request: Downloading a full boot disk image This request downloads a full boot disk image for a host by its ID. Example request: 5.2. 
Working with life cycle environments Satellite divides application life cycles into life cycle environments, which represent each stage of the application life cycle. Life cycle environments are linked together in an environment path. To create linked life cycle environments with the API, use the prior_id parameter. You can find the built-in API reference for life cycle environments at https:// satellite.example.com /apidoc/v2/lifecycle_environments.html . The API routes include /katello/api/environments and /katello/api/organizations/:organization_id/environments . Listing life cycle environments Use this API call to list all the current life cycle environments on your Satellite for the default organization with ID 1 . Example request: Example response: Creating linked life cycle environments Use this example to create a path of life cycle environments. This procedure uses the default Library environment with ID 1 as the starting point for creating life cycle environments. Choose an existing life cycle environment that you want to use as a starting point. List the environment using its ID, in this case, the environment with ID 1 : Example request: Example response: Create a JSON file, for example, life-cycle.json , with the following content: Create a life cycle environment using the prior option set to 1 . Example request: Example response: In the command output, you can see the ID for this life cycle environment is 2 , and the life cycle environment prior to this one is 1 . Use the life cycle environment with ID 2 to create a successor to this environment. Edit the previously created life-cycle.json file, updating the label , name , and prior values. Create a life cycle environment, using the prior option set to 2 . Example request: Example response: In the command output, you can see the ID for this life cycle environment is 3 , and the life cycle environment prior to this one is 2 . 
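The JSON payloads for the path above can be generated rather than edited by hand. In this sketch the file names life-cycle-<label>.json are illustrative, and real environment IDs are assigned by the server; it builds the api-dev and api-qa payloads with their prior links:

```shell
# Build one payload per environment; "prior" points at the preceding
# environment's ID (1 = Library in the example above).
prior=1
for env in "api-dev:API Development" "api-qa:API QA"; do
  label=${env%%:*}
  name=${env#*:}
  printf '{"organization_id":1,"label":"%s","name":"%s","prior":%d}\n' \
    "$label" "$name" "$prior" > "life-cycle-$label.json"
  prior=$((prior + 1))
done
cat life-cycle-api-qa.json
```

Each file can then be passed to the curl POST shown in the procedure; note this assumes the Library ID is 1 and that the server assigns ID 2 to api-dev, as in the example.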
Updating a life cycle environment You can update a life cycle environment using a PUT command. This example request updates a description of the life cycle environment with ID 3 . Example request: Example response: Deleting a life cycle environment You can delete a life cycle environment provided it has no successor. Therefore, delete them in reverse order using a command in the following format: Example request: 5.3. Uploading content to the Satellite Server This section outlines how to use the Satellite 6 API to upload and import large files to your Satellite Server. This process involves four steps: Create an upload request. Upload the content. Import the content. Delete the upload request. The maximum file size that you can upload is 2MB. For information about uploading larger content, see Uploading content larger than 2 MB . Procedure Assign the package name to the variable name : Example request: Assign the checksum of the file to the variable checksum : Example request: Assign the file size to the variable size : Example request: The following command creates a new upload request and returns the upload ID of the request using size and checksum . Example request: where 76, in this case, is an example Repository ID. Example output: Assign the upload ID to the variable upload_id : Example request: Assign the path of the package you want to upload to the variable path : Upload your content. Ensure you use the correct MIME type when you upload data. The API uses the application/json MIME type for the majority of requests to Satellite 6. Combine the upload_id, MIME type, and other parameters to upload content. Example request: After you have uploaded the content to the Satellite Server, you need to import it into the appropriate repository. Until you complete this step, the Satellite Server does not detect the new content. Example request: After you have successfully uploaded and imported your content, you can delete the upload request. 
This frees any temporary disk space that data is using during the upload. Example request: Uploading content larger than 2 MB The following example demonstrates how to split a large file into chunks, create an upload request, upload the individual files, import them to Satellite, and then delete the upload request. Note that this example uses sample content, host names, user names, repository ID, and file names. Assign the package name to the variable name : Assign the checksum of the file to the variable checksum : Assign the file size to the variable size : The following command creates a new upload request and returns the upload ID of the request using size and checksum . Example request: where 76, in this case, is an example Repository ID. Example output Assign the upload ID to the variable upload_id : Split the file into 2MB chunks: Example output Assign the prefix of the split files to the variable path. Upload the file chunks. The offset starts at 0 for the first chunk and increases by 2000000 for each file. Note the use of the offset parameter and how it relates to the file size. Note also that the indexes are used after the path variable, for example, USD{path}0, USD{path}1. Example requests: Import the complete upload to the repository: Delete the upload request: Uploading duplicate content Note that if you try to upload duplicate content using: Example request: The call will return a content unit ID instead of an upload ID, similar to this: You can copy this output and call import uploads directly to add the content to a repository: Example request: Note that the call changes from using upload_id to using content_unit_id . 5.4. Applying errata to a host or host collection You can use the API to apply errata to a host, host group, or host collection. The following is the basic syntax of a PUT request: You can browse the built-in API doc to find a URL to use for applying Errata. 
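Returning to the chunked upload in Section 5.3: the size, checksum, and offset bookkeeping can be rehearsed locally before touching a repository. This sketch uses a throwaway 5 MB file in place of a real RPM (the file name and sizes are illustrative):

```shell
# Create a dummy payload instead of a real package.
head -c 5000000 /dev/zero > sample.rpm
name=sample.rpm
checksum=$(sha256sum "$name" | cut -c 1-65)   # same cut as the procedure
size=$(du -bs "$name" | cut -f 1)
# Split into 2 MB chunks, as in the procedure.
split --bytes 2MB --numeric-suffixes --suffix-length=1 "$name" "$name."
# Each chunk i is uploaded with offset = i * 2000000.
offset=0
for chunk in "$name".[0-9]; do
  echo "chunk=$chunk offset=$offset bytes=$(du -bs "$chunk" | cut -f 1)"
  offset=$((offset + 2000000))
done
echo "total=$size"
```

Note that the procedure's cut -c 1-65 captures the 64-character digest plus one trailing space from the sha256sum output.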
You can use the Satellite web UI to help discover the format for the search query. Navigate to Hosts > Host Collections and select a host collection. Go to Collection Actions > Errata Installation and notice the search query box contents. For example, for a Host Collection called my-collection , the search box contains host_collection="my-collection" . Applying errata to a host This example uses the API URL for bulk actions /katello/api/hosts/bulk/install_content to show the format required for a simple search. Example request: Applying errata to a host collection In this example, notice the level of escaping required to pass the search string host_collection="my-collection" as seen in the Satellite web UI. Example request: 5.5. Using extended searches You can find search parameters that you can use to build your search queries in the web UI. For more information, see Building Search Queries in Administering Red Hat Satellite . For example, to search for hosts, complete the following steps: In the Satellite web UI, navigate to Hosts > All Hosts and click the Search field to display a list of search parameters. Locate the search parameters that you want to use. For this example, locate os_title and model . Combine the search parameters in your API query as follows: Example request: Example response: 5.6. Using searches with pagination control You can use the per_page and page pagination parameters to limit the search results that an API search query returns. The per_page parameter specifies the number of results per page and the page parameter specifies which page, as calculated by the per_page parameter, to return. The default number of items to return is set to 1000 when you do not specify any pagination parameters, but the per_page value has a default of 20 which applies when you specify the page parameter. Listing content views This example returns a list of Content Views in pages. The list contains 10 keys per page and returns the third page. 
Example request: Listing activation keys This example returns a list of activation keys for an organization with ID 1 in pages. The list contains 30 keys per page and returns the second page. Example request: Returning multiple pages You can use a for loop structure to get multiple pages of results. This example returns pages 1 to 3 of Content Views with 5 results per page: 5.7. Overriding Smart Class Parameters You can search for Smart Parameters using the API and supply a value to override a Smart Parameter in a Class. You can find the full list of attributes that you can modify in the built-in API reference at https:// satellite.example.com /apidoc/v2/smart_class_parameters/update.html . Find the ID of the Smart Class parameter you want to change: List all Smart Class Parameters. Example request: If you know the Puppet class ID, for example 5, you can restrict the scope: Example request: Both calls accept a search parameter. You can view the full list of searchable fields in the Satellite web UI. Navigate to Configure > Smart variables and click in the search query box to reveal the list of fields. Two particularly useful search parameters are puppetclass_name and key , which you can use to search for a specific parameter. For example, using the --data option to pass URL encoded data. Example request: Satellite supports standard scoped-search syntax. When you find the ID of the parameter, list the full details including current override values. Example request: Enable overriding of parameter values. Example request: Note that you cannot create or delete the parameters manually. You can only modify their attributes. Satellite creates and deletes parameters only upon class import from a proxy. Add custom override matchers. Example request: For more information about override values, see https:// satellite.example.com /apidoc/v2/override_values.html . You can delete override values. Example request: 5.8. 
Modifying a Smart Class parameter using an external file Using external files simplifies working with JSON data. Using an editor with syntax highlighting can help you avoid and locate mistakes. Modifying a Smart Class parameter using an external file This example uses a MOTD Puppet manifest. Search for the Puppet Class by name, motd in this case. Example request: Examine the following output. Each Smart Class Parameter has an ID that is global for the same Satellite instance. The content parameter of the motd class has id=3 in this Satellite Server. Do not confuse this with the Puppet Class ID that displays before the Puppet Class name. Example response: Use the parameter ID 3 to get the information specific to the motd parameter and redirect the output to a file, for example, output_file.json . Example request: Copy the file created in the previous step to a new file for editing, for example, changed_file.json : Modify the required values in the file. In this example, change the content parameter of the motd module, which requires changing the override option from false to true : After editing the file, verify that it looks as follows and then save the changes: Apply the changes to Satellite Server: 5.9. Deleting OpenSCAP reports In Satellite Server, you can delete one or more OpenSCAP reports. However, when you delete reports, you must delete one page at a time. If you want to delete all OpenSCAP reports, use the bash script that follows. Deleting an OpenSCAP report To delete an OpenSCAP report, complete the following steps: List all OpenSCAP reports. Note the IDs of the reports that you want to delete. Example request: Example response: Using an ID from the previous step, delete the OpenSCAP report. Repeat for each ID that you want to delete. 
Example request: Example response: Example BASH script to delete all OpenSCAP reports Use the following bash script to delete all the OpenSCAP reports:
#!/bin/bash
#this script removes all the arf reports from the satellite server
#settings
USER= username
PASS= password
URI=https:// satellite.example.com
#check amount of reports
while [ USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \"total\": | cut --fields=2 --delimiter":" | cut --fields=1 --delimiter"," | sed "s/ //g") -gt 0 ]; do
  #fetch reports
  for i in USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \"id\": | cut --fields=2 --delimiter":" | cut --fields=1 --delimiter"," | sed "s/ //g")
  #delete reports
  do
    curl --insecure --user USDUSER:USDPASS --header "Content-Type: application/json" --request DELETE USDURI/api/v2/compliance/arf_reports/USDi
  done
done
5.10. Working with Pulp using Satellite API When sending API requests to Pulp integrated with Satellite, use certificate-based authentication. The following examples of Pulp API requests also show how to use the Pulp CLI as an alternative. When you run pulp commands as root, Pulp CLI uses system certificates configured in /root/.config/pulp/cli.toml . Listing repositories The endpoint to list all repositories is /pulp/api/v3/repositories/ . The following query obtains a list of repositories from satellite.example.com while supplying the certificates necessary to issue a request from a Satellite Server. Example request: Example response: Alternatively, use the Pulp CLI to list repositories: Inspecting Pulp status The endpoint to return status information about Pulp is /pulp/api/v3/status/ . Requests for Pulp Status do not require authentication. Example request: Example response: Alternatively, use the Pulp CLI to retrieve Pulp status: Additional resources Run pulp --help for details on how to use the Pulp CLI. 
The full API reference for Pulp is available on your Satellite Server at https:// <satellite.example.com> /pulp/api/v3/docs/ .
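The certificate-based requests in Section 5.10 can be staged as a dry run before execution. In this sketch the certificate and key paths are assumptions for illustration only; substitute the client certificate and key configured for your Satellite (see /root/.config/pulp/cli.toml):

```shell
# Build, but do not execute, a certificate-authenticated Pulp API call;
# the command is echoed for inspection.
SATELLITE=satellite.example.com
CERT=/etc/pki/katello/certs/pulp-client.crt    # assumed path
KEY=/etc/pki/katello/private/pulp-client.key   # assumed path
CMD="curl --cert $CERT --key $KEY https://$SATELLITE/pulp/api/v3/repositories/"
echo "$CMD"
```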
[ "url = 'https:// satellite.example.com /api/v2/' capsule_url = 'https:// capsule.example.com :8443/api/v2/' katello_url = 'https:// satellite.example.com /katello/api/v2/'", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts | python3 -m json.tool", "{ \"total\" => 2, \"subtotal\" => 2, \"page\" => 1, \"per_page\" => 1000, \"search\" => nil, \"sort\" => { \"by\" => nil, \"order\" => nil }, \"results\" => [ }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ satellite.example.com | python -m json.tool", "{ \"all_puppetclasses\": [], \"architecture_id\": 1, \"architecture_name\": \"x86_64\", \"build\": false, \"capabilities\": [ \"build\" ], \"certname\": \" satellite.example.com \", \"comment\": null, \"compute_profile_id\": null, }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ satellite.example.com /facts | python -m json.tool", "{ \"results\": { \" satellite.example.com \": { \"augeasversion\": \"1.0.0\", \"bios_release_date\": \"01/01/2007\", \"bios_version\": \"0.5.1\", \"blockdevice_sr0_size\": \"1073741312\", \"facterversion\": \"1.7.6\", }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=example | python -m json.tool", "{ \"results\": [ { \"name\": \" satellite.example.com \", } ], \"search\": \"example\", }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=environment=production | python -m json.tool", "{ \"results\": [ { \"environment_name\": \"production\", \"name\": \" satellite.example.com \", } ], \"search\": \"environment=production\", }", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=model=\\\"RHEV+Hypervisor\\\" | python -m json.tool", "{ \"results\": [ { 
\"model_id\": 1, \"model_name\": \"RHEV Hypervisor\", \"name\": \" satellite.example.com \", } ], \"search\": \"model=\\\"RHEV Hypervisor\\\"\", }", "curl --request DELETE --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts/ host1.example.com | python -m json.tool", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/bootdisk/hosts/ host_ID ?full=true --output image .iso", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/organizations/1/environments | python -m json.tool`", "output omitted \"description\": null, \"id\": 1, \"label\": \"Library\", \"library\": true, \"name\": \"Library\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": false, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": null, \"successor\": null, output truncated", "curl --request GET --user sat_username:sat_password --insecure https:// satellite.example.com /katello/api/environments/1 | python -m json.tool", "output omitted \"id\": 1, \"label\": \"Library\", output omitted \"prior\": null, \"successor\": null, output truncated", "{\"organization_id\":1,\"label\":\"api-dev\",\"name\":\"API Development\",\"prior\":1}", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @life-cycle.json https:// satellite.example.com /katello/api/environments | python -m json.tool", "output omitted \"description\": null, \"id\": 2, \"label\": \"api-dev\", \"library\": false, \"name\": \"API Development\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": 
\"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 1, \"name\": \"Library\" }, output truncated", "{\"organization_id\":1,\"label\":\"api-qa\",\"name\":\"API QA\",\"prior\":2}", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data @life-cycle.json https:// satellite.example.com /katello/api/environments | python -m json.tool", "output omitted \"description\": null, \"id\": 3, \"label\": \"api-qa\", \"library\": false, \"name\": \"API QA\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 2, \"name\": \"API Development\" }, \"successor\": null, output truncated", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request POST --user sat_username:sat_password --insecure --data '{\"description\":\"Quality Acceptance Testing\"}' https:// satellite.example.com /katello/api/environments/3 | python -m json.tool", "output omitted \"description\": \"Quality Acceptance Testing\", \"id\": 3, \"label\": \"api-qa\", \"library\": false, \"name\": \"API QA\", \"organization\": { \"id\": 1, \"label\": \"Default_Organization\", \"name\": \"Default Organization\" }, \"permissions\": { \"destroy_lifecycle_environments\": true, \"edit_lifecycle_environments\": true, \"promote_or_remove_content_views_to_environments\": true, \"view_lifecycle_environments\": true }, \"prior\": { \"id\": 2, \"name\": \"API Development\" }, output truncated", "curl --request DELETE --user 
sat_username:sat_password --insecure https:// satellite.example.com /katello/api/environments/ :id", "export name=jq-1.6-2.el7.x86_64.rpm", "export checksum=USD(sha256sum USDname|cut -c 1-65)", "export size=USD(du -bs USDname|cut -f 1)", "curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads", "{\"upload_id\":\"37eb5900-597e-4ac3-9bc5-2250c302fdc4\"}", "export upload_id=37eb5900-597e-4ac3-9bc5-2250c302fdc4", "export path=/root/jq/jq-1.6-2.el7.x86_64.rpm", "curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=0 --data-urlencode content@USD{path} https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id", "curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k -d \"{\\\"uploads\\\":[{\\\"id\\\": \\\"USDupload_id\\\", \\\"name\\\": \\\"USDname\\\", \\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads", "curl -H 'Content-Type: application/json' -X DELETE -k -u sat_username:sat_password -d \"{}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id", "export name=bpftool-3.10.0-1160.2.1.el7.centos.plus.x86_64.rpm", "export checksum=USD(sha256sum USDname|cut -c 1-65)", "export size=USD(du -bs USDname|cut -f 1)", "curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads", "{\"upload_id\":\"37eb5900-597e-4ac3-9bc5-2250c302fdc4\"}", "export upload_id=37eb5900-597e-4ac3-9bc5-2250c302fdc4", "split --bytes 2MB --numeric-suffixes --suffix-length=1 
bpftool-3.10.0-1160.2.1.el7.centos.plus.x86_64.rpm bpftool", "ls bpftool[0-9] -l -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool0 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool1 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool2 -rw-r--r--. 1 root root 2000000 Mar 31 14:15 bpftool3 -rw-r--r--. 1 root root 868648 Mar 31 14:15 bpftool4", "export path=/root/tmp/bpftool", "curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=0 --data-urlencode content@USD{path}0 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=2000000 --data-urlencode content@USD{path}1 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=4000000 --data-urlencode content@USD{path}2 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id USDcurl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=6000000 --data-urlencode content@USD{path}3 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id curl -u sat_username:sat_password -H Accept:application/json -H Content-Type:multipart/form-data -X PUT --data-urlencode size=USDsize --data-urlencode offset=8000000 --data-urlencode content@USD{path}4 https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id", "curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k -d \"{\\\"uploads\\\":[{\\\"id\\\": \\\"USDupload_id\\\", \\\"name\\\": \\\"USDname\\\", 
\\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads", "curl -H 'Content-Type: application/json' -X DELETE -k -u sat_username:sat_password -d \"{}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads/USDupload_id", "curl -H 'Content-Type: application/json' -X POST -k -u sat_username:sat_password -d \"{\\\"size\\\": \\\"USDsize\\\", \\\"checksum\\\":\\\"USDchecksum\\\"}\" https://USD(hostname -f)/katello/api/v2/repositories/76/content_uploads", "{\"content_unit_href\":\"/pulp/api/v3/content/file/files/c1bcdfb8-d840-4604-845e-86e82454c747/\"}", "curl -H \"Content-Type:application/json\" -X PUT -u sat_username:sat_password -k \\-d \"{\\\"uploads\\\":[{\\\"content_unit_id\\\": \\\"/pulp/api/v3/content/file/files/c1bcdfb8-d840-4604-845e-86e82454c747/\\\", \\\"name\\\": \\\"USDname\\\", \\ \\\"checksum\\\": \\\"USDchecksum\\\" }]}\" https://USD(hostname -f)/katello/api/v2/repositories/76/import_uploads", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data json-formatted-data https:// satellite7.example.com", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"organization_id\\\":1,\\\"included\\\":{\\\"search\\\":\\\" my-host \\\"},\\\"content_type\\\":\\\"errata\\\",\\\"content\\\":[\\\" RHBA-2016:1981 \\\"]}\" https:// satellite.example.com /api/v2/hosts/bulk/install_content", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data \"{\\\"organization_id\\\":1,\\\"included\\\":{\\\"search\\\":\\\"host_collection=\\\\\\\" my-collection \\\\\\\"\\\"},\\\"content_type\\\":\\\"errata\\\",\\\"content\\\":[\\\" RHBA-2016:1981 \\\"]}\" https:// satellite.example.com 
/api/v2/hosts/bulk/install_content", "curl --insecure --user sat_username:sat_password https:// satellite.example.com /api/v2/hosts?search=os_title=\\\"RedHat+7.7\\\",model=\\\"PowerEdge+R330\\\" | python -m json.tool", "{ \"results\": [ { \"model_id\": 1, \"model_name\": \"PowerEdge R330\", \"name\": \" satellite.example.com \", \"operatingsystem_id\": 1, \"operatingsystem_name\": \"RedHat 7.7\", } ], \"search\": \"os_title=\\\"RedHat 7.7\\\",model=\\\"PowerEdge R330\\\"\", \"subtotal\": 1, \"total\": 11 }", "curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/content_views?per_page=10&amp;page=3", "curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/activation_keys?organization_id=1&amp;per_page=30&amp;page=2", "for i in seq 1 3 ; do curl --request GET --user sat_username:sat_password https://satellite.example.com/katello/api/content_views?per_page=5&amp;page=USDi; done", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/puppetclasses/5/smart_class_parameters", "curl --request GET --insecure --user sat_username:sat_password --data 'search=puppetclass_name = access_insights_client and key = authmethod' https:// satellite.example.com /api/smart_class_parameters", "curl --request GET --insecure --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters/ 63", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --insecure --user sat_username:sat_password --data '{\"smart_class_parameter\":{\"override\":true}}' https:// satellite.example.com /api/smart_class_parameters/63", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --insecure --user sat_username:sat_password --data 
'{\"smart_class_parameter\":{\"override_value\":{\"match\":\"hostgroup=Test\",\"value\":\"2.4.6\"}}}' https:// satellite.example.com /api/smart_class_parameters/63", "curl --request DELETE --user sat_username:sat_password https:// satellite.example.com /api/smart_class_parameters/63/override_values/3", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_user:sat_password --insecure https:// satellite.example.com /api/smart_class_parameters?search=puppetclass_name=motd | python -m json.tool", "{ \"avoid_duplicates\": false, \"created_at\": \"2017-02-06 12:37:48 UTC\", # Remove this line. \"default_value\": \"\", # Add a new value here. \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": false, # Set the override value to true . \"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values\": [], # Remove this line. \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"updated_at\": \"2017-02-07 11:56:55 UTC\", # Remove this line. \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request GET --user sat_user:sat_password --insecure \\` https:// satellite.example.com /api/smart_class_parameters/3 | python -m json.tool > output_file.json", "cp output_file.json changed_file.json", "{ \"avoid_duplicates\": false, \"created_at\": \"2017-02-06 12:37:48 UTC\", # Remove this line. \"default_value\": \"\", # Add a new value here. \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": false, # Set the override value to true . 
\"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values\": [], # Remove this line. \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"updated_at\": \"2017-02-07 11:56:55 UTC\", # Remove this line. \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }", "{ \"avoid_duplicates\": false, \"default_value\": \" No Unauthorized Access Allowed \", \"description\": \"\", \"hidden_value\": \"\", \"hidden_value?\": false, \"id\": 3, \"merge_default\": false, \"merge_overrides\": false, \"override\": true, \"override_value_order\": \"fqdn\\nhostgroup\\nos\\ndomain\", \"override_values_count\": 0, \"parameter\": \"content\", \"parameter_type\": \"string\", \"puppetclass_id\": 3, \"puppetclass_name\": \"motd\", \"required\": false, \"use_puppet_default\": false, \"validator_rule\": null, \"validator_type\": \"\" }", "curl --header \"Accept:application/json\" --header \"Content-Type:application/json\" --request PUT --user sat_username:sat_password --insecure --data @changed_file.json https:// satellite.example.com /api/smart_class_parameters/3", "curl --insecure --user username :_password_ https:// satellite.example.com /api/v2/compliance/arf_reports/ | python -m json.tool", "% Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 100 3252 0 3252 0 0 4319 0 --:--:-- --:--:-- --:--:-- 4318 { \"page\": 1, \"per_page\": 20, \"results\": [ { \"created_at\": \"2017-05-16 13:27:09 UTC\", \"failed\": 0, \"host\": \" host1.example.com \", \"id\": 404, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:27:09 UTC\" }, { \"created_at\": \"2017-05-16 13:26:07 UTC\", \"failed\": 0, \"host\": \" host2.example.com , \"id\": 405, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:26:07 UTC\" }, { \"created_at\": \"2017-05-16 13:25:07 UTC\", \"failed\": 0, 
\"host\": \" host3.example.com \", \"id\": 406, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:25:07 UTC\" }, { \"created_at\": \"2017-05-16 13:24:07 UTC\", \"failed\": 0, \"host\": \" host4.example.com \", \"id\": 407, \"othered\": 0, \"passed\": 0, \"updated_at\": \"2017-05-16 13:24:07 UTC\" }, ], \"search\": null, \"sort\": { \"by\": null, \"order\": null }, \"subtotal\": 29, \"total\": 29", "curl --insecure --user username :_password_ --header \"Content-Type: application/json\" --request DELETE https:// satellite.example.com /api/v2/compliance/arf_reports/405", "HTTP/1.1 200 OK Date: Thu, 18 May 2017 07:14:36 GMT Server: Apache/2.4.6 (Red Hat Enterprise Linux) X-Frame-Options: SAMEORIGIN X-XSS-Protection: 1; mode=block X-Content-Type-Options: nosniff Foreman_version: 1.11.0.76 Foreman_api_version: 2 Apipie-Checksum: 2d39dc59aed19120d2359f7515e10d76 Cache-Control: max-age=0, private, must-revalidate X-Request-Id: f47eb877-35c7-41fe-b866-34274b56c506 X-Runtime: 0.661831 X-Powered-By: Phusion Passenger 4.0.18 Set-Cookie: request_method=DELETE; path=/ Set-Cookie: _session_id=d58fe2649e6788b87f46eabf8a461edd; path=/; secure; HttpOnly ETag: \"2574955fc0afc47cb5394ce95553f428\" Status: 200 OK Vary: Accept-Encoding Transfer-Encoding: chunked Content-Type: application/json; charset=utf-8", "#!/bin/bash #this script removes all the arf reports from the satellite server #settings USER= username PASS= password URI=https:// satellite.example.com #check amount of reports while [ USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \\\"\\total\\\": | cut --fields=2 --delimiter\":\" | cut --fields=1 --delimiter\",\" | sed \"s/ //g\") -gt 0 ]; do #fetch reports for i in USD(curl --insecure --user USDUSER:USDPASS USDURI/api/v2/compliance/arf_reports/ | python -m json.tool | grep \\\"\\id\\\": | cut --fields=2 --delimiter\":\" | cut --fields=1 --delimiter\",\" | sed \"s/ //g\") #delete reports do curl 
--insecure --user USDUSER:USDPASS --header \"Content-Type: application/json\" --request DELETE USDURI/api/v2/compliance/arf_reports/USDi done done", "curl --cacert /etc/pki/katello/certs/katello-server-ca.crt --cert /etc/foreman/client_cert.pem --key /etc/foreman/client_key.pem https:// <satellite.example.com> /pulp/api/v3/repositories/ | python3 -m json.tool", "{ \"count\": 1, \"next\": null, \"previous\": null, \"results\": [ { \"pulp_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd05a-4b83-73db-b71c-587c6181d89b/\", \"pulp_created\": \"2024-01-03T17:23:47.715882Z\", \"versions_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd05a-4b83-73db-b71c-587c6181d89b/versions/\", \"pulp_labels\": {}, \"latest_version_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd05a-4b83-73db-b71c-587c6181d89b/versions/1/\", \"name\": \"Red_Hat_Enterprise_Linux_8_for_x86_64_-_BaseOS_Kickstart_8_9-49838\", \"description\": null, \"retain_repo_versions\": null, \"remote\": null } ] }", "pulp repository list [ { \"pulp_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd025-c6ef-7237-a99e-70bab3d30941/\", \"pulp_created\": \"2024-01-03T16:26:25.904682Z\", \"versions_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd025-c6ef-7237-a99e-70bab3d30941/versions/\", \"pulp_labels\": {}, \"latest_version_href\": \"/pulp/api/v3/repositories/rpm/rpm/018cd025-c6ef-7237-a99e-70bab3d30941/versions/1/\", \"name\": \"Red_Hat_Enterprise_Linux_8_for_x86_64_-_AppStream_RPMs_8-2875\", \"description\": null, \"retain_repo_versions\": null, \"remote\": null } ]", "curl https:// <satellite.example.com> /pulp/api/v3/status/ | python3 -m json.tool", "{ \"versions\": [ { \"component\": \"core\", \"version\": \"3.39.4\", \"package\": \"pulpcore\", \"domain_compatible\": true }, { \"component\": \"rpm\", \"version\": \"3.23.0\", \"package\": \"pulp-rpm\", \"domain_compatible\": true },", "pulp status { \"versions\": [ { \"component\": \"core\", \"version\": \"3.39.4\", \"package\": \"pulpcore\", \"domain_compatible\": 
true }, { \"component\": \"rpm\", \"version\": \"3.23.0\", \"package\": \"pulp-rpm\", \"domain_compatible\": true }," ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/api_guide/chap-red_hat_satellite-api_guide-using_the_red_hat_satellite_api
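The curl commands collected above all follow the same pattern: an HTTP request with basic authentication against the Satellite API, with the JSON response piped through `python -m json.tool`. As a hedged sketch (the hostname, path, and credentials below are placeholders, not values from a real deployment), the same authenticated GET request can be assembled with Python's standard library:

```python
import base64
import urllib.request


def build_satellite_request(base_url, path, username, password):
    """Build an authenticated GET request for the Satellite API.

    Only constructs the request object; no network I/O happens here,
    so it can be inspected or reused with any opener.
    """
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(f"{base_url}{path}")
    request.add_header("Authorization", f"Basic {token}")
    request.add_header("Accept", "application/json")
    return request


req = build_satellite_request(
    "https://satellite.example.com", "/api/v2/hosts",
    "sat_username", "sat_password",
)
print(req.full_url)             # → https://satellite.example.com/api/v2/hosts
print(req.get_header("Accept"))  # → application/json
```

To actually send the request against a real server, pass `req` to `urllib.request.urlopen()` (adding certificate handling as needed, since the curl examples above use `--insecure` or `--cacert`).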
A.2. Installed Documentation
scl (1) - The man page for the scl tool for enabling Software Collections and running programs in Software Collection's environment. scl --help - General usage information for the scl tool for enabling Software Collections and running programs in Software Collection's environment. rpmbuild (8) - The man page for the rpmbuild utility for building both binary and source packages.
https://docs.redhat.com/en/documentation/red_hat_software_collections/3/html/packaging_guide/sect-installed_documentation
Chapter 4. Testing your Eclipse Vert.x application with JUnit
After you build your Eclipse Vert.x application in the getting-started project, test your application with the JUnit 5 framework to ensure that it runs as expected. The following two dependencies in the Eclipse Vert.x pom.xml file are used for JUnit 5 testing: <dependency> <groupId>io.vertx</groupId> <artifactId>vertx-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter-engine</artifactId> <version>5.4.0</version> <scope>test</scope> </dependency> The vertx-junit5 dependency is required for testing. JUnit 5 provides annotations such as @Test , @BeforeEach , and @DisplayName , which are used to request asynchronous injection of Vertx and VertxTestContext instances. The junit-jupiter-engine dependency is required for execution of tests at runtime. Prerequisites You have built the Eclipse Vert.x getting-started project using the pom.xml file. Procedure Open the generated pom.xml file and set the version of the Surefire Maven plug-in: <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-surefire-plugin</artifactId> <version>3.0.0-M5</version> </plugin> Create a directory structure src/test/java/com/example/ in the root directory, and navigate to it. $ mkdir -p src/test/java/com/example/ $ cd src/test/java/com/example/ Create a Java class file MyAppTest.java containing the test code.
package com.example; import static org.junit.jupiter.api.Assertions.assertEquals; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.DisplayName; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.extension.ExtendWith; import io.vertx.core.Vertx; import io.vertx.core.http.HttpMethod; import io.vertx.junit5.VertxExtension; import io.vertx.junit5.VertxTestContext; @ExtendWith(VertxExtension.class) class MyAppTest { @BeforeEach void prepare(Vertx vertx, VertxTestContext testContext) { // Deploy the verticle vertx.deployVerticle(new MyApp()) .onSuccess(ok -> testContext.completeNow()) .onFailure(failure -> testContext.failNow(failure)); } @Test @DisplayName("Smoke test: check that the HTTP server responds") void smokeTest(Vertx vertx, VertxTestContext testContext) { // Issue an HTTP request vertx.createHttpClient() .request(HttpMethod.GET, 8080, "127.0.0.1", "/") .compose(request -> request.send()) .compose(response -> response.body()) .onSuccess(body -> testContext.verify(() -> { // Check the response assertEquals("Greetings!", body.toString()); testContext.completeNow(); })) .onFailure(failure -> testContext.failNow(failure)); } } To run the JUnit tests on your application using Maven, run the following command from the root directory of the application: mvn clean verify You can check the test results in the target/surefire-reports directory. The com.example.MyAppTest.txt file contains the test results.
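The test above deploys a MyApp verticle that is not shown in this chapter. For reference, here is a minimal sketch of a verticle that the smoke test would pass against. This is an illustrative reconstruction based on the expected "Greetings!" response and the Vert.x 4 Future-based API, not the exact code from the getting-started project:

```java
package com.example;

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Promise;

public class MyApp extends AbstractVerticle {

    @Override
    public void start(Promise<Void> startPromise) {
        // Start an HTTP server on port 8080 that returns the greeting
        // the smoke test asserts on.
        vertx.createHttpServer()
            .requestHandler(request -> request.response().end("Greetings!"))
            .listen(8080)
            .onSuccess(server -> startPromise.complete())
            .onFailure(startPromise::fail);
    }
}
```

Place this file alongside MyAppTest.java structure-wise, under src/main/java/com/example/, so that the @BeforeEach deployment step can resolve the class.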
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/getting_started_with_eclipse_vert.x/proc-testing-vertx-application-using-junit_vertx
Chapter 1. Monitoring overview
1.1. About OpenShift Container Platform monitoring OpenShift Container Platform includes a preconfigured, preinstalled, and self-updating monitoring stack that provides monitoring for core platform components. You also have the option to enable monitoring for user-defined projects . A cluster administrator can configure the monitoring stack with the supported configurations. OpenShift Container Platform delivers monitoring best practices out of the box. A set of alerts is included by default that immediately notifies administrators about issues with a cluster. Default dashboards in the OpenShift Container Platform web console include visual representations of cluster metrics to help you to quickly understand the state of your cluster. In the Observe section of the OpenShift Container Platform web console, you can access and manage monitoring features such as metrics , alerts , monitoring dashboards , and metrics targets . After installing OpenShift Container Platform, cluster administrators can optionally enable monitoring for user-defined projects. By using this feature, cluster administrators, developers, and other users can specify how services and pods are monitored in their own projects. As a cluster administrator, you can find answers to common problems such as user metrics unavailability and high consumption of disk space by Prometheus in Troubleshooting monitoring issues . 1.2. Understanding the monitoring stack The OpenShift Container Platform monitoring stack is based on the Prometheus open source project and its wider ecosystem. The monitoring stack includes the following: Default platform monitoring components . A set of platform monitoring components are installed in the openshift-monitoring project by default during an OpenShift Container Platform installation.
This provides monitoring for core OpenShift Container Platform components including Kubernetes services. The default monitoring stack also enables remote health monitoring for clusters. These components are illustrated in the Installed by default section in the following diagram. Components for monitoring user-defined projects . After optionally enabling monitoring for user-defined projects, additional monitoring components are installed in the openshift-user-workload-monitoring project. This provides monitoring for user-defined projects. These components are illustrated in the User section in the following diagram. 1.2.1. Default monitoring components By default, the OpenShift Container Platform 4.11 monitoring stack includes these components: Table 1.1. Default monitoring stack components Component Description Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys, manages, and automatically updates Prometheus and Alertmanager instances, Thanos Querier, Telemeter Client, and metrics targets. The CMO is deployed by the Cluster Version Operator (CVO). Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus instances and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus Adapter The Prometheus Adapter (PA in the preceding diagram) translates Kubernetes node and pod queries for use in Prometheus. The resource metrics that are translated include CPU and memory utilization metrics. The Prometheus Adapter exposes the cluster resource metrics API for horizontal pod autoscaling. 
The Prometheus Adapter is also used by the oc adm top nodes and oc adm top pods commands. Alertmanager The Alertmanager service handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. kube-state-metrics agent The kube-state-metrics exporter agent (KSM in the preceding diagram) converts Kubernetes objects to metrics that Prometheus can use. openshift-state-metrics agent The openshift-state-metrics exporter (OSM in the preceding diagram) expands upon kube-state-metrics by adding metrics for OpenShift Container Platform-specific resources. node-exporter agent The node-exporter agent (NE in the preceding diagram) collects metrics about every node in a cluster. The node-exporter agent is deployed on every node. Thanos Querier Thanos Querier aggregates and optionally deduplicates core OpenShift Container Platform metrics and metrics for user-defined projects under a single, multi-tenant interface. Telemeter Client Telemeter Client sends a subsection of the data from platform Prometheus instances to Red Hat to facilitate Remote Health Monitoring for clusters. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. Note All components of the monitoring stack use the TLS security profile settings that are centrally configured by a cluster administrator. If you configure a monitoring stack component that uses TLS security settings, the component uses the TLS security profile settings that already exist in the tlsSecurityProfile field in the global OpenShift Container Platform apiservers.config.openshift.io/cluster resource. 1.2.2. 
Default monitoring targets In addition to the components of the stack itself, the default monitoring stack monitors: CoreDNS Elasticsearch (if Logging is installed) etcd Fluentd (if Logging is installed) HAProxy Image registry Kubelets Kubernetes API server Kubernetes controller manager Kubernetes scheduler OpenShift API server OpenShift Controller Manager Operator Lifecycle Manager (OLM) Note Each OpenShift Container Platform component is responsible for its monitoring configuration. For problems with the monitoring of an OpenShift Container Platform component, open a Jira issue against that component, not against the general monitoring component. Other OpenShift Container Platform framework components might be exposing metrics as well. For details, see their respective documentation. 1.2.3. Components for monitoring user-defined projects OpenShift Container Platform 4.11 includes an optional enhancement to the monitoring stack that enables you to monitor services and pods in user-defined projects. This feature includes the following components: Table 1.2. Components for monitoring user-defined projects Component Description Prometheus Operator The Prometheus Operator (PO) in the openshift-user-workload-monitoring project creates, configures, and manages Prometheus and Thanos Ruler instances in the same project. Prometheus Prometheus is the monitoring system through which monitoring is provided for user-defined projects. Prometheus sends alerts to Alertmanager for processing. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform 4.11, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. Alertmanager The Alertmanager service handles alerts received from Prometheus and Thanos Ruler. Alertmanager is also responsible for sending user-defined alerts to external notification systems. Deploying this service is optional. 
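Per the OpenShift documentation, deploying these user-workload components is controlled through the cluster monitoring config map in the openshift-monitoring project. A minimal sketch of the documented setting, applied by a cluster administrator (the config map name and namespace are fixed by the platform):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    # Deploys the components in the openshift-user-workload-monitoring project
    enableUserWorkload: true
```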
Note The components in the preceding table are deployed after monitoring is enabled for user-defined projects. All of the components in the monitoring stack are monitored by the stack and are automatically updated when OpenShift Container Platform is updated. 1.2.4. Monitoring targets for user-defined projects When monitoring is enabled for user-defined projects, you can monitor: Metrics provided through service endpoints in user-defined projects. Pods running in user-defined projects. 1.3. Glossary of common terms for OpenShift Container Platform monitoring This glossary defines common terms that are used in OpenShift Container Platform architecture. Alertmanager Alertmanager handles alerts received from Prometheus. Alertmanager is also responsible for sending the alerts to external notification systems. Alerting rules Alerting rules contain a set of conditions that outline a particular state within a cluster. Alerts are triggered when those conditions are true. An alerting rule can be assigned a severity that defines how the alerts are routed. Cluster Monitoring Operator The Cluster Monitoring Operator (CMO) is a central component of the monitoring stack. It deploys and manages Prometheus instances such as, the Thanos Querier, the Telemeter Client, and metrics targets to ensure that they are up to date. The CMO is deployed by the Cluster Version Operator (CVO). Cluster Version Operator The Cluster Version Operator (CVO) manages the lifecycle of cluster Operators, many of which are installed in OpenShift Container Platform by default. config map A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type ConfigMap . Applications running in a pod can use this data. Container A container is a lightweight and executable image that includes software and all its dependencies. Containers virtualize the operating system. 
As a result, you can run containers anywhere from a data center to a public or private cloud as well as a developer's laptop. custom resource (CR) A CR is an extension of the Kubernetes API. You can create custom resources. etcd etcd is the key-value store for OpenShift Container Platform, which stores the state of all resource objects. Fluentd Fluentd gathers logs from nodes and feeds them to Elasticsearch. Kubelets Runs on nodes and reads the container manifests. Ensures that the defined containers have started and are running. Kubernetes API server Kubernetes API server validates and configures data for the API objects. Kubernetes controller manager Kubernetes controller manager governs the state of the cluster. Kubernetes scheduler Kubernetes scheduler allocates pods to nodes. labels Labels are key-value pairs that you can use to organize and select subsets of objects such as a pod. node A worker machine in the OpenShift Container Platform cluster. A node is either a virtual machine (VM) or a physical machine. Operator The preferred method of packaging, deploying, and managing a Kubernetes application in an OpenShift Container Platform cluster. An Operator takes human operational knowledge and encodes it into software that is packaged and shared with customers. Operator Lifecycle Manager (OLM) OLM helps you install, update, and manage the lifecycle of Kubernetes native applications. OLM is an open source toolkit designed to manage Operators in an effective, automated, and scalable way. Persistent storage Stores the data even after the device is shut down. Kubernetes uses persistent volumes to store the application data. Persistent volume claim (PVC) You can use a PVC to mount a PersistentVolume into a Pod. You can access the storage without knowing the details of the cloud environment. pod The pod is the smallest logical unit in Kubernetes. A pod is comprised of one or more containers to run in a worker node. 
Prometheus Prometheus is the monitoring system on which the OpenShift Container Platform monitoring stack is based. Prometheus is a time-series database and a rule evaluation engine for metrics. Prometheus sends alerts to Alertmanager for processing. Prometheus adapter The Prometheus Adapter translates Kubernetes node and pod queries for use in Prometheus. The resource metrics that are translated include CPU and memory utilization. The Prometheus Adapter exposes the cluster resource metrics API for horizontal pod autoscaling. Prometheus Operator The Prometheus Operator (PO) in the openshift-monitoring project creates, configures, and manages platform Prometheus and Alertmanager instances. It also automatically generates monitoring target configurations based on Kubernetes label queries. Silences A silence can be applied to an alert to prevent notifications from being sent when the conditions for an alert are true. You can mute an alert after the initial notification, while you work on resolving the underlying issue. storage OpenShift Container Platform supports many types of storage, both for on-premise and cloud providers. You can manage container storage for persistent and non-persistent data in an OpenShift Container Platform cluster. Thanos Ruler The Thanos Ruler is a rule evaluation engine for Prometheus that is deployed as a separate process. In OpenShift Container Platform, Thanos Ruler provides rule and alerting evaluation for the monitoring of user-defined projects. web console A user interface (UI) to manage OpenShift Container Platform. 1.4. Additional resources About remote health monitoring Granting users permission to monitor user-defined projects Configuring TLS security profiles 1.5. Next steps Configuring the monitoring stack
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/monitoring/monitoring-overview
Chapter 64. security
Chapter 64. security This chapter describes the commands under the security command. 64.1. security group create Create a new security group Usage: Table 64.1. Positional arguments Value Summary <name> New security group name Table 64.2. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --description <description> Security group description --project <project> Owner's project (name or id) --stateful Security group is stateful (default) --stateless Security group is stateless --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag No tags associated with the security group Table 64.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. 
Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.2. security group delete Delete security group(s) Usage: Table 64.7. Positional arguments Value Summary <group> Security group(s) to delete (name or id) Table 64.8. Command arguments Value Summary -h, --help Show this help message and exit 64.3. security group list List security groups Usage: Table 64.9. Command arguments Value Summary -h, --help Show this help message and exit --project <project> List security groups according to the project (name or ID) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. --tags <tag>[,<tag>,... ] List security group which have all given tag(s) (Comma-separated list of tags) --any-tags <tag>[,<tag>,... ] List security group which have any given tag(s) (Comma-separated list of tags) --not-tags <tag>[,<tag>,... ] Exclude security group which have all given tag(s) (Comma-separated list of tags) --not-any-tags <tag>[,<tag>,... ] Exclude security group which have any given tag(s) (Comma-separated list of tags) Table 64.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 64.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 64.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.13. 
Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.4. security group rule create Create a new security group rule Usage: Table 64.14. Positional arguments Value Summary <group> Create rule in this security group (name or id) Table 64.15. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. --remote-ip <ip-address> Remote ip address block (may use cidr notation; default for IPv4 rule: 0.0.0.0/0, default for IPv6 rule: ::/0) --remote-group <group> Remote security group (name or id) --remote-address-group <group> Remote address group (name or id) --dst-port <port-range> Destination port, may be a single port or a starting and ending port range: 137:139. Required for IP protocols TCP and UDP. Ignored for ICMP IP protocols. 
--protocol <protocol> Ip protocol (ah, dccp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --description <description> Set security group rule description --icmp-type <icmp-type> Icmp type for icmp ip protocols --icmp-code <icmp-code> Icmp code for icmp ip protocols --ingress Rule applies to incoming network traffic (default) --egress Rule applies to outgoing network traffic --ethertype <ethertype> Ethertype of network traffic (ipv4, ipv6; default: based on IP protocol) --project <project> Owner's project (name or id) --project-domain <project-domain> Domain the project belongs to (name or id). this can be used in case collisions between project names exist. Table 64.16. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.17. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.18. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.19. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.5. security group rule delete Delete security group rule(s) Usage: Table 64.20. Positional arguments Value Summary <rule> Security group rule(s) to delete (id only) Table 64.21. Command arguments Value Summary -h, --help Show this help message and exit 64.6. 
security group rule list List security group rules Usage: Table 64.22. Positional arguments Value Summary <group> List all rules in this security group (name or id) Table 64.23. Command arguments Value Summary -h, --help Show this help message and exit --protocol <protocol> List rules by the ip protocol (ah, dccp, egp, esp, gre, icmp, igmp, ipv6-encap, ipv6-frag, ipv6-icmp, ipv6-nonxt, ipv6-opts, ipv6-route, ospf, pgm, rsvp, sctp, tcp, udp, udplite, vrrp and integer representations [0-255] or any; default: any (all protocols)) --ethertype <ethertype> List rules by the ethertype (ipv4 or ipv6) --ingress List rules applied to incoming network traffic --egress List rules applied to outgoing network traffic --long deprecated this argument is no longer needed Table 64.24. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 64.25. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 64.26. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.27. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.7. 
security group rule show Display security group rule details Usage: Table 64.28. Positional arguments Value Summary <rule> Security group rule to display (id only) Table 64.29. Command arguments Value Summary -h, --help Show this help message and exit Table 64.30. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.31. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.32. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.33. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.8. security group set Set security group properties Usage: Table 64.34. Positional arguments Value Summary <group> Security group to modify (name or id) Table 64.35. Command arguments Value Summary -h, --help Show this help message and exit --extra-property type=<property_type>,name=<property_name>,value=<property_value> Additional parameters can be passed using this property. Default type of the extra property is string ( str ), but other types can be used as well. Available types are: dict , list , str , bool , int . In case of list type, value can be semicolon-separated list of values. For dict value is semicolon-separated list of the key:value pairs. 
--name <new-name> New security group name --description <description> New security group description --stateful Security group is stateful (default) --stateless Security group is stateless --tag <tag> Tag to be added to the security group (repeat option to set multiple tags) --no-tag Clear tags associated with the security group. specify both --tag and --no-tag to overwrite current tags 64.9. security group show Display security group details Usage: Table 64.36. Positional arguments Value Summary <group> Security group to display (name or id) Table 64.37. Command arguments Value Summary -h, --help Show this help message and exit Table 64.38. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 64.39. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 64.40. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 64.41. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 64.10. security group unset Unset security group properties Usage: Table 64.42. Positional arguments Value Summary <group> Security group to modify (name or id) Table 64.43. Command arguments Value Summary -h, --help Show this help message and exit --tag <tag> Tag to be removed from the security group (repeat option to remove multiple tags) --all-tag Clear all tags associated with the security group
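Taken together, the subcommands in this chapter support a complete lifecycle for a security group. The following sketch is illustrative only; the group name web-servers, the tag, and the CIDR 203.0.113.0/24 are placeholder values rather than values from this reference:

```shell
# Create a stateful security group and tag it (names are placeholders)
openstack security group create web-servers \
    --description "Allow inbound web traffic" \
    --stateful --tag env=prod

# Allow inbound TCP 80 and 443 from a single illustrative CIDR
openstack security group rule create web-servers \
    --ingress --ethertype IPv4 --protocol tcp \
    --dst-port 80:80 --remote-ip 203.0.113.0/24
openstack security group rule create web-servers \
    --ingress --ethertype IPv4 --protocol tcp \
    --dst-port 443:443 --remote-ip 203.0.113.0/24

# Inspect the result, then remove the group and its rules
openstack security group rule list web-servers
openstack security group show web-servers
openstack security group delete web-servers
```

Deleting a security group also removes its rules; to remove a single rule instead, pass its ID to openstack security group rule delete.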
[ "openstack security group create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--description <description>] [--project <project>] [--stateful | --stateless] [--project-domain <project-domain>] [--tag <tag> | --no-tag] <name>", "openstack security group delete [-h] <group> [<group> ...]", "openstack security group list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--project <project>] [--project-domain <project-domain>] [--tags <tag>[,<tag>,...]] [--any-tags <tag>[,<tag>,...]] [--not-tags <tag>[,<tag>,...]] [--not-any-tags <tag>[,<tag>,...]]", "openstack security group rule create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--remote-ip <ip-address> | --remote-group <group> | --remote-address-group <group>] [--dst-port <port-range>] [--protocol <protocol>] [--description <description>] [--icmp-type <icmp-type>] [--icmp-code <icmp-code>] [--ingress | --egress] [--ethertype <ethertype>] [--project <project>] [--project-domain <project-domain>] <group>", "openstack security group rule delete [-h] <rule> [<rule> ...]", "openstack security group rule list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--protocol <protocol>] [--ethertype <ethertype>] [--ingress | --egress] [--long] [<group>]", "openstack security group rule show [-h] [-f {json,shell,table,value,yaml}] [-c 
COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <rule>", "openstack security group set [-h] [--extra-property type=<property_type>,name=<property_name>,value=<property_value>] [--name <new-name>] [--description <description>] [--stateful | --stateless] [--tag <tag>] [--no-tag] <group>", "openstack security group show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <group>", "openstack security group unset [-h] [--tag <tag> | --all-tag] <group>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/security
Chapter 2. Requirements for bare metal provisioning
Chapter 2. Requirements for bare metal provisioning To provide an overcloud where cloud users can launch bare metal instances, your Red Hat OpenStack Platform (RHOSP) environment must have the required hardware and network configuration. 2.1. Hardware requirements The hardware requirements for the bare metal machines that you want to make available to your cloud users for provisioning depend on the operating system. For information about the hardware requirements for Red Hat Enterprise Linux installations, see Product Documentation for Red Hat Enterprise Linux . All bare metal machines that you want to make available to your cloud users for provisioning must have the following capabilities: A NIC to connect to the bare metal network. A power management interface, for example, Redfish or IPMI, that is connected to a network that is reachable from the ironic-conductor service. By default, ironic-conductor runs on all of the Controller nodes, unless you use composable roles and run ironic-conductor elsewhere. PXE boot on the bare metal network. Disable PXE boot on all other NICs in the deployment. 2.2. Networking requirements The bare metal network must be a private network for the Bare Metal Provisioning service to use for the following operations: The provisioning and management of bare metal machines on the overcloud. Cleaning bare metal nodes when a node is unprovisioned. Tenant access to the bare metal machines. The bare metal network provides DHCP and PXE boot functions to discover bare metal systems. This network must use a native VLAN on a trunked interface so that the Bare Metal Provisioning service can serve PXE boot and DHCP requests. The Bare Metal Provisioning service in the overcloud is designed for a trusted tenant environment because the bare metal machines have direct access to the control plane network of your Red Hat OpenStack Platform (RHOSP) environment. Therefore, the default bare metal network uses a flat network for ironic-conductor services. 
The default flat provisioning network can introduce security concerns in a customer environment because a tenant can interfere with the control plane network. To prevent this risk, you can configure a custom composable bare metal provisioning network for the Bare Metal Provisioning service that does not have access to the control plane. The bare metal network must be untagged for provisioning, and must also have access to the Bare Metal Provisioning API. The control plane network, also known as the director provisioning network, is always untagged. Other networks can be tagged. The Controller nodes that host the Bare Metal Provisioning service must have access to the bare metal network. The NIC that the bare metal machine is configured to PXE-boot from must have access to the bare metal network. The bare metal network is created by the OpenStack operator. Cloud users have direct access to the public OpenStack APIs, and to the bare metal network. With the default flat bare metal network, cloud users also have indirect access to the control plane. The Bare Metal Provisioning service uses the bare metal network for node cleaning. 2.2.1. The default bare metal network In the default Bare Metal Provisioning service deployment architecture, the bare metal network is separate from the control plane network. The bare metal network is a flat network that also acts as the tenant network. This network must route to the Bare Metal Provisioning services on the control plane, known as the director provisioning network. If you define an isolated bare metal network, the bare metal nodes cannot PXE boot. Default bare metal network architecture diagram 2.2.2. The custom composable bare metal network When you use a custom composable bare metal network in your Bare Metal Provisioning service deployment architecture, the bare metal network is a custom composable network that does not have access to the control plane. 
Use a custom composable bare metal network if you want to limit access to the control plane.
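As an illustration of the flat provisioning network described above, a network of this kind can be created with commands along the following lines. The network name baremetal, the physical network name, and the subnet range are assumptions for this sketch, not values mandated by this guide:

```shell
# Create a shared flat provider network to act as the bare metal network
# (the name "baremetal" and the physical network are illustrative assumptions)
openstack network create baremetal \
    --provider-network-type flat \
    --provider-physical-network baremetal \
    --share

# Add a subnet with a DHCP allocation pool for bare metal nodes
# (the subnet range is an illustrative assumption)
openstack subnet create baremetal-subnet \
    --network baremetal \
    --subnet-range 192.168.25.0/24 \
    --allocation-pool start=192.168.25.100,end=192.168.25.200
```

Because the network is flat and shared, PXE boot and DHCP requests from bare metal nodes reach the Bare Metal Provisioning service without VLAN tagging, matching the native VLAN requirement described above.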
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/bare_metal_provisioning/assembly_requirements-for-bare-metal-provisioning
Chapter 1. Introduction to Red Hat software certification program
Chapter 1. Introduction to Red Hat software certification program Use this guide to certify and distribute your software application product on the Red Hat Enterprise Linux and Red Hat OpenShift platforms. 1.1. The Red Hat certification program overview The Red Hat software certification program ensures compatibility of your software application products targeting Red Hat Enterprise Linux and Red Hat OpenShift as the deployment platform. The program has five main elements: Product listing : A source of all the essential product information that potential customers look for before using your product. Components : It comprises the containers, operators, helm charts, and various other infrastructure services that are attached to the product listing. Additionally, it includes the online workflow where the progress and status of certification requests are tracked and reported. Test suite : Tests implemented as an integrated pipeline for software application products undergoing certification. Publication : Non-containerized products : Certified traditional, non-containerized products are published on the Red Hat Ecosystem Catalog. Containerized applications : It has the following product categories: Containers : Certified containers are published on the Red Hat Ecosystem Catalog. Operators : Certified Operators are published on the Red Hat Ecosystem Catalog and in the embedded OperatorHub. Helm Charts : Certified Helm Charts are published on the Red Hat Ecosystem Catalog. Functional certification for OpenShift badges: Cloud-native Network Functions (CNFs) : Partner Validated and Certified CNFs are attached to the product listings and are published on the Red Hat Ecosystem Catalog. Container Network Interface (CNI) : Certified CNIs are published on the Red Hat Ecosystem Catalog. Container Storage Interface (CSI) : Certified CSIs are published on the Red Hat Ecosystem Catalog. Meets Best Practices : Workload follows Red Hat's best practices. 
Applications on OpenStack Infrastructure: Non-containerized, containerized, and VNF applications are certified on OpenStack Infrastructure and published on the Red Hat Ecosystem Catalog. Support : A joint support relationship between you and Red Hat to ensure customer success when deploying certified software application products. This table summarizes the basic differences between a product listing and components: Product listing Component (Project) Includes detailed information about your product. The individual containers, operators, helm charts, and infrastructure services that you test, certify, and then add to the product listing. Products are composed of one or more components. Components are added to a product listing. You add components to a product to proceed with certification. A component can be used in multiple products by adding it to each product listing. A product cannot be published without certified components. Certified components are published as part of a product listing. 1.2. Certification workflow Note Red Hat recommends that you are a Red Hat Certified Engineer or hold equivalent experience before starting the certification process. The following diagram gives an overview of the software certification process. Figure 1.1. Certification workflow 1.3. Getting help and giving feedback For any questions related to the Red Hat certification toolset, certification process, or procedure described in this documentation, refer to the KB Articles , Red Hat Customer Portal , and Red Hat Partner Connect . Note To receive Red Hat product assistance, it is necessary to have the required product entitlements or subscriptions, which may be separate from the partner program and certification program memberships. Opening a support case To open a support case, see How do I open and manage a support case ? 
To open a support case for any certification issue, complete the Support Case Form for Partner Acceleration Desk with special attention to the following fields: From the Issue Category, select Product Certification . From the Product field, select the required product. From the Product Version field, select the version on which your product or application is being certified. In the Problem Statement field, type a problem statement or issue or feedback using the following format: {Partner Certification} (The Issue/Problem or Feedback) Replace (The Issue/Problem or Feedback) with either the issue or problem faced in the certification process or Red Hat product or feedback on the certification toolset or documentation. Note Red Hat recommends that you are a Red Hat Certified Engineer or hold equivalent experience before starting the certification process. Additional resources To know more about the software certification program and platforms, see Red Hat certified software . For a one-stop solution on all your certification needs, see Red Hat Software Certification Quick Start Guide . For more information about program requirements and policies, see Red Hat OpenShift Software Certification Policy Guide and Red Hat Enterprise Linux Software Certification Policy Guide .
[ "For example: {Partner Certification} Error occurred while submitting certification test results using the Red Hat Certification application." ]
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/assembly_introduction-to-redhat-openshift-operator-certification_openshift-sw-cert-workflow
4.8.2. Modifying a Failover Domain
4.8.2. Modifying a Failover Domain To modify a failover domain, follow the steps in this section. From the cluster-specific page, you can configure Failover Domains for that cluster by clicking on Failover Domains along the top of the cluster display. This displays the failover domains that have been configured for this cluster. Click on the name of a failover domain. This displays the configuration page for that failover domain. To modify the Prioritized , Restricted , or No Failback properties for the failover domain, click or unclick the check box next to the property and click Update Properties . To modify the failover domain membership, click or unclick the check box next to the cluster member. If the failover domain is prioritized, you can also modify the priority setting for the cluster member. Click Update Settings .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/s2-config-modify-failoverdm-conga-ca
Chapter 13. Configuring AWS STS for Red Hat Quay
Chapter 13. Configuring AWS STS for Red Hat Quay Support for Amazon Web Services (AWS) Security Token Service (STS) is available for standalone Red Hat Quay deployments and Red Hat Quay on OpenShift Container Platform. AWS STS is a web service for requesting temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users and for users that you authenticate, or federated users . This feature is useful for clusters using Amazon S3 as object storage, allowing Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized. Configuring AWS STS is a multi-step process that requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources. Use the following procedures to configure AWS STS for Red Hat Quay. 13.1. Creating an IAM user Use the following procedure to create an IAM user. Procedure Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console. In the navigation pane, under Access management click Users . Click Create User and enter the following information: Enter a valid username, for example, quay-user . For Permissions options , click Add user to group . On the review and create page, click Create user . You are redirected to the Users page. Click the username, for example, quay-user . Copy the ARN of the user, for example, arn:aws:iam::123492922789:user/quay-user . On the same page, click the Security credentials tab. Navigate to Access keys . Click Create access key . On the Access key best practices & alternatives page, click Command Line Interface (CLI) , then check the confirmation box. Then click Next . Optional. On the Set description tag - optional page, enter a description. Click Create access key . Copy and store the access key and the secret access key. 
Important This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time. Click Done . 13.2. Creating an S3 role Use the following procedure to create an S3 role for AWS STS. Prerequisites You have created an IAM user and stored the access key and the secret access key. Procedure If you are not already there, navigate to the IAM dashboard by clicking Dashboard . In the navigation pane, click Roles under Access management . Click Create role . Click Custom Trust Policy , which shows an editable JSON policy. By default, it shows the following information: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": {}, "Action": "sts:AssumeRole" } ] } Under the Principal configuration field, add your AWS ARN information. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" }, "Action": "sts:AssumeRole" } ] } Click . On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click . On the Name, review, and create page, enter the following information: Enter a role name, for example, example-role . Optional. Add a description. Click the Create role button. You are navigated to the Roles page. Under Role name , the newly created S3 role should be available. 13.3. Configuring Red Hat Quay to use AWS STS Use the following procedure to edit your Red Hat Quay config.yaml file to use AWS STS. Procedure Update your config.yaml file for Red Hat Quay to include the following information: # ... DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6 # ... 
1 The unique Amazon Resource Name (ARN) required when configuring AWS STS. 2 The name of your s3 bucket. 3 The storage path for data. Usually /datastorage . 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 5 The generated AWS S3 user access key required when configuring AWS STS. 6 The generated AWS S3 user secret key required when configuring AWS STS. Restart your Red Hat Quay deployment. Verification Tag a sample image, for example, busybox , that will be pushed to the repository. For example: USD podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test Push the sample image by running the following command: USD podman push <quay-server.example.com>/<organization_name>/busybox:test Verify that the push was successful by navigating to the organization that you pushed the image to in your Red Hat Quay registry and checking its Tags page. Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket. Click the name of your s3 bucket. On the Objects page, click datastorage/ . On the datastorage/ page, the following resources should be seen: sha256/ uploads/ These resources indicate that the push was successful, and that AWS STS is properly configured.
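The two JSON policies shown in Section 13.2 differ only in the Principal field: the user ARN copied in Section 13.1 is filled into the default template. As a sketch, that substitution looks like this in Python (illustrative only; in the AWS console you edit the policy JSON directly):

```python
import json

def build_trust_policy(user_arn: str) -> dict:
    """Fill the IAM user ARN into the default custom trust policy template."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "Statement1",
                "Effect": "Allow",
                # The only edit made in the console: who may assume the role.
                "Principal": {"AWS": user_arn},
                "Action": "sts:AssumeRole",
            }
        ],
    }

policy = build_trust_policy("arn:aws:iam::123492922789:user/quay-user")
print(json.dumps(policy, indent=4))
```

The resulting document matches the second policy shown in the procedure, granting only the named IAM user permission to assume the S3 role.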
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/manage_red_hat_quay/configuring-aws-sts-quay
9.4. Configuring Publishing to an LDAP Directory
9.4. Configuring Publishing to an LDAP Directory The general process to configure publishing involves setting up a publisher to publish the certificates or CRLs to the specific location. There can be a single publisher or multiple publishers, depending on how many locations will be used. The locations can be split by certificates and CRLs or finer definitions, such as certificate type. Rules determine which type to publish and to what location by being associated with the publisher. Configuring LDAP publishing is similar to other publishing procedures, with additional steps to configure the directory: Configure the Directory Server to which certificates will be published. Certain attributes have to be added to entries, and bind identities and authentication methods have to be configured. Configure a publisher for each type of object published: CA certificates, cross-pair certificates, CRLs, and user certificates. The publisher declares in which attribute to store the object. The attributes set by default are the X.500 standard attributes for storing each object type. This attribute can be changed in the publisher, but generally, it is not necessary to change the LDAP publishers. Set up mappers to enable an entry's DN to be derived from the certificate's subject name. This generally does not need to be set for CA certificates, CRLs, and user certificates. There can be more than one mapper set for a type of certificate. This can be useful, for example, to publish certificates for two sets of users from different divisions of a company who are located in different parts of the directory tree. A mapper is created for each of the groups to specify a different branch of the tree. For details about setting up mappers, see Section 9.4.3, "Creating Mappers" . Create rules to connect publishers to mappers, as described in Section 9.5, "Creating Rules" . Enable publishing, as described in Section 9.6, "Enabling Publishing" . 9.4.1. 
Configuring the LDAP Directory Before certificates and CRLs can be published, the Directory Server must be configured to work with the publishing system. This means that user entries must have attributes that allow them to receive certificate information, and entries must be created to represent the CRLs. Set up the entry for the CA. For the Certificate Manager to publish its CA certificate and CRL, the directory must include an entry for the CA. Note When LDAP publishing is configured, the Certificate Manager automatically creates or converts an entry for the CA in the directory. This option is set in both the CA and CRL mapper instances and enabled by default. If the directory restricts the Certificate Manager from creating entries in the directory, turn off this option in those mapper instances, and add an entry for the CA manually in the directory. When adding the CA's entry to the directory, select the entry type based on the DN of the CA: If the CA's DN begins with the cn component, create a new person entry for the CA. Selecting a different type of entry may not allow the cn component to be specified. If the CA's DN begins with the ou component, create a new organizationalunit entry for the CA. The entry does not have to be in the pkiCA or certificationAuthority object class. The Certificate Manager will convert this entry to the pkiCA or certificationAuthority object class automatically by publishing its CA's signing certificate. Note The pkiCA object class is defined in RFC 4523, while the certificationAuthority object class is defined in the (obsolete) RFC 2256. Either object class is acceptable, depending on the schema definitions used by the Directory Server. In some situations, both object classes can be used for the same CA entry. For more information on creating directory entries, see the Red Hat Directory Server documentation. Add the correct schema elements to the CA and user directory entries. 
For a Certificate Manager to publish certificates and CRLs to a directory, it must be configured with specific attributes and object classes. Object Type Schema Reason End-entity certificate userCertificate;binary (attribute) This is the attribute to which the Certificate Manager publishes the certificate. This is a multi-valued attribute, and each value is a DER-encoded binary X.509 certificate. The LDAP object class named inetOrgPerson allows this attribute. The strongAuthenticationUser object class allows this attribute and can be combined with any other object class to allow certificates to be published to directory entries with other object classes. The Certificate Manager does not automatically add this object class to the schema table of the corresponding Directory Server. If the directory object that it finds does not allow the userCertificate;binary attribute, adding or removing the certificate fails. CA certificate caCertificate;binary (attribute) This is the attribute to which the Certificate Manager publishes the certificate. The Certificate Manager publishes its own CA certificate to its own LDAP directory entry when the server starts. The entry corresponds to the Certificate Manager's issuer name. This is a required attribute of the pkiCA or certificationAuthority object class. The Certificate Manager adds this object class to the directory entry for the CA if it can find the CA's directory entry. CRL certificateRevocationList;binary (attribute) This is the attribute to which the Certificate Manager publishes the CRL. The Certificate Manager publishes the CRL to its own LDAP directory entry. The entry corresponds to the Certificate Manager's issuer name. This is an attribute of the pkiCA or certificationAuthority object class. The value of the attribute is the DER-encoded binary X.509 CRL. The CA's entry must already contain the pkiCA or certificationAuthority object class for the CRL to be published to the entry. 
Delta CRL deltaRevocationList;binary (attribute) This is the attribute to which the Certificate Manager publishes the delta CRL. The Certificate Manager publishes the delta CRL to its own LDAP directory entry, separate from the full CRL. The delta CRL entry corresponds to the Certificate Manager's issuer name. This attribute belongs to the deltaCRL or certificationAuthority-V2 object class. The value of the attribute is the DER-encoded binary X.509 delta CRL. Set up a bind DN for the Certificate Manager to use to access the Directory Server. The Certificate Manager user must have read-write permissions to the directory to publish certificates and CRLs to the directory so that the Certificate Manager can modify the user entries with certificate-related information and the CA entry with CA's certificate and CRL related information. The bind DN entry can be either of the following: An existing DN that has write access, such as the Directory Manager. A new user which is granted write access. The entry can be identified by the Certificate Manager's DN, such as cn=testCA, ou=Research Dept, o=Example Corporation, st=California, c=US . Note Carefully consider what privileges are given to this user. This user can be restricted in what it can write to the directory by creating ACLs for the account. For instructions on giving write access to the Certificate Manager's entry, see the Directory Server documentation. Set the directory authentication method for how the Certificate Manager authenticates to Directory Server. There are three options: basic authentication (simple username and password); SSL without client authentication (simple username and password); and SSL with client authentication (certificate-based). See the Red Hat Directory Server documentation for instructions on setting up these methods of communication with the server. 9.4.2. 
Configuring LDAP Publishers The Certificate Manager creates, configures, and enables a set of publishers that are associated with LDAP publishing. The default publishers (for CA certificates, user certificates, CRLs, and cross-pair certificates) already conform to the X.500 standard attributes for storing certificates and CRLs and do not need to be changed. Table 9.1. LDAP Publishers Publisher Description LdapCaCertPublisher Publishes CA certificates to the LDAP directory. LdapCrlPublisher Publishes CRLs to the LDAP directory. LdapDeltaCrlPublisher Publishes delta CRLs to the LDAP directory. LdapUserCertPublisher Publishes all types of end-entity certificates to the LDAP directory. LdapCrossCertPairPublisher Publishes cross-signed certificates to the LDAP directory. 9.4.3. Creating Mappers Mappers are only used with LDAP publishing. Mappers define a relationship between a certificate's subject name and the DN of the directory entry to which the certificate is published. The Certificate Manager needs to derive the DN of the entry from the certificate or the certificate request so it can determine which entry to use. The mapper defines the relationship between the DN for the user entry and the subject name of the certificate or other input information so that the exact DN of the entry can be determined and found in the directory. When it is configured, the Certificate Manager automatically creates a set of mappers defining the most common relationships. The default mappers are listed in Table 9.2, "Default Mappers" . Table 9.2. Default Mappers Mapper Description LdapUserCertMap Locates the correct attribute of user entries in the directory in order to publish user certificates. LdapCrlMap Locates the correct attribute of the CA's entry in the directory in order to publish the CRL. LdapCaCertMap Locates the correct attribute of the CA's entry in the directory in order to publish the CA certificate. 
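Conceptually, each mapper in Table 9.2 derives a directory entry DN from data in the certificate, typically its subject name, by applying a DN pattern. A minimal sketch of that idea follows; the placeholder-based pattern syntax here is purely illustrative and is not the actual mapper plug-in configuration syntax described in Section C.2:

```python
def subject_to_rdns(subject: str) -> dict:
    """Parse a subject name such as 'CN=Jane Smith, UID=jsmith, O=Example Corp'
    into a dict of attribute/value pairs."""
    rdns = {}
    for ava in subject.split(","):
        attr, _, value = ava.strip().partition("=")
        rdns[attr.upper()] = value
    return rdns

def map_entry_dn(subject: str, pattern: str) -> str:
    """Fill subject-name components into a DN pattern, e.g. mapping a user
    certificate to that user's entry in a specific directory branch."""
    return pattern.format(**subject_to_rdns(subject))

# Two patterns could route users from different divisions to different
# branches of the tree, as described above (names are hypothetical).
dn = map_entry_dn("CN=Jane Smith, UID=jsmith, O=Example Corp",
                  "uid={UID},ou=People,o=Example Corp")
print(dn)  # → uid=jsmith,ou=People,o=Example Corp
```

The real mapper modules perform this derivation inside the Certificate Manager; the sketch only shows why a separate mapper per directory branch is useful.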
To use the default mappers, configure each of the macros by specifying the DN pattern and whether to create the CA entry in the directory. To use other mappers, create and configure an instance of the mapper. For more information, see Section C.2, "Mapper Plug-in Modules " . Log into the Certificate Manager Console. In the Configuration tab, select Certificate Manager from the navigation tree on the left. Select Publishing , and then Mappers . The Mappers Management tab, which lists configured mappers, opens on the right. To create a new mapper instance, click Add . The Select Mapper Plugin Implementation window opens, which lists registered mapper modules. Select a module, and edit it. For complete information about these modules, see Section C.2, "Mapper Plug-in Modules " . Edit the mapper instance, and click OK . See Section C.2, "Mapper Plug-in Modules " for detailed information about each mapper. Note pkiconsole is being deprecated. 9.4.4. Completing Configuration: Rules and Enabling After configuring the mappers for LDAP publishing, configure the rules for the published certificates and CRLs, as described in Section 9.5, "Creating Rules" . Once the configuration is complete, enable publishing, as described in Section 9.6, "Enabling Publishing" .
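The schema changes described in Section 9.4.1, adding the pkiCA object class and a binary CA certificate value to the CA's entry, can be sketched as an LDIF modify operation. The DN and certificate bytes below are placeholders, and in practice the Certificate Manager performs this publishing itself; the sketch only shows the shape of the resulting directory update:

```python
import base64

def ca_publish_ldif(dn: str, der_cert: bytes) -> str:
    """Build an LDIF changerecord that adds the pkiCA object class and a
    DER-encoded CA certificate to an existing CA entry."""
    lines = [
        f"dn: {dn}",
        "changetype: modify",
        "add: objectClass",
        "objectClass: pkiCA",
        "-",
        "add: caCertificate;binary",
        # Binary attribute values are base64-encoded and tagged with '::' in LDIF.
        "caCertificate;binary:: " + base64.b64encode(der_cert).decode("ascii"),
    ]
    return "\n".join(lines) + "\n"

# Placeholder DER bytes stand in for a real CA signing certificate.
print(ca_publish_ldif("cn=testCA,ou=Research Dept,o=Example Corporation",
                      b"0\x82\x01\x00"))
```

A record like this is what the bind DN configured for the Certificate Manager must have write permission to apply.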
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Configuring_Publishers_for_LDAP_Publishing
Chapter 6. Installing a cluster on AWS with network customizations
Chapter 6. Installing a cluster on AWS with network customizations In OpenShift Container Platform version 4.12, you can install a cluster on Amazon Web Services (AWS) with customized network configuration options. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an AWS account to host the cluster. Important If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-lived credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. If you use a firewall, you configured it to allow the sites that your cluster requires access to. If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the kube-system namespace, you can manually create and maintain IAM credentials . 6.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. 
If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.4. 
Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 6.5. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. 
Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information on these fields, refer to Installation configuration parameters . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. Important The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use this range or any range that overlaps with this range for any networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. You cannot override the values specified in phase 1 in the install-config.yaml file during phase 2. However, you can further customize the network plugin during phase 2. 6.6. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS). Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. 
Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select AWS as the platform to target. If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program. Select the AWS region to deploy the cluster to. Select the base domain for the Route 53 service that you configured for your cluster. Enter a descriptive name for your cluster. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 6.6.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. 
When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 6.6.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 6.1. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. 
{ "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 6.6.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 6.2. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. 
For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 6.6.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 6.3. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . 
String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String compute.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. 
For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String controlPlane.hyperthreading Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. 
If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings platform.aws.lbType Required to set the NLB load balancer type in AWS. 
Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 6.6.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 6.4. Optional AWS parameters Parameter Description Values compute.platform.aws.amiID The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. compute.platform.aws.iamRole A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. compute.platform.aws.rootVolume.iops The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . compute.platform.aws.rootVolume.size The size in GiB of the root volume. Integer, for example 500 . 
compute.platform.aws.rootVolume.type The type of the root volume. Valid AWS EBS volume type , such as io1 . compute.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . compute.platform.aws.type The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. compute.platform.aws.zones The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . compute.aws.region The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. controlPlane.platform.aws.amiID The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. controlPlane.platform.aws.iamRole A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. 
The name of a valid AWS IAM role. controlPlane.platform.aws.rootVolume.kmsKeyARN The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . controlPlane.platform.aws.type The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. controlPlane.platform.aws.zones The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . controlPlane.aws.region The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . platform.aws.amiID The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. platform.aws.hostedZone An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . platform.aws.serviceEndpoints.name The AWS service endpoint name. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name. platform.aws.serviceEndpoints.url The AWS service endpoint URL. 
The URL must use the https protocol and the host must trust the certificate. Valid AWS service endpoint URL. platform.aws.userTags A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. platform.aws.propagateUserTags A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . platform.aws.subnets If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. Valid subnet IDs. 6.6.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.5. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS, RHEL 8.6 and later [3] 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or hyperthreading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. 
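The vCPU formula in footnote [1] above can be expressed directly. A minimal sketch (the function name is illustrative):

```python
# Sketch of the footnote's formula:
# (threads per core x cores) x sockets = vCPUs.

def vcpus(threads_per_core: int, cores: int, sockets: int) -> int:
    return threads_per_core * cores * sockets

# With SMT disabled, one vCPU equals one physical core:
print(vcpus(1, 4, 1))  # 4 vCPUs, meeting the control plane minimum
# With SMT enabled, each core contributes two threads:
print(vcpus(2, 2, 1))  # 4 vCPUs from a 2-core, 1-socket machine
```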
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later. If an instance type for your platform meets the minimum requirements for cluster machines, you can use it in OpenShift Container Platform. Additional resources Optimizing storage 6.6.3. Tested instance types for AWS The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 6.1. Machine types based on 64-bit x86 architecture c4.* c5.* c5a.* i3.* m4.* m5.* m5a.* m6a.* m6i.* r4.* r5.* r5a.* r6i.* t3.* t3a.* 6.6.4. Tested instance types for AWS on 64-bit ARM infrastructures The following Amazon Web Services (AWS) ARM64 instance types have been tested with OpenShift Container Platform. Note Use the machine types included in the following charts for your AWS ARM instances.
If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation". Example 6.2. Machine types based on 64-bit ARM architecture c6g.* c7g.* m6g.* m7g.* r8g.* 6.6.5. Sample customized install-config.yaml file for AWS You can customize the installation configuration file ( install-config.yaml ) to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.

apiVersion: v1
baseDomain: example.com 1
credentialsMode: Mint 2
controlPlane: 3 4
  hyperthreading: Enabled 5
  name: master
  platform:
    aws:
      zones:
      - us-west-2a
      - us-west-2b
      rootVolume:
        iops: 4000
        size: 500
        type: io1 6
      metadataService:
        authentication: Optional 7
      type: m6i.xlarge
  replicas: 3
compute: 8
- hyperthreading: Enabled 9
  name: worker
  platform:
    aws:
      rootVolume:
        iops: 2000
        size: 500
        type: io1 10
      metadataService:
        authentication: Optional 11
      type: c5.4xlarge
      zones:
      - us-west-2c
  replicas: 3
metadata:
  name: test-cluster 12
networking: 13
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes 14
  serviceNetwork:
  - 172.30.0.0/16
platform:
  aws:
    region: us-west-2 15
    propagateUserTags: true 16
    userTags:
      adminContact: jdoe
      costCenter: 7536
    amiID: ami-96c6f8f7 17
    serviceEndpoints: 18
    - name: ec2
      url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com
fips: false 19
sshKey: ssh-ed25519 AAAA... 20
pullSecret: '{"auths": ...}' 21

1 12 15 21 Required. The installation program prompts you for this value. 2 Optional: Add this parameter to force the Cloud Credential Operator (CCO) to use the specified mode, instead of having the CCO dynamically try to determine the capabilities of the credentials.
For details about CCO modes, see the Cloud Credential Operator entry in the Red Hat Operators reference content. 3 8 13 16 If you do not provide these parameters and values, the installation program provides the default value. 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 5 9 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger instance types, such as m4.2xlarge or m5.2xlarge , for your machines if you disable simultaneous multithreading. 6 10 To configure faster storage for etcd, especially for larger clusters, set the storage type as io1 and set iops to 2000 . 7 11 Whether to require the Amazon EC2 Instance Metadata Service v2 (IMDSv2). To require IMDSv2, set the parameter value to Required . To allow the use of both IMDSv1 and IMDSv2, set the parameter value to Optional . If no value is specified, both IMDSv1 and IMDSv2 are allowed. Note The IMDS configuration for control plane machines that is set during cluster installation can only be changed by using the AWS CLI. The IMDS configuration for compute machines can be changed by using compute machine sets. 14 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 
17 The ID of the AMI used to boot machines for the cluster. If set, the AMI must belong to the same region as the cluster. 18 The AWS service endpoints. Custom endpoints are required when installing to an unknown AWS region. The endpoint URL must use the https protocol and the host must trust the certificate. 19 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 20 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.6.6. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port> 1
  httpsProxy: https://<username>:<pswd>@<ip>:<port> 2
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3
additionalTrustBundle: | 4
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5

1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. If you have added the Amazon EC2 , Elastic Load Balancing , and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections.
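The noProxy domain-matching rule described above, where a leading . matches subdomains only, can be sketched as follows. This is an illustrative approximation, not the installer's actual matcher; in particular, the sketch assumes a bare entry matches only the exact host:

```python
# Sketch of noProxy domain matching (illustrative only):
# - "*" bypasses the proxy for all destinations
# - ".y.com" matches "x.y.com" but not "y.com" itself
# - a bare name matches the exact host here (an assumption of this sketch)

def bypasses_proxy(host: str, no_proxy: list[str]) -> bool:
    for entry in no_proxy:
        if entry == "*":
            return True
        if entry.startswith("."):
            if host.endswith(entry):   # subdomain-only suffix match
                return True
        elif host == entry:            # exact host match
            return True
    return False

print(bypasses_proxy("x.y.com", [".y.com"]))  # True
print(bypasses_proxy("y.com", [".y.com"]))    # False
```

Real noProxy entries can also be IP addresses or CIDR blocks, which this domain-only sketch does not handle.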
The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.7. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. 
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin, such as OpenShift SDN or OVN-Kubernetes. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 6.7.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 6.6. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:

spec:
  clusterNetwork:
  - cidr: 10.128.0.0/19
    hostPrefix: 23
  - cidr: 10.128.32.0/19
    hostPrefix: 23

You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example:

spec:
  serviceNetwork:
  - 172.30.0.0/14

You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect.
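The requirement that the cluster, service, and machine network blocks not overlap can be checked with the Python standard library. A sketch using the documented default CIDRs (substitute your own values):

```python
# Sketch: verify that the three install-config network blocks are disjoint.
import ipaddress

blocks = {
    "clusterNetwork": ipaddress.ip_network("10.128.0.0/14"),
    "serviceNetwork": ipaddress.ip_network("172.30.0.0/16"),
    "machineNetwork": ipaddress.ip_network("10.0.0.0/16"),
}

names = list(blocks)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        if blocks[a].overlaps(blocks[b]):
            raise ValueError(f"{a} ({blocks[a]}) overlaps {b} ({blocks[b]})")
print("no overlaps")  # the documented defaults are disjoint
```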
Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 6.7. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN network plugin. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OpenShift SDN network plugin The following table describes the configuration fields for the OpenShift SDN network plugin: Table 6.8. openshiftSDNConfig object Field Type Description mode string Configures the network isolation mode for OpenShift SDN. The default value is NetworkPolicy . The values Multitenant and Subnet are available for backwards compatibility with OpenShift Container Platform 3.x but are not recommended. This value cannot be changed after cluster installation. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. 
You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 50 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1450 . This value cannot be changed after cluster installation. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . This value cannot be changed after cluster installation. If you are running in a virtualized environment with existing nodes that are part of another VXLAN network, then you might be required to change this. For example, when running an OpenShift SDN overlay on top of VMware NSX-T, you must select an alternate port for the VXLAN, because both SDNs use the same default VXLAN port number. On Amazon Web Services (AWS), you can select an alternate port for the VXLAN between port 9000 and port 9999 . Example OpenShift SDN configuration

defaultNetwork:
  type: OpenShiftSDN
  openshiftSDNConfig:
    mode: NetworkPolicy
    mtu: 1450
    vxlanPort: 4789

Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 6.9. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes.
If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify an empty object to enable IPsec encryption. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. v4InternalSubnet If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . This field cannot be changed after installation. The default value is 100.64.0.0/16 . v6InternalSubnet If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation.
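The MTU rules above reduce to subtracting the encapsulation overhead from the lowest node MTU: 100 bytes for Geneve (OVN-Kubernetes) and 50 bytes for VXLAN (OpenShift SDN). A minimal sketch reproducing the documented examples:

```python
# Sketch: overlay MTU = lowest node MTU minus the encapsulation overhead
# for the chosen network plugin.

OVERHEAD = {"OVNKubernetes": 100, "OpenShiftSDN": 50}

def overlay_mtu(node_mtus: list[int], plugin: str) -> int:
    return min(node_mtus) - OVERHEAD[plugin]

print(overlay_mtu([9001, 1500], "OVNKubernetes"))  # 1400, as in the Geneve example
print(overlay_mtu([9001, 1500], "OpenShiftSDN"))   # 1450, as in the VXLAN example
```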
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . Table 6.10. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 6.11. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. Note In OpenShift Container Platform 4.12, egress IP is only assigned to the primary interface. Consequently, setting routingViaHost to true will not work for egress IP in OpenShift Container Platform 4.12. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack.
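As a quick sanity check of the internal-subnet sizing rule in Table 6.9 — with a clusterNetwork.cidr of 10.128.0.0/14 and a hostPrefix of /23 , the maximum node count is 2^(23-14)=512 — the arithmetic can be reproduced in any POSIX shell:

```shell
# Max nodes = 2^(hostPrefix - clusterNetwork prefix length)
host_prefix=23
cidr_prefix=14
max_nodes=$((1 << (host_prefix - cidr_prefix)))
echo "$max_nodes"    # prints 512
```

The v4InternalSubnet you choose (default 100.64.0.0/16 ) must then provide more addresses than this node count.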
Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 6.12. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 6.8. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following examples: Specify a different VXLAN port for the OpenShift SDN network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800 Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {} Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. Note For more information about using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer . 6.9. Configuring an Ingress Controller Network Load Balancer on a new AWS cluster You can create an Ingress Controller backed by an AWS Network Load Balancer (NLB) on a new cluster. Prerequisites Create the install-config.yaml file and complete any modifications to it. Procedure Create an Ingress Controller backed by an AWS NLB on a new cluster. Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the name of the directory that contains the install-config.yaml file for your cluster. 
Create a file that is named cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory: USD touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1 1 For <installation_directory> , specify the directory name that contains the manifests/ directory for your cluster. After creating the file, several network configuration files are in the manifests/ directory, as shown: USD ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml Example output cluster-ingress-default-ingresscontroller.yaml Open the cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService Save the cluster-ingress-default-ingresscontroller.yaml file and quit the text editor. Optional: Back up the manifests/cluster-ingress-default-ingresscontroller.yaml file. The installation program deletes the manifests/ directory when creating the cluster. 6.10. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with OVN-Kubernetes. This allows a hybrid cluster that supports different node networking configurations. For example, this is necessary to run both Linux and Windows nodes in a cluster. Important You must configure hybrid networking with OVN-Kubernetes during the installation of your cluster. You cannot switch to hybrid networking after the installation process. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. 
See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, such as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . 
Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information on using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Configure an account with the cloud platform that hosts your cluster. Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster. 
Note The elevated permissions provided by the AdministratorAccess policy are required only during installation. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. 
Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure.
Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now next to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 6.14. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available.
Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 6.15. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service. 6.16. Next steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials .
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "apiVersion: v1 baseDomain: example.com 1 credentialsMode: Mint 2 controlPlane: 3 4 hyperthreading: Enabled 5 name: master platform: aws: zones: - us-west-2a - us-west-2b rootVolume: iops: 4000 size: 500 type: io1 6 metadataService: authentication: Optional 7 type: m6i.xlarge replicas: 3 compute: 8 - hyperthreading: Enabled 9 name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 10 metadataService: authentication: Optional 11 type: c5.4xlarge zones: - us-west-2c replicas: 3 metadata: name: test-cluster 12 networking: 13 clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 14 serviceNetwork: - 172.30.0.0/16 platform: aws: region: us-west-2 15 propagateUserTags: true 16 userTags: adminContact: jdoe costCenter: 7536 amiID: ami-96c6f8f7 17 serviceEndpoints: 18 - name: ec2 url: https://vpce-id.ec2.us-west-2.vpce.amazonaws.com fips: false 19 sshKey: ssh-ed25519 AAAA... 
20 pullSecret: '{\"auths\": ...}' 21", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23", "spec: serviceNetwork: - 172.30.0.0/14", "defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789", "defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}", "kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s", "./openshift-install create manifests --dir <installation_directory> 1", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: openshiftSDNConfig: vxlanPort: 4800", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: {}", "./openshift-install create manifests --dir <installation_directory> 1", "touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml 1", "ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml", "cluster-ingress-default-ingresscontroller.yaml", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: creationTimestamp: null name: default namespace: openshift-ingress-operator spec: endpointPublishingStrategy: loadBalancer: scope: External providerParameters: type: AWS aws: type: NLB type: LoadBalancerService", 
"./openshift-install create manifests --dir <installation_directory>", "cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "cat <installation_directory>/auth/kubeadmin-password", "oc get routes -n openshift-console | grep 'console-openshift'", "console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_aws/installing-aws-network-customizations
21.9. Live Migration Errors
21.9. Live Migration Errors There may be cases where a guest writes to memory faster than the migration can transfer it, causing the memory contents to be re-transferred over and over again and slowing down the migration. If this occurs, and the guest is writing more than several tens of MB per second, live migration may fail to finish (converge). This issue is not scheduled to be resolved for Red Hat Enterprise Linux 6, and is scheduled to be fixed in Red Hat Enterprise Linux 7. The current live-migration implementation has a default migration downtime configured to 30ms. This value determines how long the guest is paused at the end of the migration in order to transfer the leftover dirty memory. Higher values increase the odds that live migration will converge.
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/live-migration-errors
Chapter 2. Ceph block devices
Chapter 2. Ceph block devices As a storage administrator, being familiar with Ceph's block device commands can help you effectively manage the Red Hat Ceph Storage cluster. You can create and manage block devices pools and images, along with enabling and disabling the various features of Ceph block devices. 2.1. Prerequisites A running Red Hat Ceph Storage cluster. 2.2. Displaying the command help Display command, and sub-command online help from the command-line interface. Note The -h option still displays help for all available commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure Use the rbd help command to display help for a particular rbd command and its subcommand: Syntax To display help for the snap list command: 2.3. Creating a block device pool Before using the block device client, ensure a pool for rbd exists, is enabled and initialized. Note You MUST create a pool first before you can specify it as a source. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To create an rbd pool, execute the following: Syntax Example Additional Resources See the Pools chapter in the Red Hat Ceph Storage Storage Strategies Guide for additional details. 2.4. Creating a block device image Before adding a block device to a node, create an image for it in the Ceph storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To create a block device image, execute the following command: Syntax Example This example creates a 1 GB image named image1 that stores information in a pool named pool1 . Note Ensure the pool exists before creating an image. Additional Resources See the Creating a block device pool section in the Red Hat Ceph Storage Block Device Guide for additional details. 2.5. Listing the block device images List the block device images. Prerequisites A running Red Hat Ceph Storage cluster. 
Root-level access to the client node. Procedure To list block devices in the rbd pool, execute the following command: Note rbd is the default pool name. Example To list block devices in a specific pool: Syntax Example 2.6. Retrieving the block device image information Retrieve information on the block device image. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To retrieve information from a particular image in the default rbd pool, run the following command: Syntax Example To retrieve information from an image within a pool: Syntax Example 2.7. Resizing a block device image Ceph block device images are thin-provisioned. They do not actually use any physical storage until you begin saving data to them. However, they do have a maximum capacity that you set with the --size option. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To increase the maximum size of a Ceph block device image for the default rbd pool: Syntax Example To decrease the maximum size of a Ceph block device image for the default rbd pool: Syntax Example To increase the maximum size of a Ceph block device image for a specific pool: Syntax Example To decrease the maximum size of a Ceph block device image for a specific pool: Syntax Example 2.8. Removing a block device image Remove a block device image. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To remove a block device from the default rbd pool: Syntax Example To remove a block device from a specific pool: Syntax Example 2.9. Moving a block device image to the trash RADOS Block Device (RBD) images can be moved to the trash using the rbd trash command. This command provides more options than the rbd rm command. Once an image is moved to the trash, it can be removed from the trash at a later time. This helps to avoid accidental deletion. 
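The Syntax and Example blocks for sections 2.2 through 2.8 above did not survive extraction. As a hedged sketch of the commands those sections describe — the names pool1 and image1 and the 1 GB size are illustrative placeholders, and all of these commands require a running Ceph cluster:

```shell
# Display help for an rbd subcommand (section 2.2)
rbd help snap list

# Create and initialize a pool for use with rbd (section 2.3)
ceph osd pool create pool1
rbd pool init -p pool1

# Create a 1 GB block device image; --size is in megabytes (section 2.4)
rbd create image1 --size 1024 --pool pool1

# List images in the default rbd pool, or in a specific pool (section 2.5)
rbd ls
rbd ls pool1

# Retrieve image information (section 2.6)
rbd info image1
rbd info pool1/image1

# Grow an image, or shrink it with the explicit --allow-shrink guard (section 2.7)
rbd resize --size 2048 pool1/image1
rbd resize --size 1024 pool1/image1 --allow-shrink

# Remove an image from the default pool or from a named pool (section 2.8)
rbd rm image1
rbd rm image1 -p pool1
```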
Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To move an image to the trash execute the following: Syntax Example Once an image is in the trash, a unique image ID is assigned. Note You need this image ID to specify the image later if you need to use any of the trash options. Execute the rbd trash list POOL_NAME for a list of IDs of the images in the trash. This command also returns the image's pre-deletion name. In addition, there is an optional --image-id argument that can be used with rbd info and rbd snap commands. Use --image-id with the rbd info command to see the properties of an image in the trash, and with rbd snap to remove an image's snapshots from the trash. To remove an image from the trash execute the following: Syntax Example Important Once an image is removed from the trash, it cannot be restored. Execute the rbd trash restore command to restore the image: Syntax Example To remove all expired images from trash: Syntax Example 2.10. Defining an automatic trash purge schedule You can schedule periodic trash purge operations on a pool. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To add a trash purge schedule, execute: Syntax Example To list the trash purge schedule, execute: Syntax Example To know the status of trash purge schedule, execute: Example To remove the trash purge schedule, execute: Syntax Example 2.11. Enabling and disabling image features The block device images, such as fast-diff , exclusive-lock , object-map , or deep-flatten , are enabled by default. You can enable or disable these image features on already existing images. Note The deep flatten feature can be only disabled on already existing images but not enabled. To use deep flatten , enable it when creating images. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. 
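Similarly, the trash commands described in sections 2.9 and 2.10, whose Syntax and Example blocks are missing, can be sketched as follows; the pool name, image ID, and the 10m interval are illustrative:

```shell
# Move an image to the trash and list trashed images with their IDs (section 2.9)
rbd trash mv pool1/image1
rbd trash ls pool1

# Permanently remove a trashed image by its ID (cannot be restored afterwards)
rbd trash rm pool1/d35ed01706a0

# Restore a trashed image, optionally under a new name
rbd trash restore pool1/d35ed01706a0 --image image2

# Remove all expired images from the trash
rbd trash purge pool1

# Define, inspect, and remove an automatic purge schedule (section 2.10)
rbd trash purge schedule add --pool pool1 10m
rbd trash purge schedule ls --pool pool1
rbd trash purge schedule status
rbd trash purge schedule remove --pool pool1 10m
```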
Procedure Retrieve information from a particular image in a pool: Syntax Example Enable a feature: Syntax To enable the exclusive-lock feature on the image1 image in the pool1 pool: Example Important If you enable the fast-diff and object-map features, then rebuild the object map: Syntax Disable a feature: Syntax To disable the fast-diff feature on the image1 image in the pool1 pool: Example 2.12. Working with image metadata Ceph supports adding custom image metadata as key-value pairs. The pairs do not have any strict format. Also, by using metadata, you can set the RADOS Block Device (RBD) configuration parameters for particular images. Use the rbd image-meta commands to work with metadata. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the client node. Procedure To set a new metadata key-value pair: Syntax Example This example sets the last_update key to the 2021-06-06 value on the image1 image in the pool1 pool. To view a value of a key: Syntax Example This example views the value of the last_update key. To show all metadata on an image: Syntax Example This example lists the metadata set for the image1 image in the pool1 pool. To remove a metadata key-value pair: Syntax Example This example removes the last_update key-value pair from the image1 image in the pool1 pool. To override the RBD image configuration settings set in the Ceph configuration file for a particular image: Syntax Example This example disables the RBD cache for the image1 image in the pool1 pool. Additional Resources See the Block device general options section in the Red Hat Ceph Storage Block Device Guide for a list of possible configuration options. 2.13. Moving images between pools You can move RADOS Block Device (RBD) images between different pools within the same cluster. During this process, the source image is copied to the target image with all snapshot history and optionally with link to the source image's parent to help preserve sparseness. 
The source image is read-only, while the target image is writable. The target image is linked to the source image while the migration is in progress. You can safely run this process in the background while the new target image is in use. However, stop all clients using the target image before the preparation step to ensure that clients using the image are updated to point to the new target image. Important The krbd kernel module does not support live migration at this time. Prerequisites Stop all clients that use the source image. Root-level access to the client node. Procedure Prepare for migration by creating the new target image that cross-links the source and target images: Syntax Replace: SOURCE_IMAGE with the name of the image to be moved. Use the POOL / IMAGE_NAME format. TARGET_IMAGE with the name of the new image. Use the POOL / IMAGE_NAME format. Example Verify the state of the new target image, which should be prepared : Syntax Example Optionally, restart the clients using the new target image name. Copy the source image to the target image: Syntax Example Ensure that the migration is completed: Example Commit the migration by removing the cross-link between the source and target images, which also removes the source image: Syntax Example If the source image is a parent of one or more clones, use the --force option after ensuring that the clone images are not in use: Example If you did not restart the clients after the preparation step, restart them using the new target image name. 2.14. Migrating pools You can migrate or copy RADOS Block Device (RBD) images. During this process, the source image is exported and then imported. Important Use this migration process if the workload contains only RBD images. No rados cppool images can exist in the workload. If rados cppool images exist in the workload, see Migrating a pool in the Storage Strategies Guide .
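The prepare/execute/commit flow of the live migration procedure in the previous section can be summarized as a small state machine. This is an illustrative sketch of the ordering constraints only, not the actual rbd implementation:

```python
class LiveMigration:
    """Toy model of the rbd migration states: prepared -> executed -> committed."""

    def __init__(self, source, target):
        self.source, self.target = source, target
        self.state = None  # no migration in progress yet

    def prepare(self):
        # Creates the target image cross-linked to the source.
        assert self.state is None, "migration already prepared"
        self.state = "prepared"

    def execute(self):
        # Copies the source image to the target; requires a prepared migration.
        assert self.state == "prepared", "must prepare before execute"
        self.state = "executed"

    def commit(self):
        # Removes the cross-link and the source image.
        assert self.state == "executed", "must execute before commit"
        self.state = "committed"

m = LiveMigration("pool1/image1", "pool2/image2")
m.prepare(); m.execute(); m.commit()
print(m.state)  # committed
```

The assertions mirror the procedure order: clients may use the target as soon as it is prepared, but the source is only deleted at commit.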
Important While running the export and import commands, be sure that there is no active I/O in the related RBD images. It is recommended to take production down during the pool migration. Prerequisites Stop all active I/O in the RBD images which are being exported and imported. Root-level access to the client node. Procedure Migrate the volume. Syntax Example If using the local drive for import or export is necessary, the commands can be divided, first exporting to a local drive and then importing the files to a new pool. Syntax Example 2.15. The rbdmap service The systemd unit file, rbdmap.service , is included with the ceph-common package. The rbdmap.service unit executes the rbdmap shell script. This script automates the mapping and unmapping of RADOS Block Devices (RBD) for one or more RBD images. The script can be run manually at any time, but the typical use case is to automatically mount RBD images at boot time, and unmount at shutdown. The script takes a single argument, which can be either map , for mounting, or unmap , for unmounting RBD images. The script parses a configuration file, which defaults to /etc/ceph/rbdmap but can be overridden using an environment variable called RBDMAPFILE . Each line of the configuration file corresponds to an RBD image. The format of the configuration file is as follows: IMAGE_SPEC RBD_OPTS Where IMAGE_SPEC specifies the POOL_NAME / IMAGE_NAME , or just the IMAGE_NAME , in which case the POOL_NAME defaults to rbd . RBD_OPTS is an optional list of options to be passed to the underlying rbd map command. These parameters and their values should be specified as a comma-separated string: OPT1 = VAL1 , OPT2 = VAL2 ,... , OPT_N = VAL_N This causes the script to issue an rbd map command like the following: Syntax Note For options and values which contain commas or equality signs, a simple apostrophe can be used to prevent them from being split.
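The translation from a configuration file line to the resulting rbd map invocation described above can be sketched as follows. This is illustrative parsing only (quoted values containing commas or equals signs are not handled); the real rbdmap is a shell script shipped with ceph-common:

```python
def rbdmap_line_to_command(line):
    """Translate an 'IMAGE_SPEC RBD_OPTS' rbdmap line into an rbd map command."""
    fields = line.split(None, 1)
    image_spec = fields[0]
    if "/" not in image_spec:          # a bare IMAGE_NAME defaults to the rbd pool
        image_spec = "rbd/" + image_spec
    cmd = ["rbd", "map", image_spec]
    if len(fields) == 2:
        for opt in fields[1].split(","):   # OPT1=VAL1,OPT2=VAL2,...
            key, _, value = opt.partition("=")
            cmd += ["--" + key, value] if value else ["--" + key]
    return " ".join(cmd)

print(rbdmap_line_to_command("foo/bar1 id=admin,keyring=/etc/ceph/keyring"))
# rbd map foo/bar1 --id admin --keyring /etc/ceph/keyring
```

Note how each OPT=VAL pair becomes a --OPT VAL argument to the underlying rbd map command, matching the Syntax shown above.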
When successful, the rbd map operation maps the image to a /dev/rbdX device, at which point a udev rule is triggered to create a friendly device name symlink, for example, /dev/rbd/ POOL_NAME / IMAGE_NAME , pointing to the real mapped device. For mounting or unmounting to succeed, the friendly device name must have a corresponding entry in the /etc/fstab file. When writing /etc/fstab entries for RBD images, it is a good idea to specify the noauto or nofail mount option. This prevents the init system from trying to mount the device too early, before the device exists. Additional Resources See the rbd manpage for a full list of possible options. 2.16. Configuring the rbdmap service Configure the rbdmap service to automatically map and mount RADOS Block Devices (RBD) at boot time, and unmap and unmount them at shutdown. Prerequisites Root-level access to the node doing the mounting. Installation of the ceph-common package. Procedure Open the /etc/ceph/rbdmap configuration file for editing. Add the RBD image or images to the configuration file: Example Save changes to the configuration file. Enable the RBD mapping service: Example Additional Resources See the The rbdmap service section of the Red Hat Ceph Storage Block Device Guide for more details on the RBD system service. 2.17. Persistent Write Log Cache Important Persistent Write Log (PWL) with SSD as a cache device is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. See the support scope for Red Hat Technology Preview features for more details. In a Red Hat Ceph Storage cluster, Persistent Write Log (PWL) cache provides a persistent, fault-tolerant write-back cache for librbd-based RBD clients.
PWL cache uses a log-ordered write-back design which maintains checkpoints internally so that writes that get flushed back to the cluster are always crash consistent. If the client cache is lost entirely, the disk image is still consistent but the data appears stale. You can use PWL cache with persistent memory (PMEM) or solid-state disks (SSD) as cache devices. For PMEM, the cache mode is replica write log (RWL), and for SSD, the cache mode is SSD. Currently, PWL cache supports RWL and SSD modes and is disabled by default. Primary benefits of PWL cache are: PWL cache can provide high performance when the cache is not full. The larger the cache, the longer the duration of high performance. PWL cache provides persistence and is not much slower than RBD cache. RBD cache is faster but volatile and cannot guarantee data order and persistence. In a steady state, where the cache is full, performance is affected by the number of I/Os in flight. For example, PWL can provide higher performance at low io_depth, but at high io_depth, such as when the number of I/Os is greater than 32, performance is often worse than without the cache. Use cases for PMEM caching are: Unlike RBD cache, PWL cache is non-volatile and is used in scenarios where you do not want data loss but still need performance. RWL mode provides low latency. It provides stable low latency for burst I/O and suits scenarios that require consistently low latency. RWL mode also delivers a continuous and stable performance improvement in scenarios with low I/O depth or little in-flight I/O. The use case for SSD caching is: The advantages of SSD mode are similar to those of RWL mode. SSD hardware is relatively cheap and popular, but its performance is slightly lower than PMEM. 2.18. Persistent write log cache limitations When using Persistent Write Log (PWL) cache, there are several limitations that should be considered.
The underlying implementation of persistent memory (PMEM) and solid-state disks (SSD) is different, with PMEM having higher performance. At present, PMEM can provide "persist on write" while SSD provides "persist on flush or checkpoint". In future releases, these two modes will be configurable. When users switch frequently, opening and closing images repeatedly, Ceph displays poor performance. If PWL cache is enabled, the performance is worse. It is not recommended to set num_jobs in a Flexible I/O (fio) test, but instead to set up multiple jobs that write to different images. 2.19. Enabling persistent write log cache You can enable persistent write log cache (PWL) on a Red Hat Ceph Storage cluster by setting the Ceph RADOS block device (RBD) rbd_persistent_cache_mode and rbd_plugins options. Important The exclusive-lock feature must be enabled to enable persistent write log cache. The cache can be loaded only after the exclusive-lock is acquired. Exclusive-locks are enabled on newly created images by default unless overridden by the rbd_default_features configuration option or the --image-feature flag for the rbd create command. See the Enabling and disabling image features section for more details on the exclusive-lock feature. Set the persistent write log cache options at the host level by using the ceph config set command. Set the persistent write log cache options at the pool or image level by using the rbd config pool set or the rbd config image set commands. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. The exclusive-lock feature is enabled. Client-side disks are persistent memory (PMEM) or solid-state disks (SSD). RBD cache is disabled. Procedure Enable PWL cache: At the host level, use the ceph config set command: Syntax Replace CACHE_MODE with rwl or ssd . Example At the pool level, use the rbd config pool set command: Syntax Replace CACHE_MODE with rwl or ssd .
Example At the image level, use the rbd config image set command: Syntax Replace CACHE_MODE with rwl or ssd . Example Optional: Set the additional RBD options at the host, the pool, or the image level: Syntax 1 rbd_persistent_cache_path - A folder to cache data. The folder must have direct access (DAX) enabled when using the rwl mode to avoid performance degradation. 2 rbd_persistent_cache_size - The cache size per image, with a minimum cache size of 1 GB. The larger the cache size, the better the performance. Example Additional Resources See the Direct Access for files article on kernel.org for more details on using DAX. 2.20. Checking persistent write log cache status You can check the status of the Persistent Write Log (PWL) cache. The cache is used when an exclusive lock is acquired, and when the exclusive-lock is released, the persistent write log cache is closed. The cache status shows information about the cache size, location, type, and other cache-related information. The cache status is updated when the cache is opened and closed. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. A running process with PWL cache enabled. Procedure View the PWL cache status: Syntax Example 2.21. Flushing persistent write log cache You can flush the cache file with the rbd command, specifying persistent-cache flush , the pool name, and the image name before discarding the persistent write log (PWL) cache. The flush command explicitly writes cache files back to the OSDs. If the cache is interrupted or the application dies unexpectedly, you can manually flush all the entries in the cache to the OSDs and then invalidate the cache. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. PWL cache is enabled.
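The hits_full and misses counters in the cache status output above determine the reported hit percentage. As a worked check of the figures shown (a simple recomputation, not an rbd feature):

```python
def hit_ratio(hits_full, misses):
    """Percentage of full cache hits out of all cache lookups."""
    total = hits_full + misses
    return 100 * hits_full / total if total else 0.0

# Figures from the example status output: hits_full 1450, misses 924.
print(round(hit_ratio(1450, 924)))  # 61, matching 'hits_full: 1450 / 61%'
```

A rising miss count relative to hits on a full cache is the steady-state situation discussed earlier, where PWL performance can fall below the uncached case.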
Procedure Flush the PWL cache: Syntax Example Additional Resources See the Discarding persistent write log cache section in the Red Hat Ceph Storage Block Device Guide for more details. 2.22. Discarding persistent write log cache You might need to manually discard the Persistent Write Log (PWL) cache, for example, if the data in the cache has expired. You can discard a cache file for an image by using the rbd persistent-cache invalidate command. The command removes the cache metadata for the specified image, disables the cache feature, and deletes the local cache file, if it exists. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the monitor node. PWL cache is enabled. Procedure Discard the PWL cache: Syntax Example 2.23. Monitoring performance of Ceph Block Devices using the command-line interface Starting with Red Hat Ceph Storage 4.1, a performance metrics gathering framework is integrated within the Ceph OSD and Manager components. This framework provides a built-in method to generate and process performance metrics upon which other Ceph Block Device performance monitoring solutions are built. A new Ceph Manager module, rbd_support , aggregates the performance metrics when enabled. The rbd command has two new actions: iotop and iostat . Note The initial use of these actions can take around 30 seconds to populate the data fields. Prerequisites User-level access to a Ceph Monitor node. Procedure Ensure the rbd_support Ceph Manager module is enabled: Example To display an "iotop"-style view of images: Example Note The write-ops, read-ops, write-bytes, read-bytes, write-latency, and read-latency columns can be sorted dynamically by using the right and left arrow keys. To display an "iostat"-style view of images: Example Note The output from this command can be in JSON or XML format, and can then be sorted using other command-line tools. 2.24. Additional Resources See Chapter 8, The rbd kernel module for details on mapping and unmapping block devices.
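Because the iostat-style output can be emitted as JSON, it can be sorted with other command-line tools or a short script. The sketch below post-processes a hypothetical JSON document; the field names here are assumptions for illustration, not the exact rbd output schema:

```python
import json

# Hypothetical JSON in the spirit of an 'rbd perf image iostat' JSON dump.
raw = '''[
  {"image": "pool1/image1", "write_ops": 120, "read_ops": 30},
  {"image": "pool1/image2", "write_ops": 45,  "read_ops": 200}
]'''

stats = json.loads(raw)
# Sort images by write operations, busiest first.
for entry in sorted(stats, key=lambda e: e["write_ops"], reverse=True):
    print(f'{entry["image"]}: {entry["write_ops"]} write ops')
```

Equivalent filtering is commonly done with jq on the command line; the point is only that the structured output lends itself to scripted sorting, unlike the interactive iotop-style view.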
[ "rbd help COMMAND SUBCOMMAND", "rbd help snap list", "ceph osd pool create POOL_NAME PG_NUM ceph osd pool application enable POOL_NAME rbd rbd pool init -p POOL_NAME", "ceph osd pool create pool1 ceph osd pool application enable pool1 rbd rbd pool init -p pool1", "rbd create IMAGE_NAME --size MEGABYTES --pool POOL_NAME", "rbd create image1 --size 1024 --pool pool1", "rbd ls", "rbd ls POOL_NAME", "rbd ls pool1", "rbd --image IMAGE_NAME info", "rbd --image image1 info", "rbd --image IMAGE_NAME -p POOL_NAME info", "rbd --image image1 -p pool1 info", "rbd resize --image IMAGE_NAME --size SIZE", "rbd resize --image image1 --size 1024", "rbd resize --image IMAGE_NAME --size SIZE --allow-shrink", "rbd resize --image image1 --size 1024 --allow-shrink", "rbd resize --image POOL_NAME / IMAGE_NAME --size SIZE", "rbd resize --image pool1/image1 --size 1024", "rbd resize --image POOL_NAME / IMAGE_NAME --size SIZE --allow-shrink", "rbd resize --image pool1/image1 --size 1024 --allow-shrink", "rbd rm IMAGE_NAME", "rbd rm image1", "rbd rm IMAGE_NAME -p POOL_NAME", "rbd rm image1 -p pool1", "rbd trash mv [ POOL_NAME /] IMAGE_NAME", "rbd trash mv pool1/image1", "rbd trash rm [ POOL_NAME /] IMAGE_ID", "rbd trash rm pool1/d35ed01706a0", "rbd trash restore [ POOL_NAME /] IMAGE_ID", "rbd trash restore pool1/d35ed01706a0", "rbd trash purge POOL_NAME", "rbd trash purge pool1 Removing images: 100% complete...done.", "rbd trash purge schedule add --pool POOL_NAME INTERVAL", "rbd trash purge schedule add --pool pool1 10m", "rbd trash purge schedule ls --pool POOL_NAME", "rbd trash purge schedule ls --pool pool1 every 10m", "rbd trash purge schedule status POOL NAMESPACE SCHEDULE TIME pool1 2021-08-02 11:50:00", "rbd trash purge schedule remove --pool POOL_NAME INTERVAL", "rbd trash purge schedule remove --pool pool1 10m", "rbd --image POOL_NAME / IMAGE_NAME info", "rbd --image pool1/image1 info", "rbd feature enable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature enable pool1/image1 
exclusive-lock", "rbd object-map rebuild POOL_NAME / IMAGE_NAME", "rbd feature disable POOL_NAME / IMAGE_NAME FEATURE_NAME", "rbd feature disable pool1/image1 fast-diff", "rbd image-meta set POOL_NAME / IMAGE_NAME KEY VALUE", "rbd image-meta set pool1/image1 last_update 2021-06-06", "rbd image-meta get POOL_NAME / IMAGE_NAME KEY", "rbd image-meta get pool1/image1 last_update", "rbd image-meta list POOL_NAME / IMAGE_NAME", "rbd image-meta list pool1/image1", "rbd image-meta remove POOL_NAME / IMAGE_NAME KEY", "rbd image-meta remove pool1/image1 last_update", "rbd config image set POOL_NAME / IMAGE_NAME PARAMETER VALUE", "rbd config image set pool1/image1 rbd_cache false", "rbd migration prepare SOURCE_IMAGE TARGET_IMAGE", "rbd migration prepare pool1/image1 pool2/image2", "rbd status TARGET_IMAGE", "rbd status pool2/image2 Watchers: none Migration: source: pool1/image1 (5e2cba2f62e) destination: pool2/image2 (5e2ed95ed806) state: prepared", "rbd migration execute TARGET_IMAGE", "rbd migration execute pool2/image2", "rbd status pool2/image2 Watchers: watcher=1.2.3.4:0/3695551461 client.123 cookie=123 Migration: source: pool1/image1 (5e2cba2f62e) destination: pool2/image2 (5e2ed95ed806) state: executed", "rbd migration commit TARGET_IMAGE", "rbd migration commit pool2/image2", "rbd migration commit pool2/image2 --force", "rbd export volumes/ VOLUME_NAME - | rbd import --image-format 2 - volumes_new/ VOLUME_NAME", "rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 - | rbd import --image-format 2 - volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16", "rbd export volume/ VOLUME_NAME FILE_PATH rbd import --image-format 2 FILE_PATH volumes_new/ VOLUME_NAME", "rbd export volumes/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16 <path of export file> rbd import --image-format 2 <path> volumes_new/volume-3c4c63e3-3208-436f-9585-fee4e2a3de16", "rbd map POOLNAME / IMAGE_NAME -- OPT1 VAL1 -- OPT2 VAL2", "foo/bar1 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring 
foo/bar2 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring,options='lock_on_read,queue_depth=1024'", "systemctl enable rbdmap.service", "ceph config set client rbd_persistent_cache_mode CACHE_MODE ceph config set client rbd_plugins pwl_cache", "ceph config set client rbd_persistent_cache_mode ssd ceph config set client rbd_plugins pwl_cache", "rbd config pool set POOL_NAME rbd_persistent_cache_mode CACHE_MODE rbd config pool set POOL_NAME rbd_plugins pwl_cache", "rbd config pool set pool1 rbd_persistent_cache_mode ssd rbd config pool set pool1 rbd_plugins pwl_cache", "rbd config image set POOL_NAME / IMAGE_NAME rbd_persistent_cache_mode CACHE_MODE rbd config image set POOL_NAME / IMAGE_NAME rbd_plugins pwl_cache", "rbd config image set pool1/image1 rbd_persistent_cache_mode ssd rbd config image set pool1/image1 rbd_plugins pwl_cache", "rbd_persistent_cache_mode CACHE_MODE rbd_plugins pwl_cache rbd_persistent_cache_path / PATH_TO_DAX_ENABLED_FOLDER / WRITE_BACK_CACHE_FOLDER 1 rbd_persistent_cache_size PERSISTENT_CACHE_SIZE 2", "rbd_cache false rbd_persistent_cache_mode rwl rbd_plugins pwl_cache rbd_persistent_cache_path /mnt/pmem/cache/ rbd_persistent_cache_size 1073741824", "rbd status POOL_NAME / IMAGE_NAME", "rbd status pool1/image1 Watchers: watcher=10.10.0.102:0/1061883624 client.25496 cookie=140338056493088 Persistent cache state: host: host02 path: /mnt/nvme0/rbd-pwl.rbd.101e5824ad9a.pool size: 1 GiB mode: ssd stats_timestamp: Mon Apr 18 13:26:32 2022 present: true empty: false clean: false allocated: 509 MiB cached: 501 MiB dirty: 338 MiB free: 515 MiB hits_full: 1450 / 61% hits_partial: 0 / 0% misses: 924 hit_bytes: 192 MiB / 66% miss_bytes: 97 MiB", "rbd persistent-cache flush POOL_NAME / IMAGE_NAME", "rbd persistent-cache flush pool1/image1", "rbd persistent-cache invalidate POOL_NAME / IMAGE_NAME", "rbd persistent-cache invalidate pool1/image1", "ceph mgr module ls { \"always_on_modules\": [ \"balancer\", \"crash\",
\"devicehealth\", \"orchestrator\", \"pg_autoscaler\", \"progress\", \"rbd_support\", <-- \"status\", \"telemetry\", \"volumes\" }", "[user@mon ~]USD rbd perf image iotop", "[user@mon ~]USD rbd perf image iostat" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/block_device_guide/ceph-block-devices
Chapter 11. PMML model examples PMML defines an XML schema that enables PMML models to be used between different PMML-compliant platforms. The PMML specification enables multiple software platforms to work with the same file for authoring, testing, and production execution, assuming producer and consumer conformance are met. The following are examples of PMML Regression, Scorecard, Tree, Mining, and Clustering models. These examples illustrate the supported models that you can integrate with your decision services in Red Hat Decision Manager. For more PMML examples, see the DMG PMML Sample Files page. Example PMML Regression model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBoss"/> <DataDictionary numberOfFields="5"> <DataField dataType="double" name="fld1" optype="continuous"/> <DataField dataType="double" name="fld2" optype="continuous"/> <DataField dataType="string" name="fld3" optype="categorical"> <Value value="x"/> <Value value="y"/> </DataField> <DataField dataType="double" name="fld4" optype="continuous"/> <DataField dataType="double" name="fld5" optype="continuous"/> </DataDictionary> <RegressionModel algorithmName="linearRegression" functionName="regression" modelName="LinReg" normalizationMethod="logit" targetFieldName="fld4"> <MiningSchema> <MiningField name="fld1"/> <MiningField name="fld2"/> <MiningField name="fld3"/> <MiningField name="fld4" usageType="predicted"/> <MiningField name="fld5" usageType="target"/> </MiningSchema> <RegressionTable intercept="0.5"> <NumericPredictor coefficient="5" exponent="2" name="fld1"/> <NumericPredictor coefficient="2" exponent="1" name="fld2"/> <CategoricalPredictor coefficient="-3" name="fld3" value="x"/> <CategoricalPredictor coefficient="3" name="fld3" value="y"/> <PredictorTerm coefficient="0.4"> <FieldRef field="fld1"/> <FieldRef 
field="fld2"/> </PredictorTerm> </RegressionTable> </RegressionModel> </PMML> Example PMML Scorecard model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBoss"/> <DataDictionary numberOfFields="4"> <DataField name="param1" optype="continuous" dataType="double"/> <DataField name="param2" optype="continuous" dataType="double"/> <DataField name="overallScore" optype="continuous" dataType="double" /> <DataField name="finalscore" optype="continuous" dataType="double" /> </DataDictionary> <Scorecard modelName="ScorecardCompoundPredicate" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="15" initialScore="0.8" reasonCodeAlgorithm="pointsAbove"> <MiningSchema> <MiningField name="param1" usageType="active" invalidValueTreatment="asMissing"> </MiningField> <MiningField name="param2" usageType="active" invalidValueTreatment="asMissing"> </MiningField> <MiningField name="overallScore" usageType="target"/> <MiningField name="finalscore" usageType="predicted"/> </MiningSchema> <Characteristics> <Characteristic name="ch1" baselineScore="50" reasonCode="reasonCh1"> <Attribute partialScore="20"> <SimplePredicate field="param1" operator="lessThan" value="20"/> </Attribute> <Attribute partialScore="100"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param1" operator="greaterOrEqual" value="20"/> <SimplePredicate field="param2" operator="lessOrEqual" value="25"/> </CompoundPredicate> </Attribute> <Attribute partialScore="200"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param1" operator="greaterOrEqual" value="20"/> <SimplePredicate field="param2" operator="greaterThan" value="25"/> </CompoundPredicate> </Attribute> </Characteristic> <Characteristic name="ch2" reasonCode="reasonCh2"> <Attribute partialScore="10"> <CompoundPredicate 
booleanOperator="or"> <SimplePredicate field="param2" operator="lessOrEqual" value="-5"/> <SimplePredicate field="param2" operator="greaterOrEqual" value="50"/> </CompoundPredicate> </Attribute> <Attribute partialScore="20"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="param2" operator="greaterThan" value="-5"/> <SimplePredicate field="param2" operator="lessThan" value="50"/> </CompoundPredicate> </Attribute> </Characteristic> </Characteristics> </Scorecard> </PMML> Example PMML Tree model <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header copyright="JBOSS"/> <DataDictionary numberOfFields="5"> <DataField dataType="double" name="fld1" optype="continuous"/> <DataField dataType="double" name="fld2" optype="continuous"/> <DataField dataType="string" name="fld3" optype="categorical"> <Value value="true"/> <Value value="false"/> </DataField> <DataField dataType="string" name="fld4" optype="categorical"> <Value value="optA"/> <Value value="optB"/> <Value value="optC"/> </DataField> <DataField dataType="string" name="fld5" optype="categorical"> <Value value="tgtX"/> <Value value="tgtY"/> <Value value="tgtZ"/> </DataField> </DataDictionary> <TreeModel functionName="classification" modelName="TreeTest"> <MiningSchema> <MiningField name="fld1"/> <MiningField name="fld2"/> <MiningField name="fld3"/> <MiningField name="fld4"/> <MiningField name="fld5" usageType="predicted"/> </MiningSchema> <Node score="tgtX"> <True/> <Node score="tgtX"> <SimplePredicate field="fld4" operator="equal" value="optA"/> <Node score="tgtX"> <CompoundPredicate booleanOperator="surrogate"> <SimplePredicate field="fld1" operator="lessThan" value="30.0"/> <SimplePredicate field="fld2" operator="greaterThan" value="20.0"/> </CompoundPredicate> <Node score="tgtX"> <SimplePredicate field="fld2" operator="lessThan" value="40.0"/> 
</Node> <Node score="tgtZ"> <SimplePredicate field="fld2" operator="greaterOrEqual" value="10.0"/> </Node> </Node> <Node score="tgtZ"> <CompoundPredicate booleanOperator="or"> <SimplePredicate field="fld1" operator="greaterOrEqual" value="60.0"/> <SimplePredicate field="fld1" operator="lessOrEqual" value="70.0"/> </CompoundPredicate> <Node score="tgtZ"> <SimpleSetPredicate booleanOperator="isNotIn" field="fld4"> <Array type="string">optA optB</Array> </SimpleSetPredicate> </Node> </Node> </Node> <Node score="tgtY"> <CompoundPredicate booleanOperator="or"> <SimplePredicate field="fld4" operator="equal" value="optA"/> <SimplePredicate field="fld4" operator="equal" value="optC"/> </CompoundPredicate> <Node score="tgtY"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="fld1" operator="greaterThan" value="10.0"/> <SimplePredicate field="fld1" operator="lessThan" value="50.0"/> <SimplePredicate field="fld4" operator="equal" value="optA"/> <SimplePredicate field="fld2" operator="lessThan" value="100.0"/> <SimplePredicate field="fld3" operator="equal" value="false"/> </CompoundPredicate> </Node> <Node score="tgtZ"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="fld4" operator="equal" value="optC"/> <SimplePredicate field="fld2" operator="lessThan" value="30.0"/> </CompoundPredicate> </Node> </Node> </Node> </TreeModel> </PMML> Example PMML Mining model (modelChain) <PMML version="4.2" xsi:schemaLocation="http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.dmg.org/PMML-4_2"> <Header> <Application name="Drools-PMML" version="7.0.0-SNAPSHOT" /> </Header> <DataDictionary numberOfFields="7"> <DataField name="age" optype="continuous" dataType="double" /> <DataField name="occupation" optype="categorical" dataType="string"> <Value value="SKYDIVER" /> <Value value="ASTRONAUT" /> <Value value="PROGRAMMER" /> <Value value="TEACHER" /> <Value value="INSTRUCTOR" 
/> </DataField> <DataField name="residenceState" optype="categorical" dataType="string"> <Value value="AP" /> <Value value="KN" /> <Value value="TN" /> </DataField> <DataField name="validLicense" optype="categorical" dataType="boolean" /> <DataField name="overallScore" optype="continuous" dataType="double" /> <DataField name="grade" optype="categorical" dataType="string"> <Value value="A" /> <Value value="B" /> <Value value="C" /> <Value value="D" /> <Value value="F" /> </DataField> <DataField name="qualificationLevel" optype="categorical" dataType="string"> <Value value="Unqualified" /> <Value value="Barely" /> <Value value="Well" /> <Value value="Over" /> </DataField> </DataDictionary> <MiningModel modelName="SampleModelChainMine" functionName="classification"> <MiningSchema> <MiningField name="age" /> <MiningField name="occupation" /> <MiningField name="residenceState" /> <MiningField name="validLicense" /> <MiningField name="overallScore" /> <MiningField name="qualificationLevel" usageType="target"/> </MiningSchema> <Segmentation multipleModelMethod="modelChain"> <Segment id="1"> <True /> <Scorecard modelName="Sample Score 1" useReasonCodes="true" isScorable="true" functionName="regression" baselineScore="0.0" initialScore="0.345"> <MiningSchema> <MiningField name="age" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="occupation" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="residenceState" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="validLicense" usageType="active" invalidValueTreatment="asMissing" /> <MiningField name="overallScore" usageType="predicted" /> </MiningSchema> <Output> <OutputField name="calculatedScore" displayName="Final Score" dataType="double" feature="predictedValue" targetField="overallScore" /> </Output> <Characteristics> <Characteristic name="AgeScore" baselineScore="0.0" reasonCode="ABZ"> <Extension name="cellRef" value="USDBUSD8" /> <Attribute 
partialScore="10.0"> <Extension name="cellRef" value="USDCUSD10" /> <SimplePredicate field="age" operator="lessOrEqual" value="5" /> </Attribute> <Attribute partialScore="30.0" reasonCode="CX1"> <Extension name="cellRef" value="USDCUSD11" /> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="5" /> <SimplePredicate field="age" operator="lessThan" value="12" /> </CompoundPredicate> </Attribute> <Attribute partialScore="40.0" reasonCode="CX2"> <Extension name="cellRef" value="USDCUSD12" /> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="13" /> <SimplePredicate field="age" operator="lessThan" value="44" /> </CompoundPredicate> </Attribute> <Attribute partialScore="25.0"> <Extension name="cellRef" value="USDCUSD13" /> <SimplePredicate field="age" operator="greaterOrEqual" value="45" /> </Attribute> </Characteristic> <Characteristic name="OccupationScore" baselineScore="0.0"> <Extension name="cellRef" value="USDBUSD16" /> <Attribute partialScore="-10.0" reasonCode="CX2"> <Extension name="description" value="skydiving is a risky occupation" /> <Extension name="cellRef" value="USDCUSD18" /> <SimpleSetPredicate field="occupation" booleanOperator="isIn"> <Array n="2" type="string">SKYDIVER ASTRONAUT</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore="10.0"> <Extension name="cellRef" value="USDCUSD19" /> <SimpleSetPredicate field="occupation" booleanOperator="isIn"> <Array n="2" type="string">TEACHER INSTRUCTOR</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore="5.0"> <Extension name="cellRef" value="USDCUSD20" /> <SimplePredicate field="occupation" operator="equal" value="PROGRAMMER" /> </Attribute> </Characteristic> <Characteristic name="ResidenceStateScore" baselineScore="0.0" reasonCode="RES"> <Extension name="cellRef" value="USDBUSD22" /> <Attribute partialScore="-10.0"> <Extension name="cellRef" value="USDCUSD24" /> 
<SimplePredicate field="residenceState" operator="equal" value="AP" /> </Attribute> <Attribute partialScore="10.0"> <Extension name="cellRef" value="USDCUSD25" /> <SimplePredicate field="residenceState" operator="equal" value="KN" /> </Attribute> <Attribute partialScore="5.0"> <Extension name="cellRef" value="USDCUSD26" /> <SimplePredicate field="residenceState" operator="equal" value="TN" /> </Attribute> </Characteristic> <Characteristic name="ValidLicenseScore" baselineScore="0.0"> <Extension name="cellRef" value="USDBUSD28" /> <Attribute partialScore="1.0" reasonCode="LX00"> <Extension name="cellRef" value="USDCUSD30" /> <SimplePredicate field="validLicense" operator="equal" value="true" /> </Attribute> <Attribute partialScore="-1.0" reasonCode="LX00"> <Extension name="cellRef" value="USDCUSD31" /> <SimplePredicate field="validLicense" operator="equal" value="false" /> </Attribute> </Characteristic> </Characteristics> </Scorecard> </Segment> <Segment id="2"> <True /> <TreeModel modelName="SampleTree" functionName="classification" missingValueStrategy="lastPrediction" noTrueChildStrategy="returnLastPrediction"> <MiningSchema> <MiningField name="age" usageType="active" /> <MiningField name="validLicense" usageType="active" /> <MiningField name="calculatedScore" usageType="active" /> <MiningField name="qualificationLevel" usageType="predicted" /> </MiningSchema> <Output> <OutputField name="qualification" displayName="Qualification Level" dataType="string" feature="predictedValue" targetField="qualificationLevel" /> </Output> <Node score="Well" id="1"> <True/> <Node score="Barely" id="2"> <CompoundPredicate booleanOperator="and"> <SimplePredicate field="age" operator="greaterOrEqual" value="16" /> <SimplePredicate field="validLicense" operator="equal" value="true" /> </CompoundPredicate> <Node score="Barely" id="3"> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="50.0" /> </Node> <Node score="Well" id="4"> <CompoundPredicate 
booleanOperator="and"> <SimplePredicate field="calculatedScore" operator="greaterThan" value="50.0" /> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="60.0" /> </CompoundPredicate> </Node> <Node score="Over" id="5"> <SimplePredicate field="calculatedScore" operator="greaterThan" value="60.0" /> </Node> </Node> <Node score="Unqualified" id="6"> <CompoundPredicate booleanOperator="surrogate"> <SimplePredicate field="age" operator="lessThan" value="16" /> <SimplePredicate field="calculatedScore" operator="lessOrEqual" value="40.0" /> <True /> </CompoundPredicate> </Node> </Node> </TreeModel> </Segment> </Segmentation> </MiningModel> </PMML> Example PMML Clustering model <?xml version="1.0" encoding="UTF-8"?> <PMML version="4.1" xmlns="http://www.dmg.org/PMML-4_1"> <Header> <Application name="KNIME" version="2.8.0"/> </Header> <DataDictionary numberOfFields="5"> <DataField name="sepal_length" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="4.3" rightMargin="7.9"/> </DataField> <DataField name="sepal_width" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="2.0" rightMargin="4.4"/> </DataField> <DataField name="petal_length" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="1.0" rightMargin="6.9"/> </DataField> <DataField name="petal_width" optype="continuous" dataType="double"> <Interval closure="closedClosed" leftMargin="0.1" rightMargin="2.5"/> </DataField> <DataField name="class" optype="categorical" dataType="string"/> </DataDictionary> <ClusteringModel modelName="SingleIrisKMeansClustering" functionName="clustering" modelClass="centerBased" numberOfClusters="4"> <MiningSchema> <MiningField name="sepal_length" invalidValueTreatment="asIs"/> <MiningField name="sepal_width" invalidValueTreatment="asIs"/> <MiningField name="petal_length" invalidValueTreatment="asIs"/> <MiningField name="petal_width" invalidValueTreatment="asIs"/> <MiningField 
name="class" usageType="predicted"/> </MiningSchema> <ComparisonMeasure kind="distance"> <squaredEuclidean/> </ComparisonMeasure> <ClusteringField field="sepal_length" compareFunction="absDiff"/> <ClusteringField field="sepal_width" compareFunction="absDiff"/> <ClusteringField field="petal_length" compareFunction="absDiff"/> <ClusteringField field="petal_width" compareFunction="absDiff"/> <Cluster name="virginica" size="32"> <Array n="4" type="real">6.9125000000000005 3.099999999999999 5.846874999999999 2.1312499999999996</Array> </Cluster> <Cluster name="versicolor" size="41"> <Array n="4" type="real">6.23658536585366 2.8585365853658535 4.807317073170731 1.6219512195121943</Array> </Cluster> <Cluster name="setosa" size="50"> <Array n="4" type="real">5.005999999999999 3.4180000000000006 1.464 0.2439999999999999</Array> </Cluster> <Cluster name="unknown" size="27"> <Array n="4" type="real">5.529629629629629 2.6222222222222222 3.940740740740741 1.2185185185185188</Array> </Cluster> </ClusteringModel> </PMML>
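As an informal illustration of how a center-based clustering model such as SingleIrisKMeansClustering above scores a record, the following Python sketch assigns a record to the cluster whose center minimizes the squared Euclidean distance, matching the `squaredEuclidean` ComparisonMeasure with `absDiff` compare functions. The field ordering and the sample input values are assumptions for illustration, not part of the PMML file:

```python
# Cluster centers transcribed from the <Cluster> arrays in the model above
# (assumed order: sepal_length, sepal_width, petal_length, petal_width).
centers = {
    "virginica":  [6.9125, 3.1, 5.846875, 2.13125],
    "versicolor": [6.2366, 2.8585, 4.8073, 1.6220],
    "setosa":     [5.006, 3.418, 1.464, 0.244],
    "unknown":    [5.5296, 2.6222, 3.9407, 1.2185],
}

def squared_euclidean(record, center):
    # compareFunction="absDiff" combined with the squaredEuclidean
    # ComparisonMeasure: sum of squared per-field absolute differences.
    return sum(abs(r - c) ** 2 for r, c in zip(record, center))

def assign_cluster(record):
    # The predicted "class" is the name of the nearest cluster center.
    return min(centers, key=lambda name: squared_euclidean(record, centers[name]))

# A typical Iris-setosa measurement (hypothetical input, not from the model)
print(assign_cluster([5.1, 3.5, 1.4, 0.2]))  # expected: setosa
```

A full PMML engine also handles missing values and field normalization, which this sketch omits.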
[ "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBoss\"/> <DataDictionary numberOfFields=\"5\"> <DataField dataType=\"double\" name=\"fld1\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld2\" optype=\"continuous\"/> <DataField dataType=\"string\" name=\"fld3\" optype=\"categorical\"> <Value value=\"x\"/> <Value value=\"y\"/> </DataField> <DataField dataType=\"double\" name=\"fld4\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld5\" optype=\"continuous\"/> </DataDictionary> <RegressionModel algorithmName=\"linearRegression\" functionName=\"regression\" modelName=\"LinReg\" normalizationMethod=\"logit\" targetFieldName=\"fld4\"> <MiningSchema> <MiningField name=\"fld1\"/> <MiningField name=\"fld2\"/> <MiningField name=\"fld3\"/> <MiningField name=\"fld4\" usageType=\"predicted\"/> <MiningField name=\"fld5\" usageType=\"target\"/> </MiningSchema> <RegressionTable intercept=\"0.5\"> <NumericPredictor coefficient=\"5\" exponent=\"2\" name=\"fld1\"/> <NumericPredictor coefficient=\"2\" exponent=\"1\" name=\"fld2\"/> <CategoricalPredictor coefficient=\"-3\" name=\"fld3\" value=\"x\"/> <CategoricalPredictor coefficient=\"3\" name=\"fld3\" value=\"y\"/> <PredictorTerm coefficient=\"0.4\"> <FieldRef field=\"fld1\"/> <FieldRef field=\"fld2\"/> </PredictorTerm> </RegressionTable> </RegressionModel> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBoss\"/> <DataDictionary numberOfFields=\"4\"> <DataField name=\"param1\" optype=\"continuous\" dataType=\"double\"/> <DataField name=\"param2\" optype=\"continuous\" dataType=\"double\"/> <DataField name=\"overallScore\" 
optype=\"continuous\" dataType=\"double\" /> <DataField name=\"finalscore\" optype=\"continuous\" dataType=\"double\" /> </DataDictionary> <Scorecard modelName=\"ScorecardCompoundPredicate\" useReasonCodes=\"true\" isScorable=\"true\" functionName=\"regression\" baselineScore=\"15\" initialScore=\"0.8\" reasonCodeAlgorithm=\"pointsAbove\"> <MiningSchema> <MiningField name=\"param1\" usageType=\"active\" invalidValueTreatment=\"asMissing\"> </MiningField> <MiningField name=\"param2\" usageType=\"active\" invalidValueTreatment=\"asMissing\"> </MiningField> <MiningField name=\"overallScore\" usageType=\"target\"/> <MiningField name=\"finalscore\" usageType=\"predicted\"/> </MiningSchema> <Characteristics> <Characteristic name=\"ch1\" baselineScore=\"50\" reasonCode=\"reasonCh1\"> <Attribute partialScore=\"20\"> <SimplePredicate field=\"param1\" operator=\"lessThan\" value=\"20\"/> </Attribute> <Attribute partialScore=\"100\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param1\" operator=\"greaterOrEqual\" value=\"20\"/> <SimplePredicate field=\"param2\" operator=\"lessOrEqual\" value=\"25\"/> </CompoundPredicate> </Attribute> <Attribute partialScore=\"200\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param1\" operator=\"greaterOrEqual\" value=\"20\"/> <SimplePredicate field=\"param2\" operator=\"greaterThan\" value=\"25\"/> </CompoundPredicate> </Attribute> </Characteristic> <Characteristic name=\"ch2\" reasonCode=\"reasonCh2\"> <Attribute partialScore=\"10\"> <CompoundPredicate booleanOperator=\"or\"> <SimplePredicate field=\"param2\" operator=\"lessOrEqual\" value=\"-5\"/> <SimplePredicate field=\"param2\" operator=\"greaterOrEqual\" value=\"50\"/> </CompoundPredicate> </Attribute> <Attribute partialScore=\"20\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"param2\" operator=\"greaterThan\" value=\"-5\"/> <SimplePredicate field=\"param2\" operator=\"lessThan\" value=\"50\"/> 
</CompoundPredicate> </Attribute> </Characteristic> </Characteristics> </Scorecard> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header copyright=\"JBOSS\"/> <DataDictionary numberOfFields=\"5\"> <DataField dataType=\"double\" name=\"fld1\" optype=\"continuous\"/> <DataField dataType=\"double\" name=\"fld2\" optype=\"continuous\"/> <DataField dataType=\"string\" name=\"fld3\" optype=\"categorical\"> <Value value=\"true\"/> <Value value=\"false\"/> </DataField> <DataField dataType=\"string\" name=\"fld4\" optype=\"categorical\"> <Value value=\"optA\"/> <Value value=\"optB\"/> <Value value=\"optC\"/> </DataField> <DataField dataType=\"string\" name=\"fld5\" optype=\"categorical\"> <Value value=\"tgtX\"/> <Value value=\"tgtY\"/> <Value value=\"tgtZ\"/> </DataField> </DataDictionary> <TreeModel functionName=\"classification\" modelName=\"TreeTest\"> <MiningSchema> <MiningField name=\"fld1\"/> <MiningField name=\"fld2\"/> <MiningField name=\"fld3\"/> <MiningField name=\"fld4\"/> <MiningField name=\"fld5\" usageType=\"predicted\"/> </MiningSchema> <Node score=\"tgtX\"> <True/> <Node score=\"tgtX\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <Node score=\"tgtX\"> <CompoundPredicate booleanOperator=\"surrogate\"> <SimplePredicate field=\"fld1\" operator=\"lessThan\" value=\"30.0\"/> <SimplePredicate field=\"fld2\" operator=\"greaterThan\" value=\"20.0\"/> </CompoundPredicate> <Node score=\"tgtX\"> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"40.0\"/> </Node> <Node score=\"tgtZ\"> <SimplePredicate field=\"fld2\" operator=\"greaterOrEqual\" value=\"10.0\"/> </Node> </Node> <Node score=\"tgtZ\"> <CompoundPredicate booleanOperator=\"or\"> <SimplePredicate field=\"fld1\" operator=\"greaterOrEqual\" value=\"60.0\"/> <SimplePredicate field=\"fld1\" 
operator=\"lessOrEqual\" value=\"70.0\"/> </CompoundPredicate> <Node score=\"tgtZ\"> <SimpleSetPredicate booleanOperator=\"isNotIn\" field=\"fld4\"> <Array type=\"string\">optA optB</Array> </SimpleSetPredicate> </Node> </Node> </Node> <Node score=\"tgtY\"> <CompoundPredicate booleanOperator=\"or\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optC\"/> </CompoundPredicate> <Node score=\"tgtY\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"fld1\" operator=\"greaterThan\" value=\"10.0\"/> <SimplePredicate field=\"fld1\" operator=\"lessThan\" value=\"50.0\"/> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optA\"/> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"100.0\"/> <SimplePredicate field=\"fld3\" operator=\"equal\" value=\"false\"/> </CompoundPredicate> </Node> <Node score=\"tgtZ\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"fld4\" operator=\"equal\" value=\"optC\"/> <SimplePredicate field=\"fld2\" operator=\"lessThan\" value=\"30.0\"/> </CompoundPredicate> </Node> </Node> </Node> </TreeModel> </PMML>", "<PMML version=\"4.2\" xsi:schemaLocation=\"http://www.dmg.org/PMML-4_2 http://www.dmg.org/v4-2-1/pmml-4-2.xsd\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns=\"http://www.dmg.org/PMML-4_2\"> <Header> <Application name=\"Drools-PMML\" version=\"7.0.0-SNAPSHOT\" /> </Header> <DataDictionary numberOfFields=\"7\"> <DataField name=\"age\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"occupation\" optype=\"categorical\" dataType=\"string\"> <Value value=\"SKYDIVER\" /> <Value value=\"ASTRONAUT\" /> <Value value=\"PROGRAMMER\" /> <Value value=\"TEACHER\" /> <Value value=\"INSTRUCTOR\" /> </DataField> <DataField name=\"residenceState\" optype=\"categorical\" dataType=\"string\"> <Value value=\"AP\" /> <Value value=\"KN\" /> <Value value=\"TN\" /> </DataField> <DataField 
name=\"validLicense\" optype=\"categorical\" dataType=\"boolean\" /> <DataField name=\"overallScore\" optype=\"continuous\" dataType=\"double\" /> <DataField name=\"grade\" optype=\"categorical\" dataType=\"string\"> <Value value=\"A\" /> <Value value=\"B\" /> <Value value=\"C\" /> <Value value=\"D\" /> <Value value=\"F\" /> </DataField> <DataField name=\"qualificationLevel\" optype=\"categorical\" dataType=\"string\"> <Value value=\"Unqualified\" /> <Value value=\"Barely\" /> <Value value=\"Well\" /> <Value value=\"Over\" /> </DataField> </DataDictionary> <MiningModel modelName=\"SampleModelChainMine\" functionName=\"classification\"> <MiningSchema> <MiningField name=\"age\" /> <MiningField name=\"occupation\" /> <MiningField name=\"residenceState\" /> <MiningField name=\"validLicense\" /> <MiningField name=\"overallScore\" /> <MiningField name=\"qualificationLevel\" usageType=\"target\"/> </MiningSchema> <Segmentation multipleModelMethod=\"modelChain\"> <Segment id=\"1\"> <True /> <Scorecard modelName=\"Sample Score 1\" useReasonCodes=\"true\" isScorable=\"true\" functionName=\"regression\" baselineScore=\"0.0\" initialScore=\"0.345\"> <MiningSchema> <MiningField name=\"age\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"occupation\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"residenceState\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"validLicense\" usageType=\"active\" invalidValueTreatment=\"asMissing\" /> <MiningField name=\"overallScore\" usageType=\"predicted\" /> </MiningSchema> <Output> <OutputField name=\"calculatedScore\" displayName=\"Final Score\" dataType=\"double\" feature=\"predictedValue\" targetField=\"overallScore\" /> </Output> <Characteristics> <Characteristic name=\"AgeScore\" baselineScore=\"0.0\" reasonCode=\"ABZ\"> <Extension name=\"cellRef\" value=\"USDBUSD8\" /> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" 
value=\"USDCUSD10\" /> <SimplePredicate field=\"age\" operator=\"lessOrEqual\" value=\"5\" /> </Attribute> <Attribute partialScore=\"30.0\" reasonCode=\"CX1\"> <Extension name=\"cellRef\" value=\"USDCUSD11\" /> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"5\" /> <SimplePredicate field=\"age\" operator=\"lessThan\" value=\"12\" /> </CompoundPredicate> </Attribute> <Attribute partialScore=\"40.0\" reasonCode=\"CX2\"> <Extension name=\"cellRef\" value=\"USDCUSD12\" /> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"13\" /> <SimplePredicate field=\"age\" operator=\"lessThan\" value=\"44\" /> </CompoundPredicate> </Attribute> <Attribute partialScore=\"25.0\"> <Extension name=\"cellRef\" value=\"USDCUSD13\" /> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"45\" /> </Attribute> </Characteristic> <Characteristic name=\"OccupationScore\" baselineScore=\"0.0\"> <Extension name=\"cellRef\" value=\"USDBUSD16\" /> <Attribute partialScore=\"-10.0\" reasonCode=\"CX2\"> <Extension name=\"description\" value=\"skydiving is a risky occupation\" /> <Extension name=\"cellRef\" value=\"USDCUSD18\" /> <SimpleSetPredicate field=\"occupation\" booleanOperator=\"isIn\"> <Array n=\"2\" type=\"string\">SKYDIVER ASTRONAUT</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD19\" /> <SimpleSetPredicate field=\"occupation\" booleanOperator=\"isIn\"> <Array n=\"2\" type=\"string\">TEACHER INSTRUCTOR</Array> </SimpleSetPredicate> </Attribute> <Attribute partialScore=\"5.0\"> <Extension name=\"cellRef\" value=\"USDCUSD20\" /> <SimplePredicate field=\"occupation\" operator=\"equal\" value=\"PROGRAMMER\" /> </Attribute> </Characteristic> <Characteristic name=\"ResidenceStateScore\" baselineScore=\"0.0\" reasonCode=\"RES\"> <Extension name=\"cellRef\" value=\"USDBUSD22\" /> <Attribute 
partialScore=\"-10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD24\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"AP\" /> </Attribute> <Attribute partialScore=\"10.0\"> <Extension name=\"cellRef\" value=\"USDCUSD25\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"KN\" /> </Attribute> <Attribute partialScore=\"5.0\"> <Extension name=\"cellRef\" value=\"USDCUSD26\" /> <SimplePredicate field=\"residenceState\" operator=\"equal\" value=\"TN\" /> </Attribute> </Characteristic> <Characteristic name=\"ValidLicenseScore\" baselineScore=\"0.0\"> <Extension name=\"cellRef\" value=\"USDBUSD28\" /> <Attribute partialScore=\"1.0\" reasonCode=\"LX00\"> <Extension name=\"cellRef\" value=\"USDCUSD30\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"true\" /> </Attribute> <Attribute partialScore=\"-1.0\" reasonCode=\"LX00\"> <Extension name=\"cellRef\" value=\"USDCUSD31\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"false\" /> </Attribute> </Characteristic> </Characteristics> </Scorecard> </Segment> <Segment id=\"2\"> <True /> <TreeModel modelName=\"SampleTree\" functionName=\"classification\" missingValueStrategy=\"lastPrediction\" noTrueChildStrategy=\"returnLastPrediction\"> <MiningSchema> <MiningField name=\"age\" usageType=\"active\" /> <MiningField name=\"validLicense\" usageType=\"active\" /> <MiningField name=\"calculatedScore\" usageType=\"active\" /> <MiningField name=\"qualificationLevel\" usageType=\"predicted\" /> </MiningSchema> <Output> <OutputField name=\"qualification\" displayName=\"Qualification Level\" dataType=\"string\" feature=\"predictedValue\" targetField=\"qualificationLevel\" /> </Output> <Node score=\"Well\" id=\"1\"> <True/> <Node score=\"Barely\" id=\"2\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"age\" operator=\"greaterOrEqual\" value=\"16\" /> <SimplePredicate field=\"validLicense\" operator=\"equal\" value=\"true\" /> 
</CompoundPredicate> <Node score=\"Barely\" id=\"3\"> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"50.0\" /> </Node> <Node score=\"Well\" id=\"4\"> <CompoundPredicate booleanOperator=\"and\"> <SimplePredicate field=\"calculatedScore\" operator=\"greaterThan\" value=\"50.0\" /> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"60.0\" /> </CompoundPredicate> </Node> <Node score=\"Over\" id=\"5\"> <SimplePredicate field=\"calculatedScore\" operator=\"greaterThan\" value=\"60.0\" /> </Node> </Node> <Node score=\"Unqualified\" id=\"6\"> <CompoundPredicate booleanOperator=\"surrogate\"> <SimplePredicate field=\"age\" operator=\"lessThan\" value=\"16\" /> <SimplePredicate field=\"calculatedScore\" operator=\"lessOrEqual\" value=\"40.0\" /> <True /> </CompoundPredicate> </Node> </Node> </TreeModel> </Segment> </Segmentation> </MiningModel> </PMML>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <PMML version=\"4.1\" xmlns=\"http://www.dmg.org/PMML-4_1\"> <Header> <Application name=\"KNIME\" version=\"2.8.0\"/> </Header> <DataDictionary numberOfFields=\"5\"> <DataField name=\"sepal_length\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"4.3\" rightMargin=\"7.9\"/> </DataField> <DataField name=\"sepal_width\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"2.0\" rightMargin=\"4.4\"/> </DataField> <DataField name=\"petal_length\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"1.0\" rightMargin=\"6.9\"/> </DataField> <DataField name=\"petal_width\" optype=\"continuous\" dataType=\"double\"> <Interval closure=\"closedClosed\" leftMargin=\"0.1\" rightMargin=\"2.5\"/> </DataField> <DataField name=\"class\" optype=\"categorical\" dataType=\"string\"/> </DataDictionary> <ClusteringModel modelName=\"SingleIrisKMeansClustering\" functionName=\"clustering\" modelClass=\"centerBased\" 
numberOfClusters=\"4\"> <MiningSchema> <MiningField name=\"sepal_length\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"sepal_width\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"petal_length\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"petal_width\" invalidValueTreatment=\"asIs\"/> <MiningField name=\"class\" usageType=\"predicted\"/> </MiningSchema> <ComparisonMeasure kind=\"distance\"> <squaredEuclidean/> </ComparisonMeasure> <ClusteringField field=\"sepal_length\" compareFunction=\"absDiff\"/> <ClusteringField field=\"sepal_width\" compareFunction=\"absDiff\"/> <ClusteringField field=\"petal_length\" compareFunction=\"absDiff\"/> <ClusteringField field=\"petal_width\" compareFunction=\"absDiff\"/> <Cluster name=\"virginica\" size=\"32\"> <Array n=\"4\" type=\"real\">6.9125000000000005 3.099999999999999 5.846874999999999 2.1312499999999996</Array> </Cluster> <Cluster name=\"versicolor\" size=\"41\"> <Array n=\"4\" type=\"real\">6.23658536585366 2.8585365853658535 4.807317073170731 1.6219512195121943</Array> </Cluster> <Cluster name=\"setosa\" size=\"50\"> <Array n=\"4\" type=\"real\">5.005999999999999 3.4180000000000006 1.464 0.2439999999999999</Array> </Cluster> <Cluster name=\"unknown\" size=\"27\"> <Array n=\"4\" type=\"real\">5.529629629629629 2.6222222222222222 3.940740740740741 1.2185185185185188</Array> </Cluster> </ClusteringModel> </PMML>" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/pmml-examples-ref_pmml-models
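As a rough, non-normative sketch of how the "Sample Score 1" scorecard in the model-chain example above is evaluated, the following Python snippet computes the initial score plus, for each characteristic, the partial score of the first attribute whose predicate matches the record. Only the AgeScore and ValidLicenseScore characteristics are transcribed here, and reason-code handling is omitted:

```python
# Minimal sketch of scorecard evaluation for "Sample Score 1" above.
INITIAL_SCORE = 0.345

# Each characteristic is a list of (predicate, partialScore) pairs,
# transcribed from the model's Attribute elements.
# (OccupationScore and ResidenceStateScore are omitted for brevity.)
CHARACTERISTICS = {
    "AgeScore": [
        (lambda r: r["age"] <= 5, 10.0),
        (lambda r: 5 <= r["age"] < 12, 30.0),
        (lambda r: 13 <= r["age"] < 44, 40.0),
        (lambda r: r["age"] >= 45, 25.0),
    ],
    "ValidLicenseScore": [
        (lambda r: r["validLicense"] is True, 1.0),
        (lambda r: r["validLicense"] is False, -1.0),
    ],
}

def score(record):
    total = INITIAL_SCORE
    for attributes in CHARACTERISTICS.values():
        for predicate, partial in attributes:
            if predicate(record):
                total += partial
                break  # only the first matching attribute contributes
    return total

print(round(score({"age": 30, "validLicense": True}), 3))  # 0.345 + 40.0 + 1.0
```

In the full model this score is then fed into the second segment's tree model via the `calculatedScore` output field.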
4.2. Configuring Single Sign-On for Virtual Machines
4.2. Configuring Single Sign-On for Virtual Machines Configuring single sign-on, also known as password delegation, allows you to automatically log in to a virtual machine using the credentials you use to log in to the VM Portal. Single sign-on can be used on both Red Hat Enterprise Linux and Windows virtual machines. Note Single sign-on is not supported for virtual machines running Red Hat Enterprise Linux 8.0. Important If single sign-on to the VM Portal is enabled, single sign-on to virtual machines will not be possible. With single sign-on to the VM Portal enabled, the VM Portal does not need to accept a password, thus the password cannot be delegated to sign in to virtual machines. 4.2.1. Configuring Single Sign-On for Red Hat Enterprise Linux Virtual Machines Using IPA (IdM) To configure single sign-on for Red Hat Enterprise Linux virtual machines using GNOME and KDE graphical desktop environments and IPA (IdM) servers, you must install the ovirt-guest-agent package on the virtual machine and install the packages associated with your window manager. Important The following procedure assumes that you have a working IPA configuration and that the IPA domain is already joined to the Manager. You must also ensure that the clocks on the Manager, the virtual machine and the system on which IPA (IdM) is hosted are synchronized using NTP. Note Single sign-on with IPA (IdM) is deprecated for virtual machines running Red Hat Enterprise Linux version 7 or earlier and unsupported for virtual machines running Red Hat Enterprise Linux 8 or Windows operating systems. Configuring Single Sign-On for Red Hat Enterprise Linux Virtual Machines Log in to the Red Hat Enterprise Linux virtual machine. 
Enable the repository: For Red Hat Enterprise Linux 6: # subscription-manager repos --enable=rhel-6-server-rhv-4-agent-rpms For Red Hat Enterprise Linux 7: # subscription-manager repos --enable=rhel-7-server-rh-common-rpms Download and install the guest agent, single sign-on, and IPA packages: # yum install ovirt-guest-agent-common ovirt-guest-agent-pam-module ovirt-guest-agent-gdm-plugin ipa-client Run the following command and follow the prompts to configure ipa-client and join the virtual machine to the domain: # ipa-client-install --permit --mkhomedir Note In environments that use DNS obfuscation, this command should be: # ipa-client-install --domain= FQDN --server= FQDN For Red Hat Enterprise Linux 7.2 and later: # authconfig --enablenis --update Note Red Hat Enterprise Linux 7.2 has a new version of the System Security Services Daemon (SSSD), which introduces configuration that is incompatible with the Red Hat Virtualization Manager guest agent single sign-on implementation. This command ensures that single sign-on works. Fetch the details of an IPA user: # getent passwd ipa-user Record the IPA user's UID and GID: ipa-user :*:936600010:936600001::/home/ ipa-user :/bin/sh Create a home directory for the IPA user: # mkdir /home/ ipa-user Assign ownership of the directory to the IPA user: # chown 936600010:936600001 /home/ ipa-user Log in to the VM Portal using the user name and password of a user configured to use single sign-on and connect to the console of the virtual machine. You will be logged in automatically. 4.2.2. Configuring single sign-on for Windows virtual machines To configure single sign-on for Windows virtual machines, the Windows guest agent must be installed on the guest virtual machine. The virtio-win ISO image provides this agent. If the virtio-win _version .iso image is not available in your storage domain, contact your system administrator. Procedure Select the Windows virtual machine. Ensure the machine is powered up. 
On the virtual machine, locate the CD drive and open the CD. Launch virtio-win-guest-tools . Click Options , select Install oVirt Guest Agent , and click OK . Click Install . When the installation completes, you are prompted to restart the machine to apply the changes. Log in to the VM Portal using the user name and password of a user configured to use single sign-on and connect to the console of the virtual machine. You will be logged in automatically. 4.2.3. Disabling Single Sign-On for Virtual Machines The following procedure explains how to disable single sign-on for a virtual machine. Disabling Single Sign-On for Virtual Machines Select a virtual machine and click Edit . Click the Console tab. Select the Disable Single Sign On check box. Click OK .
[ "subscription-manager repos --enable=rhel-6-server-rhv-4-agent-rpms", "subscription-manager repos --enable=rhel-7-server-rh-common-rpms", "yum install ovirt-guest-agent-common ovirt-guest-agent-pam-module ovirt-guest-agent-gdm-plugin ipa-client", "ipa-client-install --permit --mkhomedir", "ipa-client-install --domain= FQDN --server= FQDN", "authconfig --enablenis --update", "getent passwd ipa-user", "ipa-user :*:936600010:936600001::/home/ ipa-user :/bin/sh", "mkdir /home/ ipa-user", "chown 936600010:936600001 /home/ ipa-user" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/sect-configuring_single_sign-on_for_virtual_machines
Chapter 29. Deploying and testing the IT order case project
Chapter 29. Deploying and testing the IT order case project After you create and define all components of the new IT_Orders_New case project, deploy and test the new project. Prerequisites You have a running KIE Server instance connected to Business Central. For more information, see Installing and configuring Red Hat Process Automation Manager on Red Hat JBoss EAP 7.4 . You have created a new case in Business Central. For more information, see Chapter 25, Creating a new IT_Orders case project . You have created the data objects. For more information, see Chapter 26, Data objects . You have created the Place order sub-process. For more information, see Section 27.1, "Creating the Place order sub-process" . You have designed the orderhardware case definition. For more information, see Chapter 27, Designing the case definition . Procedure In Business Central, go to Menu → Design → Projects and click IT_Orders_New . Click Deploy . Go to Menu → Manage → Process Definitions → Manage Process Instances → New Process Instance . Go to Menu → Deploy and click Execution Servers , and verify that a new container is deployed and started. Use the Case Management Showcase application to start a new case instance. For instructions about using the Showcase application, see Using the Showcase application for case management .
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/getting_started_with_red_hat_process_automation_manager/case-management-deploy-test-proc
probe::nfs.aop.set_page_dirty
probe::nfs.aop.set_page_dirty Name probe::nfs.aop.set_page_dirty - NFS client marking page as dirty Synopsis nfs.aop.set_page_dirty Values __page - the address of the page page_flag - page flags Description This probe attaches to the generic __set_page_dirty_nobuffers function. It therefore fires on many other file systems in addition to the NFS client.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfs-aop-set-page-dirty
Chapter 56. Compiler and Tools
Chapter 56. Compiler and Tools Memory consumption of applications using libcurl grows with each TLS connection The Network Security Services (NSS) PK11_DestroyGenericObject() function does not release resources allocated by PK11_CreateGenericObject() early enough. Consequently, the memory allocated by applications using the libcurl package can grow with each TLS connection. To work around this problem, either re-use existing TLS connections where possible, or use certificates and keys from the NSS database instead of loading them from files directly using libcurl (BZ# 1510247 ) OProfile and perf cannot sample events on 2nd generation Intel Xeon Phi processors when the NMI watchdog is disabled Due to a performance counter hardware error, sampling performance events with the default hardware event CPU_CLK_UNHALTED may fail on 2nd generation Intel Xeon Phi processors. As a consequence, the OProfile and perf tools fail to receive any samples when the NMI watchdog is disabled. To work around this problem, enable the NMI watchdog before running the perf or operf command: Note that this workaround allows only the selected tool to work correctly, but not the NMI watchdog itself, because the NMI watchdog is based on the erroneous counter. (BZ#1536004) ksh with the KEYBD trap mishandles multibyte characters The Korn Shell (KSH) is unable to handle multibyte characters correctly when the KEYBD trap is enabled. Consequently, when the user enters, for example, Japanese characters, ksh displays an incorrect string. To work around this problem, disable the KEYBD trap in the /etc/kshrc file by commenting out the following line: For more details, see a related Knowledgebase solution . (BZ# 1503922 )
[ "echo 1 > /proc/sys/kernel/nmi_watchdog operf some_examined_program opreport", "trap keybd_trap KEYBD" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/known_issues_compiler_and_tools
25.11. Additional Resources
25.11. Additional Resources Refer to Section 24.7, "Additional Resources" for more information about the Apache HTTP Server. 25.11.1. Useful Websites http://www.modssl.org/ - The mod_ssl website is the definitive source for information about mod_ssl . The website includes a wealth of documentation, including a User Manual at http://www.modssl.org/docs/ .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Apache_HTTP_Secure_Server_Configuration-Additional_Resources
Building applications
Building applications OpenShift Container Platform 4.18 Creating and managing applications on OpenShift Container Platform Red Hat OpenShift Documentation Team
The remainder of this section is the guide's consolidated command and manifest listing, serialized here as a single quoted array rather than as readable text. It collects the example `oc` commands and YAML resources used throughout the guide, covering: creating and configuring projects (`oc new-project`, project request templates and messages, disabling self-provisioning); working with templates (`oc process`, template parameters, generated expression values, waiting for readiness); creating applications with `oc new-app` from source code, images, and templates, including environment and build-environment variables; a Ruby on Rails with PostgreSQL quickstart; installing the Helm CLI and working with Helm charts and chart repositories (`HelmChartRepository`, `ProjectHelmChartRepository`, TLS configuration); Deployments and DeploymentConfigs, including rollouts, triggers, resource limits, the rolling, recreate, and custom deployment strategies, lifecycle hooks, and blue-green and A/B deployment patterns; resource quotas, including per-project quotas, GPU quotas, project-template quotas, and cluster resource quotas; config maps and the ways pods consume them (environment variables, command arguments, and volumes); and container health checks with readiness, liveness, and startup probes. The serialized listing is truncated partway through a liveness probe example.
timeoutSeconds: 1 periodSeconds: 10 successThreshold: 1 failureThreshold: 3", "apiVersion: v1 kind: Pod metadata: labels: test: health-check name: my-application spec: containers: - name: my-container 1 args: image: registry.k8s.io/goproxy:0.1 2 livenessProbe: 3 tcpSocket: 4 port: 8080 5 initialDelaySeconds: 15 6 periodSeconds: 20 7 timeoutSeconds: 10 8 readinessProbe: 9 httpGet: 10 host: my-host 11 scheme: HTTPS 12 path: /healthz port: 8080 13 startupProbe: 14 exec: 15 command: 16 - cat - /tmp/healthy failureThreshold: 30 17 periodSeconds: 20 18 timeoutSeconds: 10 19", "oc create -f <file-name>.yaml", "oc describe pod my-application", "Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 9s default-scheduler Successfully assigned openshift-logging/liveness-exec to ip-10-0-143-40.ec2.internal Normal Pulling 2s kubelet, ip-10-0-143-40.ec2.internal pulling image \"registry.k8s.io/liveness\" Normal Pulled 1s kubelet, ip-10-0-143-40.ec2.internal Successfully pulled image \"registry.k8s.io/liveness\" Normal Created 1s kubelet, ip-10-0-143-40.ec2.internal Created container Normal Started 1s kubelet, ip-10-0-143-40.ec2.internal Started container", "oc describe pod pod1", ". 
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled <unknown> Successfully assigned aaa/liveness-http to ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Normal AddedInterface 47s multus Add eth0 [10.129.2.11/23] Normal Pulled 46s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 773.406244ms Normal Pulled 28s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 233.328564ms Normal Created 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Created container liveness Normal Started 10s (x3 over 46s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Started container liveness Warning Unhealthy 10s (x6 over 34s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Liveness probe failed: HTTP probe failed with statuscode: 500 Normal Killing 10s (x2 over 28s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Container liveness failed liveness probe, will be restarted Normal Pulling 10s (x3 over 47s) kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Pulling image \"registry.k8s.io/liveness\" Normal Pulled 10s kubelet, ci-ln-37hz77b-f76d1-wdpjv-worker-b-snzrj Successfully pulled image \"registry.k8s.io/liveness\" in 244.116568ms", "oc adm prune <object_type> <options>", "oc adm prune groups --sync-config=path/to/sync/config [<options>]", "oc adm prune groups --sync-config=ldap-sync-config.yaml", "oc adm prune groups --sync-config=ldap-sync-config.yaml --confirm", "oc adm prune deployments [<options>]", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune deployments --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", "oc adm prune builds [<options>]", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m", "oc adm prune builds --orphans --keep-complete=5 --keep-failed=1 --keep-younger-than=60m --confirm", 
"spec: schedule: 0 0 * * * 1 suspend: false 2 keepTagRevisions: 3 3 keepYoungerThanDuration: 60m 4 keepYoungerThan: 3600000000000 5 resources: {} 6 affinity: {} 7 nodeSelector: {} 8 tolerations: [] 9 successfulJobsHistoryLimit: 3 10 failedJobsHistoryLimit: 3 11 status: observedGeneration: 2 12 conditions: 13 - type: Available status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Ready message: \"Periodic image pruner has been created.\" - type: Scheduled status: \"True\" lastTransitionTime: 2019-10-09T03:13:45 reason: Scheduled message: \"Image pruner job has been scheduled.\" - type: Failed status: \"False\" lastTransitionTime: 2019-10-09T03:13:45 reason: Succeeded message: \"Most recent image pruning job succeeded.\"", "oc create -f <filename>.yaml", "kind: List apiVersion: v1 items: - apiVersion: v1 kind: ServiceAccount metadata: name: pruner namespace: openshift-image-registry - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: openshift-image-registry-pruner roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:image-pruner subjects: - kind: ServiceAccount name: pruner namespace: openshift-image-registry - apiVersion: batch/v1 kind: CronJob metadata: name: image-pruner namespace: openshift-image-registry spec: schedule: \"0 0 * * *\" concurrencyPolicy: Forbid successfulJobsHistoryLimit: 1 failedJobsHistoryLimit: 3 jobTemplate: spec: template: spec: restartPolicy: OnFailure containers: - image: \"quay.io/openshift/origin-cli:4.1\" resources: requests: cpu: 1 memory: 1Gi terminationMessagePolicy: FallbackToLogsOnError command: - oc args: - adm - prune - images - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt - --keep-tag-revisions=5 - --keep-younger-than=96h - --confirm=true name: image-pruner serviceAccountName: pruner", "oc adm prune images [<options>]", "oc rollout restart deployment/image-registry -n openshift-image-registry", "oc adm prune images
--keep-tag-revisions=3 --keep-younger-than=60m", "oc adm prune images --prune-over-size-limit", "oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm", "oc adm prune images --prune-over-size-limit --confirm", "oc get is -n <namespace> -o go-template='{{range USDisi, USDis := .items}}{{range USDti, USDtag := USDis.status.tags}}' '{{range USDii, USDitem := USDtag.items}}{{if eq USDitem.image \"sha256:<hash>\"}}{{USDis.metadata.name}}:{{USDtag.tag}} at position {{USDii}} out of {{len USDtag.items}}\\n' '{{end}}{{end}}{{end}}{{end}}'", "myapp:v2 at position 4 out of 5 myapp:v2.1 at position 2 out of 2 myapp:v2.1-may-2016 at position 0 out of 1", "error: error communicating with registry: Get https://172.30.30.30:5000/healthz: http: server gave HTTP response to HTTPS client", "error: error communicating with registry: Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\" error: error communicating with registry: [Get https://172.30.30.30:5000/healthz: x509: certificate signed by unknown authority, Get http://172.30.30.30:5000/healthz: malformed HTTP response \"\\x15\\x03\\x01\\x00\\x02\\x02\"]", "error: error communicating with registry: Get https://172.30.30.30:5000/: x509: certificate signed by unknown authority", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":true}}' --type=merge", "service_account=USD(oc get -n openshift-image-registry -o jsonpath='{.spec.template.spec.serviceAccountName}' deploy/image-registry)", "oc adm policy add-cluster-role-to-user system:image-pruner -z USD{service_account} -n openshift-image-registry", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=check'", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c 'REGISTRY_LOG_LEVEL=info /usr/bin/dockerregistry -prune=check'", "time=\"2017-06-22T11:50:25.066156047Z\" level=info msg=\"start prune 
(dry-run mode)\" distribution_version=\"v2.4.1+unknown\" kubernetes_version=v1.6.1+USDFormat:%hUSD openshift_version=unknown time=\"2017-06-22T11:50:25.092257421Z\" level=info msg=\"Would delete blob: sha256:00043a2a5e384f6b59ab17e2c3d3a3d0a7de01b2cabeb606243e468acc663fa5\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092395621Z\" level=info msg=\"Would delete blob: sha256:0022d49612807cb348cabc562c072ef34d756adfe0100a61952cbcb87ee6578a\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:25.092492183Z\" level=info msg=\"Would delete blob: sha256:0029dd4228961086707e53b881e25eba0564fa80033fbbb2e27847a28d16a37c\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.673946639Z\" level=info msg=\"Would delete blob: sha256:ff7664dfc213d6cc60fd5c5f5bb00a7bf4a687e18e1df12d349a1d07b2cf7663\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674024531Z\" level=info msg=\"Would delete blob: sha256:ff7a933178ccd931f4b5f40f9f19a65be5eeeec207e4fad2a5bafd28afbef57e\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 time=\"2017-06-22T11:50:26.674675469Z\" level=info msg=\"Would delete blob: sha256:ff9b8956794b426cc80bb49a604a0b24a1553aae96b930c6919a6675db3d5e06\" go.version=go1.7.5 instance.id=b097121c-a864-4e0c-ad6c-cc25f8fdf5a6 Would delete 13374 blobs Would free up 2.835 GiB of disk space Use -prune=delete to actually delete the data", "oc -n openshift-image-registry exec pod/image-registry-3-vhndw -- /bin/sh -c '/usr/bin/dockerregistry -prune=delete'", "Deleted 13374 blobs Freed up 2.835 GiB of disk space", "oc patch configs.imageregistry.operator.openshift.io/cluster -p '{\"spec\":{\"readOnly\":false}}' --type=merge", "oc idle <service>", "oc idle --resource-names-file <filename>", "oc scale --replicas=1 dc <dc_name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html-single/building_applications/index
Chapter 2. Developing and deploying a Node.js application
Chapter 2. Developing and deploying a Node.js application You can create new Node.js applications and deploy them to OpenShift. 2.1. Developing a Node.js application For a basic Node.js application, you must create a JavaScript file containing Node.js methods. Prerequisites npm installed. Procedure Create a new directory myApp , and navigate to it. USD mkdir myApp USD cd myApp This is the root directory for the application. Initialize your application with npm . The rest of this example assumes the entry point is app.js , which you are prompted to set when running npm init . USD cd myApp USD npm init Create the entry point in a new file called app.js . Example app.js const http = require('http'); const server = http.createServer((request, response) => { response.statusCode = 200; response.setHeader('Content-Type', 'application/json'); const greeting = {content: 'Hello, World!'}; response.write(JSON.stringify(greeting)); response.end(); }); server.listen(8080, () => { console.log('Server running at http://localhost:8080'); }); Start your application. USD node app.js Server running at http://localhost:8080 Using curl or your browser, verify your application is running at http://localhost:8080 . USD curl http://localhost:8080 {"content":"Hello, World!"} Additional information The Node.js runtime provides the core Node.js API which is documented in the Node.js API documentation . 2.2. Deploying a Node.js application to OpenShift To deploy your Node.js application to OpenShift, add nodeshift to the application, configure the package.json file and then deploy using nodeshift . 2.2.1. Preparing Node.js application for OpenShift deployment To prepare a Node.js application for OpenShift deployment, you must perform the following steps: Add nodeshift to the application. Add openshift and start entries to the package.json file. Prerequisites npm installed. Procedure Add nodeshift to your application.
USD npm install nodeshift --save-dev Add the openshift and start entries to the scripts section in package.json . { "name": "myApp", "version": "1.0.0", "description": "", "main": "app.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1", "openshift": "nodeshift --expose --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12", "start": "node app.js", ... } ... } The openshift script uses nodeshift to deploy the application to OpenShift. Note Universal base images and RHEL images are available for Node.js. See the Node.js release notes for more information on image names. Optional : Add a files section in package.json . { "name": "myApp", "version": "1.0.0", "description": "", "main": "app.js", "scripts": { ... }, "files": [ "package.json", "app.js" ] ... } The files section tells nodeshift what files and directories to include when deploying to OpenShift. nodeshift uses the node-tar module to create a tar file based on the files and directories you list in the files section. This tar file is used when nodeshift deploys your application to OpenShift. If the files section is not specified, nodeshift will send the entire current directory, excluding: node_modules/ .git/ tmp/ It is recommended that you include a files section in package.json to avoid including unnecessary files when deploying to OpenShift. 2.2.2. Deploying a Node.js application to OpenShift You can deploy a Node.js application to OpenShift using nodeshift . Prerequisites The oc CLI client installed. npm installed. Ensure all the ports used by your application are correctly exposed when configuring your routes. Procedure Log in to your OpenShift instance with the oc client. USD oc login ... Use nodeshift to deploy the application to OpenShift. USD npm run openshift 2.3. Deploying a Node.js application to stand-alone Red Hat Enterprise Linux You can deploy a Node.js application to stand-alone Red Hat Enterprise Linux using npm . Prerequisites A Node.js application. 
npm 6.14.8 installed. RHEL 7 or RHEL 8 installed. Node.js installed. Procedure If you have specified additional dependencies in the package.json file of your project, ensure that you install them before running your applications. USD npm install Deploy the application from the application's root directory. USD node app.js Server running at http://localhost:8080 Verification steps Use curl or your browser to verify your application is running at http://localhost:8080 USD curl http://localhost:8080
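The package.json edits described in this chapter can be sanity-checked from the shell before deploying. The following is a minimal sketch, not the canonical procedure: the /tmp/myApp path is an illustrative assumption, and the file contains only the openshift and start entries shown earlier in this section.

```shell
# Sketch: write a minimal package.json with the "openshift" and "start"
# script entries from this chapter, then confirm both entries landed.
# The /tmp/myApp location is illustrative only.
mkdir -p /tmp/myApp && cd /tmp/myApp

cat > package.json <<'EOF'
{
  "name": "myApp",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "openshift": "nodeshift --expose --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12",
    "start": "node app.js"
  },
  "files": ["package.json", "app.js"]
}
EOF

# Both script entries must exist before "npm run openshift" can work.
grep -q '"openshift"' package.json && grep -q '"start"' package.json && echo "scripts configured"
```

Running the actual deployment still requires a logged-in oc session and npm run openshift, as described above; this check only confirms the file edits.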
[ "mkdir myApp cd myApp", "cd myApp npm init", "const http = require('http'); const server = http.createServer((request, response) => { response.statusCode = 200; response.setHeader('Content-Type', 'application/json'); const greeting = {content: 'Hello, World!'}; response.write(JSON.stringify(greeting)); response.end(); }); server.listen(8080, () => { console.log('Server running at http://localhost:8080'); });", "node app.js Server running at http://localhost:8080", "curl http://localhost:8080 {\"content\":\"Hello, World!\"}", "npm install nodeshift --save-dev", "{ \"name\": \"myApp\", \"version\": \"1.0.0\", \"description\": \"\", \"main\": \"app.js\", \"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"openshift\": \"nodeshift --expose --dockerImage=registry.access.redhat.com/rhscl/ubi8/nodejs-12\", \"start\": \"node app.js\", } }", "{ \"name\": \"myApp\", \"version\": \"1.0.0\", \"description\": \"\", \"main\": \"app.js\", \"scripts\": { }, \"files\": [ \"package.json\", \"app.js\" ] }", "oc login", "npm run openshift", "npm install", "node app.js Server running at http://localhost:8080", "curl http://localhost:8080" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_node.js/22/html/node.js_runtime_guide/developing-and-deploying-a-nodejs-application_introduction-to-application-development-with-runtime
Chapter 2. GFS2 Configuration and Operational Considerations
Chapter 2. GFS2 Configuration and Operational Considerations The Global File System 2 (GFS2) file system allows several computers ("nodes") in a cluster to cooperatively share the same storage. To achieve this cooperation and maintain data consistency among the nodes, the nodes employ a cluster-wide locking scheme for file system resources. This locking scheme uses communication protocols such as TCP/IP to exchange locking information. You can improve performance by following the recommendations described in this chapter, including recommendations for creating, using, and maintaining a GFS2 file system. Important Make sure that your deployment of Red Hat High Availability Add-On meets your needs and can be supported. Consult with an authorized Red Hat representative to verify your configuration prior to deployment. 2.1. Formatting Considerations This section provides recommendations for how to format your GFS2 file system to optimize performance. 2.1.1. File System Size: Smaller is Better GFS2 is based on a 64-bit architecture, which can theoretically accommodate an 8 EB file system. However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100 TB and the current supported maximum size of a GFS2 file system for 32-bit hardware is 16 TB. Note that even though GFS2 large file systems are possible, that does not mean they are recommended. The rule of thumb with GFS2 is that smaller is better: it is better to have 10 1TB file systems than one 10TB file system. There are several reasons why you should keep your GFS2 file systems small: Less time is required to back up each file system. Less time is required if you need to check the file system with the fsck.gfs2 command. Less memory is required if you need to check the file system with the fsck.gfs2 command. In addition, fewer resource groups to maintain mean better performance.
Of course, if you make your GFS2 file system too small, you might run out of space, and that has its own consequences. You should consider your own use cases before deciding on a size. 2.1.2. Block Size: Default (4K) Blocks Are Preferred As of the Red Hat Enterprise Linux 6 release, the mkfs.gfs2 command attempts to estimate an optimal block size based on device topology. In general, 4K blocks are the preferred block size because 4K is the default page size (memory) for Linux. Unlike some other file systems, GFS2 does most of its operations using 4K kernel buffers. If your block size is 4K, the kernel has to do less work to manipulate the buffers. It is recommended that you use the default block size, which should yield the highest performance. You may need to use a different block size only if you require efficient storage of many very small files. 2.1.3. Number of Journals: One for Each Node that Mounts GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. If you need to mount from a third node, you can always add a journal with the gfs2_jadd command. With GFS2, you can add journals on the fly. 2.1.4. Journal Size: Default (128MB) Is Usually Optimal When you run the mkfs.gfs2 command to create a GFS2 file system, you may specify the size of the journals. If you do not specify a size, it will default to 128MB, which should be optimal for most applications. Some system administrators might think that 128MB is excessive and be tempted to reduce the size of the journal to the minimum of 8MB or a more conservative 32MB. While that might work, it can severely impact performance. Like many journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it is put into place. 
This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is automatically replayed at mount time. However, it does not take much file system activity to fill an 8MB journal, and when the journal is full, performance slows because GFS2 has to wait for writes to the storage. It is generally recommended to use the default journal size of 128MB. If your file system is very small (for example, 5GB), having a 128MB journal might be impractical. If you have a larger file system and can afford the space, using 256MB journals might improve performance. 2.1.5. Size and Number of Resource Groups When a GFS2 file system is created with the mkfs.gfs2 command, it divides the storage into uniform slices known as resource groups. It attempts to estimate an optimal resource group size (ranging from 32MB to 2GB). You can override the default with the -r option of the mkfs.gfs2 command. Your optimal resource group size depends on how you will use the file system. Consider how full it will be and whether or not it will be severely fragmented. You should experiment with different resource group sizes to see which results in optimal performance. It is a best practice to experiment with a test cluster before deploying GFS2 into full production. If your file system has too many resource groups (each of which is too small), block allocations can waste too much time searching tens of thousands (or hundreds of thousands) of resource groups for a free block. The more full your file system, the more resource groups that will be searched, and every one of them requires a cluster-wide lock. This leads to slow performance. If, however, your file system has too few resource groups (each of which is too big), block allocations might contend more often for the same resource group lock, which also impacts performance. 
For example, if you have a 10GB file system that is carved up into five resource groups of 2GB, the nodes in your cluster will fight over those five resource groups more often than if the same file system were carved into 320 resource groups of 32MB. The problem is exacerbated if your file system is nearly full because every block allocation might have to look through several resource groups before it finds one with a free block. GFS2 tries to mitigate this problem in two ways: First, when a resource group is completely full, it remembers that and tries to avoid checking it for future allocations (until a block is freed from it). If you never delete files, contention will be less severe. However, if your application is constantly deleting blocks and allocating new blocks on a file system that is mostly full, contention will be very high and this will severely impact performance. Second, when new blocks are added to an existing file (for example, appending) GFS2 will attempt to group the new blocks together in the same resource group as the file. This is done to increase performance: on a spinning disk, seeks take less time when they are physically close together. The worst-case scenario is when there is a central directory in which all the nodes create files because all of the nodes will constantly fight to lock the same resource group.
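The resource-group arithmetic above is easy to check. In the sketch below, the mkfs.gfs2 invocation is shown commented out because the device path and lock table name are hypothetical and running it would destroy data on a real device; only the arithmetic for the 10GB example actually executes.

```shell
# Illustrative only: format with 2 journals, 256MB journals, and a 512MB
# resource group size. The cluster name, lock table, and device path are
# hypothetical placeholders; do NOT run this against a real device.
# mkfs.gfs2 -p lock_dlm -t mycluster:mygfs2 -j 2 -J 256 -r 512 /dev/vg_cluster/lv_gfs2

# Resource-group math from the text: a 10GB file system carved into
# 2GB versus 32MB resource groups.
fs_mb=$((10 * 1024))
echo "$((fs_mb / 2048)) resource groups at 2GB"
echo "$((fs_mb / 32)) resource groups at 32MB"
```

As the output shows, five 2GB resource groups concentrate lock contention on a handful of cluster-wide locks, while 320 32MB resource groups spread it out, at the cost of longer free-block searches as the file system fills.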
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ch-considerations
Appendix A. Additional resources for tested deployment models
Appendix A. Additional resources for tested deployment models This appendix provides a reference for the additional resources relevant to the tested deployment models outlined in Tested deployment models. For additional information about each of the tested topologies described in this document, see the test-topologies GitHub repo . For questions around IBM Cloud-specific configurations or issues, see IBM support .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/appendix-topology-resources
Chapter 15. Deleting applications
Chapter 15. Deleting applications You can delete applications created in your project. 15.1. Deleting applications using the Developer perspective You can delete an application and all of its associated components using the Topology view in the Developer perspective: Click the application you want to delete to see the side panel with the resource details of the application. Click the Actions drop-down menu displayed on the upper right of the panel, and select Delete Application to see a confirmation dialog box. Enter the name of the application and click Delete to delete it. You can also right-click the application you want to delete and click Delete Application to delete it.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/building_applications/odc-deleting-applications
Chapter 1. Telemetry data collection
Chapter 1. Telemetry data collection The telemetry data collection feature helps in collecting and analyzing the telemetry data to improve your experience with Red Hat Developer Hub. This feature is enabled by default. Important As an administrator, you can disable the telemetry data collection feature based on your needs. For example, in an air-gapped environment, you can disable this feature to avoid needless outbound requests affecting the responsiveness of the RHDH application. For more details, see the Disabling telemetry data collection in RHDH section. Red Hat collects and analyzes the following data: Events of page visits and clicks on links or buttons. System-related information, for example, locale, timezone, user agent including browser and OS details. Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters. Anonymized IP addresses, recorded as 0.0.0.0 . Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application. With RHDH, you can customize the telemetry data collection feature and the telemetry Segment source configuration based on your needs. 1.1. Disabling telemetry data collection in RHDH To disable telemetry data collection, you must disable the analytics-provider-segment plugin either using the Helm Chart or the Red Hat Developer Hub Operator configuration. 1.1.1. Disabling telemetry data collection using the Operator You can disable the telemetry data collection feature by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. 
Procedure Perform one of the following steps: If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to true . If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to true . If you have not created the ConfigMap file, create it with the following YAML code: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource: # ... spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Save the configuration changes. 1.1.2. Disabling telemetry data collection using the Helm Chart You can disable the telemetry data collection feature by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Note You can also create a new Helm release by clicking the Create button and edit the configuration to disable telemetry. Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema global Dynamic plugins configuration. List of dynamic plugins that should be installed in the backstage application . 
Click the Add list of dynamic plugins that should be installed in the backstage application. link. Perform one of the following steps: If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field: ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value. Select the Disable the plugin checkbox. Click Upgrade . Using YAML view Perform one of the following steps: If you have not configured the plugin, add the following YAML code in your values.yaml Helm configuration file: # ... global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true # ... If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to true . Click Upgrade . 1.2. Enabling telemetry data collection in RHDH The telemetry data collection feature is enabled by default. However, if you have disabled the feature and want to re-enable it, you must enable the analytics-provider-segment plugin either by using the Helm Chart or the Red Hat Developer Hub Operator configuration. 1.2.1. Enabling telemetry data collection using the Operator You can enable the telemetry data collection feature by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. 
Procedure Perform one of the following steps: If you have created the dynamic-plugins-rhdh ConfigMap file but have not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to false . If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search for the plugin in the list of plugins and set its plugins.disabled parameter to false . If you have not created the ConfigMap file, create it with the following YAML code: kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource: # ... spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh # ... Save the configuration changes. 1.2.2. Enabling telemetry data collection using the Helm Chart You can enable the telemetry data collection feature by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Note You can also create a new Helm release by clicking the Create button and edit the configuration to enable telemetry. Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema global Dynamic plugins configuration. List of dynamic plugins that should be installed in the backstage application . 
Click the Add list of dynamic plugins that should be installed in the backstage application. link. Perform one of the following steps: If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field: ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value. Clear the Disable the plugin checkbox. Click Upgrade . Using YAML view Perform one of the following steps: If you have not configured the plugin, add the following YAML code in your Helm configuration file: # ... global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false # ... If you have configured the plugin, search for it in your Helm configuration and set the value of the plugins.disabled parameter to false . Click Upgrade . 1.3. Customizing telemetry Segment source The analytics-provider-segment plugin sends the collected telemetry data to Red Hat by default. However, you can configure a new Segment source that receives telemetry data based on your needs. For configuration, you need a unique Segment write key that points to the Segment source. Note By configuring a new Segment source, you can collect and analyze the same set of data that is mentioned in the Telemetry data collection section. You might also need to create your own telemetry data collection notice for your application users. 1.3.1. Customizing telemetry Segment source using the Operator You can configure integration with your Segment source by using the Operator. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. 
You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator. Procedure Add the following YAML code in your Backstage custom resource (CR): # ... spec: application: extraEnvs: envs: - name: SEGMENT_WRITE_KEY value: <segment_key> 1 # ... 1 Replace <segment_key> with a unique identifier for your Segment source. Save the configuration changes. 1.3.2. Customizing telemetry Segment source using the Helm Chart You can configure integration with your Segment source by using the Helm Chart. Prerequisites You have logged in as an administrator in the OpenShift Container Platform web console. You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart. Procedure In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases. Click the overflow menu on the Helm release that you want to use and select Upgrade . Use either the Form view or YAML view to edit the Helm configuration: Using Form view Expand Root Schema Backstage Chart Schema Backstage Parameters Backstage container environment variables . Click the Add Backstage container environment variables link. Enter the name and value of the Segment key. Click Upgrade . Using YAML view Add the following YAML code in your Helm configuration file: # ... upstream: backstage: extraEnvVars: - name: SEGMENT_WRITE_KEY value: <segment_key> 1 # ... 1 Replace <segment_key> with a unique identifier for your Segment source. Click Upgrade .
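The SEGMENT_WRITE_KEY entry configured above is an ordinary container environment variable; the following minimal shell sketch (with a placeholder value, not a real Segment write key) illustrates how the variable surfaces to the application once the Operator ( extraEnvs ) or the Helm Chart ( extraEnvVars ) injects it:

```shell
# Simulate the environment that the Operator or Helm Chart sets up in the
# Developer Hub container; "demo-write-key" is a placeholder value only.
export SEGMENT_WRITE_KEY="demo-write-key"

# The analytics plugin reads the key from the environment, equivalent to:
echo "Segment key in use: ${SEGMENT_WRITE_KEY}"
```

In the real deployment you never export the variable by hand; the platform injects it from the CR or Helm values shown above.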
[ "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true", "spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh", "global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: true", "kind: ConfigMap apiVersion: v1 metadata: name: dynamic-plugins-rhdh data: dynamic-plugins.yaml: | includes: - dynamic-plugins.default.yaml plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false", "spec: application: dynamicPluginsConfigMapName: dynamic-plugins-rhdh", "global: dynamic: plugins: - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment' disabled: false", "spec: application: extraEnvs: envs: - name: SEGMENT_WRITE_KEY value: <segment_key> 1", "upstream: backstage: extraEnvVars: - name: SEGMENT_WRITE_KEY value: <segment_key> 1" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/telemetry_data_collection/assembly-rhdh-telemetry
Chapter 5. Maintenance procedures
Chapter 5. Maintenance procedures 5.1. Updating the OS and HA cluster components Please refer to Recommended Practices for Applying Software Updates to a RHEL High Availability or Resilient Storage Cluster , for more information. 5.2. Updating the SAP HANA instances If the SAP HANA System Replication setup is managed using the HA cluster configuration described in this document, some additional steps are required before and after the actual update of the SAP HANA instances. Execute the following steps: Put the SAPHana resource in unmanaged mode. [root]# pcs resource unmanage SAPHana_RH1_02-clone Update the SAP HANA instances using the procedure provided by SAP. When the update of the SAP HANA instances has been completed and it has been verified that SAP HANA System Replication is working again, the status of the SAPHana resource needs to be refreshed to make sure the cluster is aware of the current state of the SAP HANA System Replication setup. [root]# pcs resource refresh SAPHana_RH1_02-clone When the HA cluster has correctly picked up the current status of the SAP HANA System Replication setup, put the SAPHana resource back into managed mode so that the HA cluster will be able to react to any issues in the SAP HANA System Replication setup again. [root]# pcs resource manage SAPHana_RH1_02-clone 5.3. Manually moving SAPHana resource to another node (SAP HANA System Replication takeover by HA cluster) A manual takeover of SAP HANA System Replication can be triggered by moving the promotable clone resource: [root]# pcs resource move SAPHana_RH1_02-clone Note pcs-0.10.8-1.el8 or later is required for this command to work correctly. Please refer to The pcs resource move command fails for a promotable clone unless "--master" is specified , for more information. With each pcs resource move command invocation, the HA cluster creates a location constraint to cause the resource to move. 
Please refer to Is there a way to manage constraints when running pcs resource move? , for more information. This constraint must be removed after it has been verified that the SAP HANA System Replication takeover has been completed in order to allow the HA cluster to manage the former primary SAP HANA instance again. To remove the constraint created by pcs resource move , use the following command: [root]# pcs resource clear SAPHana_RH1_02-clone Note What happens to the former SAP HANA primary instance after the takeover has been completed and the constraint has been removed depends on the setting of the AUTOMATED_REGISTER parameter of the SAPHana resource: If AUTOMATED_REGISTER=true , then the former SAP HANA primary instance will be registered as the new secondary and SAP HANA System Replication will become active again. If AUTOMATED_REGISTER=false , then it is up to the operator to decide what should happen with the former SAP HANA primary instance after the takeover.
[ "pcs resource unmanage SAPHana_RH1_02-clone", "pcs resource refresh SAPHana_RH1_02-clone", "pcs resource manage SAPHana_RH1_02-clone", "pcs resource move SAPHana_RH1_02-clone", "pcs resource clear SAPHana_RH1_02-clone" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/automating_sap_hana_scale-up_system_replication_using_the_rhel_ha_add-on/asmb_maintenance_proc_automating-sap-hana-scale-up-system-replication
Chapter 3. Alternative provisioning network methods
Chapter 3. Alternative provisioning network methods This section contains information about other methods that you can use to configure the provisioning network to accommodate routed spine-leaf with composable networks. 3.1. VLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VLAN tunnel across the L3 topology. For more information, see Figure 3.1, "VLAN provisioning network topology" . If you use a VLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, trunk a VLAN between the Top-of-Rack (ToR) leaf switches. In the following diagram, the StorageLeaf networks are presented to the Ceph storage and Compute nodes; the NetworkLeaf represents an example of any network that you want to compose. Figure 3.1. VLAN provisioning network topology 3.2. VXLAN Provisioning network In this example, the director deploys new overcloud nodes through the provisioning network and uses a VXLAN tunnel to span across the layer 3 topology. For more information, see Figure 3.2, "VXLAN provisioning network topology" . If you use a VXLAN provisioning network, the director DHCP servers can send DHCPOFFER broadcasts to any leaf. To establish this tunnel, configure VXLAN endpoints on the Top-of-Rack (ToR) leaf switches. Figure 3.2. VXLAN provisioning network topology
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/spine_leaf_networking/assembly_alternative-provisioning-network-methods
Providing Feedback on Red Hat Documentation
Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/upgrading_and_updating_red_hat_satellite/providing-feedback-on-red-hat-documentation_upgrade-guide
Chapter 6. Running containerized Tempest
Chapter 6. Running containerized Tempest This section contains information about running tempest from a container on the undercloud. You can run tempest against the overcloud or the undercloud. Containerized tempest requires the same resources as non-containerized tempest. 6.1. Preparing the Tempest container Complete the following steps to download and configure your tempest container: Change to the /home/stack directory: Download the tempest container: This container includes all tempest plugins. Running tempest tests globally with this container includes tests for plugins. For example, if you run the tempest run --regex '(.*)' command, tempest runs all plugin tests. These tempest tests fail if your deployment does not contain configuration for all plugins. Run the tempest list-plugins command to view all installed plugins. To exclude tests, you must include the tests that you want to exclude in a blacklist file. For more information, see Chapter 5, Using Tempest . Create directories to use for exchanging data between the host machine and the container: Copy the necessary files to the container_tempest directory. This directory is the file source for the container: List the available container images: Create an alias to simplify command entry. Ensure that you use absolute paths when mounting the directories: To get a list of available tempest plugins in the container, run the following command: 6.2. Running containerized Tempest inside the container Create a tempest script that you can execute within the container to generate the tempest.conf file and run the tempest tests. The script performs the following actions: Exit immediately on errors with set -e . Source the overcloudrc file if you want to run tempest against the overcloud. Source the stackrc file if you want to run tempest against the undercloud. Run tempest init to create a tempest workspace. Use the shared directory so that the files are also accessible from the host. 
Change directory to tempest_workspace Export the TEMPESTCONF environment variable for ease of use at a later stage. Execute discover-tempest-config to generate the tempest.conf file. For more information about the options that you can include in the discover-tempest-config command, run discover-tempest-config --help . Set --out to /home/stack/tempest_workspace/tempest.conf so that the tempest.conf file is accessible from the host machine. Set --deployer-input to point to the tempest-deployer-input.conf file in the shared directory. Run tempest tests. This example script runs the smoke test tempest run --smoke . If you already have a tempest.conf file and you want only to run the tempest tests, omit TEMPESTCONF from the script and replace it with a command to copy your tempest.conf file from the container_tempest directory to the tempest_workspace/etc directory: Set executable privileges on the tempest_script.sh script: Run the tempest script from the container using the alias that you created in a previous step: Inspect the .stestr directory for information about the test results. If you want to rerun the tempest tests, you must first remove and recreate the tempest workspace: 6.3. Running Containerized Tempest outside the container The container generates or retrieves the tempest.conf file and runs tests. You can perform these operations from outside the container: Source the overcloudrc file if you want to run tempest against the overcloud. Source the stackrc file if you want to run tempest against the undercloud: Run tempest init to create a tempest workspace. Use the shared directory so that the files are also accessible from the host: Generate the tempest.conf file: For more information about the options that you can include in the discover-tempest-config command, run discover-tempest-config --help . Execute tempest tests. 
For example, run the following command to execute the tempest smoke test using the alias you created in a previous step: Inspect the .stestr directory for information about the test results. If you want to rerun the tempest tests, you must first remove and recreate the tempest workspace:
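The script-creation step earlier in this chapter relies on a quoted heredoc and set -e ; the following is a reduced, locally runnable sketch of that pattern, with the tempest commands replaced by an echo and the path moved to /tmp (both assumptions for illustration only):

```shell
# Reduced sketch of the tempest_script.sh creation step: a quoted heredoc
# (<<'EOF') writes the script verbatim without expanding variables, set -e
# makes it abort on the first failing command, and chmod +x makes it
# executable. /tmp/container_tempest stands in for /home/stack/container_tempest.
mkdir -p /tmp/container_tempest
cat <<'EOF' > /tmp/container_tempest/tempest_script.sh
#!/bin/bash
set -e
echo "tempest smoke tests would run here"
EOF
chmod +x /tmp/container_tempest/tempest_script.sh
/tmp/container_tempest/tempest_script.sh
```

In the real procedure the script body sources overcloudrc or stackrc and runs discover-tempest-config and tempest run --smoke instead of the echo.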
[ "cd /home/stack", "podman pull registry.redhat.io/rhosp-rhel8/openstack-tempest:16.0", "mkdir container_tempest tempest_workspace", "cp stackrc overcloudrc tempest-deployer-input.conf container_tempest", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/rhosp-rhel8/openstack-tempest latest 881f7ac24d8f 10 days ago 641 MB", "alias podman-tempest=\"podman run -i --privileged=true -v \"USD(pwd)\"/container_tempest:/home/stack/container_tempest:z -v \"USD(pwd)\"/tempest_workspace:/home/stack/tempest_workspace:z registry.redhat.io/rhosp-rhel8/openstack-tempest:16.0 /bin/bash\"", "podman-tempest -c \"rpm -qa | grep tempest\"", "cat <<'EOF'>> /home/stack/container_tempest/tempest_script.sh set -e source /home/stack/container_tempest/overcloudrc tempest init /home/stack/tempest_workspace pushd /home/stack/tempest_workspace export TEMPESTCONF=\"/usr/bin/discover-tempest-config\" USDTEMPESTCONF --out /home/stack/tempest_workspace/etc/tempest.conf --deployer-input /home/stack/container_tempest/tempest-deployer-input.conf --debug --create object-storage.reseller_admin ResellerAdmin tempest run --smoke EOF", "cp /home/stack/container_tempest/tempest.conf /home/stack/tempest_workspace/etc/tempest.conf", "chmod +x container_tempest/tempest_script.sh", "podman-tempest -c 'set -e; /home/stack/container_tempest/tempest_script.sh'", "sudo rm -rf /home/stack/tempest_workspace mkdir /home/stack/tempest_workspace", "source /home/stack/container_tempest/overcloudrc", "tempest init /home/stack/tempest_workspace", "discover-tempest-config --out /home/stack/tempest_workspace/tempest.conf --deployer-input /home/stack/container_tempest/tempest-deployer-input-conf --debug --create object-storage.reseller_admin ResellerAdmin", "podman-tempest -c \"tempest run --smoke\"", "sudo rm -rf /home/stack/tempest_workspace mkdir /home/stack/tempest_workspace" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/openstack_integration_test_suite_guide/chap-running-containerized-tempest
9.12.2. Upgrading Using the Installer
9.12.2. Upgrading Using the Installer Note In general, Red Hat recommends that you keep user data on a separate /home partition and perform a fresh installation. For more information on partitions and how to set them up, refer to Section 9.13, "Disk Partitioning Setup" . If you choose to upgrade your system using the installation program, any software not provided by Red Hat Enterprise Linux that conflicts with Red Hat Enterprise Linux software is overwritten. Before you begin an upgrade this way, make a list of your system's current packages for later reference: After installation, consult this list to discover which packages you may need to rebuild or retrieve from sources other than Red Hat. Next, make a backup of any system configuration data: Make a complete backup of any important data before performing an upgrade. Important data may include the contents of your entire /home directory as well as content from services such as an Apache, FTP, or SQL server, or a source code management system. Although upgrades are not destructive, if you perform one improperly there is a small possibility of data loss. Warning Note that the above examples store backup materials in a /home directory. If your /home directory is not a separate partition, you should not follow these examples verbatim! Store your backups on another device such as CD or DVD discs or an external hard disk. For more information on completing the upgrade process later, refer to Section 35.2, "Finishing an Upgrade" .
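The backup commands above follow a simple date-stamped tar pattern; here is a minimal runnable sketch of that pattern, assuming a scratch directory under /tmp in place of /etc and /home so it runs without root (all names are illustrative):

```shell
# Date-stamped tar backup pattern from the upgrade preparation step,
# pointed at a throwaway directory instead of /etc so it needs no privileges.
mkdir -p /tmp/demo-etc
echo "example setting" > /tmp/demo-etc/app.conf

# Create a compressed archive named after today's date (e.g. etc-2024-01-31.tar.gz):
tar czf "/tmp/etc-$(date +%F).tar.gz" -C /tmp demo-etc

# List the archive contents to confirm the backup captured the file:
tar tzf "/tmp/etc-$(date +%F).tar.gz"
```

The -C /tmp option archives demo-etc with a relative path, which makes restoring into an arbitrary location straightforward.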
[ "rpm -qa --qf '%{NAME} %{VERSION}-%{RELEASE} %{ARCH}\\n' > ~/old-pkglist.txt", "su -c 'tar czf /tmp/etc-`date +%F`.tar.gz /etc' su -c 'mv /tmp/etc-*.tar.gz /home'" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/sn-upgrade-tree-x86
Chapter 7. Known issues
Chapter 7. Known issues 7.1. OpenShift Welcome page and Ansible content creator page fail to load There is a known issue affecting workspaces using the Ansible sample and a self-signed TLS certificate. The OpenShift Welcome and the Ansible content creator tabs are empty and the following message appears: "Error loading webview: Error: Could not register service worker: SecurityError: Failed to register a ServiceWorker for scope." There is a workaround available. Workaround Add the self-signed TLS certificate to the browser's trusted root authority by following this procedure . Additional resources CRW-7252 7.2. Issues with starting a new workspace from a URL that points to a branch of a repository that doesn't have a devfile There is a known issue affecting repositories without a devfile.yaml file. If you start a new workspace from a branch of such repository, the default branch (e.g. 'main') is used for project cloning instead of the expected branch. Additional resources CRW-6860 7.3. Refresh token mode causes cyclic reload of the workspace start page There is a known issue when experimental refresh token mode is applied using the CHE_FORCE_REFRESH_PERSONAL_ACCESS_TOKEN property for the GitHub and Microsoft Azure DevOps OAuth providers. This causes the workspace starts to reload the dashboard cyclically, creating a new personal access token on each page restart. The refresh token mode works correctly for 'GitLab' and 'BitBucket' OAuth providers. Additional resources CRW-6859 7.4. FIPS compliance update There's a known issue with FIPS compliance that results in certain cryptographic modules not being FIPS-validated. Below is a list of requirements and limitations for using FIPS with OpenShift Dev Spaces: Required cluster and operator updates Update your Red Hat OpenShift Container Platform installation to the latest z-stream update for 4.11, 4.12, or 4.13 as appropriate. If you do not already have FIPS enabled, you will need to uninstall and reinstall. 
Once the cluster is up and running, install OpenShift Dev Spaces 3.7.1 (3.7-264) and verify that the latest DevWorkspace operator bundle 0.21.2 (0.21-7) or newer is also installed and updated. See https://catalog.redhat.com/software/containers/devworkspace/devworkspace-operator-bundle/60ec9f48744684587e2186a3 Golang compiler in UDI image The Universal Developer Image (UDI) container includes a golang compiler, which was built without the CGO_ENABLED=1 flag. The check-payload scanner ( https://github.com/openshift/check-payload ) will throw an error, but this can be safely ignored provided that anything you build with this compiler sets the correct flag CGO_ENABLED=1 and does NOT use extldflags -static or -tags no_openssl . The resulting binaries can be scanned and should pass without error. Statically linked binaries You can find statically linked binaries not related to cryptography in these two containers: code-rhel8 and idea-rhel8. As they are not related to cryptography, they do not affect FIPS compliance. Helm support for FIPS The UDI container includes the helm binary, which was not compiled with FIPS support. If you are in a FIPS environment, do not use helm . Additional resources CRW-4598 7.5. Debugger does not work in the .NET sample Currently, the debugger in Microsoft Visual Studio Code - Open Source does not work in the .NET sample. Workaround Use a different image from the following sources: Custom UBI-9 based Dockerfile devfile.yaml Additional resources CRW-3563
null
https://docs.redhat.com/en/documentation/red_hat_openshift_dev_spaces/3.16/html/3.16.1_release_notes_and_known_issues/known-issues
Chapter 6. Fixed security issues
Chapter 6. Fixed security issues This section lists security issues fixed in Red Hat Developer Hub 1.3. 6.1. Red Hat Developer Hub 1.3.5 6.1.1. Red Hat Developer Hub dependency updates CVE-2025-22150 A flaw was found in the undici package for Node.js. Undici uses Math.random() to choose the boundary for a multipart/form-data request. It is known that the output of Math.random() can be predicted if several of its generated values are known. If an app has a mechanism that sends multipart requests to an attacker-controlled website, it can leak the necessary values. Therefore, an attacker can tamper with the requests going to the backend APIs if certain conditions are met. 6.2. Red Hat Developer Hub 1.3.4 6.2.1. Red Hat Developer Hub dependency updates CVE-2024-45338 A flaw was found in golang.org/x/net/html. This flaw allows an attacker to craft input to the parse functions that would be processed non-linearly with respect to its length, resulting in extremely slow parsing. This issue can cause a denial of service. CVE-2024-52798 A flaw was found in path-to-regexp. path-to-regexp turns path strings into regular expressions. In certain cases, path-to-regexp will output a regular expression that can be exploited to cause poor performance. CVE-2024-55565 nanoid (aka Nano ID) before 5.0.9 mishandles non-integer values. Version 3.3.8 also contains the fix. CVE-2024-56201 A flaw was found in the Jinja2 package. A bug in the Jinja compiler allows an attacker that controls both the content and filename of a template to execute arbitrary Python code, regardless of Jinja's sandbox being used. An attacker needs to be able to control both the filename and the contents of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications that execute untrusted templates where the template author can also choose the template filename. CVE-2024-56326 A flaw was found in the Jinja package. 
In affected versions of Jinja, an oversight in how the Jinja sandboxed environment detects calls to str.format allows an attacker that controls the content of a template to execute arbitrary Python code. To exploit the vulnerability, an attacker needs to control the content of a template. Whether that is the case depends on the type of application using Jinja. This vulnerability impacts users of applications that execute untrusted templates. Jinja's sandbox does catch calls to str.format and ensures they don't escape the sandbox. However, it is possible to store a reference to a malicious string's format method and then pass that to a filter that calls it. No such filters are built into Jinja but could be present through custom filters in an application. After the fix, such indirect calls are also handled by the sandbox. 6.2.2. RHEL 9 platform RPM updates CVE-2024-9287 A vulnerability has been found in the Python venv module and CLI. Path names provided when creating a virtual environment were not quoted properly, allowing the creator to inject commands into virtual environment "activation" scripts, for example, "source venv/bin/activate". This flaw allows attacker-controlled virtual environments to run commands when the virtual environment is activated. CVE-2024-11168 A flaw was found in Python. The urllib.parse.urlsplit() and urlparse() functions improperly validated bracketed hosts ( [] ), allowing hosts that weren't IPv6 or IPvFuture compliant. This behavior was not conformant to RFC 3986 and was potentially vulnerable to server-side request forgery (SSRF) if a URL is processed by more than one URL parser. CVE-2024-34156 A flaw was found in the encoding/gob package of the Golang standard library. Calling Decoder.Decode on a message that contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635. 
CVE-2024-46713 In the Linux kernel, the following vulnerability has been resolved: perf/aux: Fix AUX buffer serialization CVE-2024-50208 In the Linux kernel, the following vulnerability has been resolved: RDMA/bnxt_re: Fix a bug while setting up Level-2 PBL pages CVE-2024-50252 In the Linux kernel, the following vulnerability has been resolved: mlxsw: spectrum_ipip: Fix memory leak when changing remote IPv6 address CVE-2024-53122 A divide by zero flaw was found in the Linux kernel's Multipath TCP (MPTCP). This issue could allow a remote user to crash the system. 6.3. Red Hat Developer Hub 1.3.3 6.3.1. Red Hat Developer Hub dependency updates CVE-2024-21538 A Regular Expression Denial of Service (ReDoS) vulnerability was found in the cross-spawn package for Node.js. Due to improper input sanitization, an attacker can increase CPU usage and crash the program with a large, specially crafted string. 6.3.2. RHEL 9 platform RPM updates CVE-2024-0450 A flaw was found in the Python/CPython 'zipfile' that can allow a zip-bomb type of attack. An attacker may craft a zip file format, leading to a Denial of Service when processed. CVE-2024-2236 A timing-based side-channel flaw was found in libgcrypt's RSA implementation. This issue may allow a remote attacker to initiate a Bleichenbacher-style attack, which can lead to the decryption of RSA ciphertexts. CVE-2024-3596 A vulnerability in the RADIUS (Remote Authentication Dial-In User Service) protocol allows attackers to forge authentication responses when the Message-Authenticator attribute is not enforced. This issue arises from a cryptographically insecure integrity check using MD5, enabling attackers to spoof UDP-based RADIUS response packets. This can result in unauthorized access by modifying an Access-Reject response to an Access-Accept response, thereby compromising the authentication process. CVE-2024-3727 A flaw was found in the github.com/containers/image library. 
This flaw allows attackers to trigger unexpected authenticated registry accesses on behalf of a victim user, causing resource exhaustion, local path traversal, and other attacks. CVE-2024-6104 A vulnerability was found in go-retryablehttp. The package may suffer from a lack of input sanitization by not cleaning up URL data when writing to the logs. This issue could expose sensitive authentication information. CVE-2024-8088 A flaw was found in Python's zipfile module. When iterating over the entries of a zip archive, the process can enter into an infinite loop state and become unresponsive. This flaw allows an attacker to craft a malicious ZIP archive, leading to a denial of service from the application consuming the zipfile module. Only applications that handle user-controlled zip archives are affected by this vulnerability. CVE-2024-24788 A flaw was found in the net package of the Go stdlib. When a malformed DNS message is received as a response to a query, the Lookup functions within the net package can get stuck in an infinite loop. This issue can lead to resource exhaustion and denial of service (DoS) conditions. CVE-2024-24791 A flaw was found in Go. The net/http module mishandles specific server responses from HTTP/1.1 client requests. This issue may render a connection invalid and cause a denial of service. CVE-2024-30203 A flaw was found in Emacs. When Emacs is used as an email client, inline MIME attachments are considered to be trusted by default, allowing a crafted LaTeX document to exhaust the disk space or the inodes allocated for the partition where the /tmp directory is located. This issue possibly results in a denial of service. CVE-2024-30204 A flaw was found in Emacs. When Emacs is used as an email client, a preview of a crafted LaTeX document attached to an email can exhaust the disk space or the inodes allocated for the partition where the /tmp directory is located. This issue possibly results in a denial of service. 
CVE-2024-30205 A flaw was found in Emacs. Org mode considers the content of remote files, such as files opened with TRAMP on remote systems, to be trusted, resulting in arbitrary code execution. CVE-2024-42283 In the Linux kernel, the following vulnerability has been resolved: net: nexthop: Initialize all fields in dumped nexthops CVE-2024-45005 In the Linux kernel, the following vulnerability has been resolved: KVM: s390: fix validity interception issue when gisa is switched off CVE-2024-46824 In the Linux kernel, the following vulnerability has been resolved: iommufd: Require drivers to supply the cache_invalidate_user ops CVE-2024-46858 In the Linux kernel, the following vulnerability has been resolved: mptcp: pm: Fix uaf in __timer_delete_sync CVE-2024-50602 A security issue was found in Expat (libexpat). A crash can be triggered in the XML_ResumeParser function due to XML_StopParser's ability to stop or suspend an unstarted parser, which can lead to a denial of service. 6.4. Red Hat Developer Hub 1.3.1 6.4.1. Red Hat Developer Hub dependency updates CVE-2024-21536 A flaw was found in the http-proxy-middleware package. Affected versions of this package are vulnerable to denial of service (DoS) due to an UnhandledPromiseRejection error thrown by micromatch. This flaw allows an attacker to kill the Node.js process and crash the server by requesting certain paths. CVE-2024-37890 A flaw was found in the Node.js WebSocket library (ws). A request with several headers exceeding the 'server.maxHeadersCount' threshold could be used to crash a ws server, leading to a denial of service. CVE-2024-45590 A flaw was found in body-parser. This vulnerability causes denial of service via a specially crafted payload when the URL encoding is enabled. 6.4.2. 
RHEL 9 platform RPM updates
CVE-2021-47385 In the Linux kernel, the following vulnerability has been resolved: hwmon: (w83792d) Fix NULL pointer dereference by removing unnecessary structure field
CVE-2023-28746 A vulnerability was found in the microcode of some Intel Atom processors. This issue may allow a malicious actor to achieve a local information disclosure, impacting the data confidentiality of the targeted system.
CVE-2023-52658 CVE-2023-52658 is a vulnerability in the Linux kernel's Mellanox MLX5 driver, specifically related to the switchdev mode. A commit intended to block entering switchdev mode due to namespace inconsistencies inadvertently caused system crashes. To address this, the problematic commit was reverted, restoring stability. Users should update their Linux kernel to a version that includes this reversion to ensure reliable operation.
CVE-2024-6232 A regular expression denial of service (ReDoS) vulnerability was found in Python's tarfile module. Due to excessive backtracking while tarfile parses headers, an attacker may be able to trigger a denial of service via a specially crafted tar archive.
CVE-2024-9355 A vulnerability was found in Golang FIPS OpenSSL. This flaw allows a malicious user to randomly cause an uninitialized buffer length variable with a zeroed buffer to be returned in FIPS mode. It may also be possible to force a false positive match between non-equal hashes when comparing a trusted computed hmac sum to an untrusted input sum if an attacker can send a zeroed buffer in place of a pre-computed sum. It is also possible to force a derived key to be all zeros instead of an unpredictable value. This may have follow-on implications for the Go TLS stack.
CVE-2024-27403 In the Linux kernel, the following vulnerability has been resolved: netfilter: nft_flow_offload: reset dst in route object after setting up flow
CVE-2024-34156 A flaw was found in the encoding/gob package of the Golang standard library.
Calling Decoder.Decode on a message that contains deeply nested structures can cause a panic due to stack exhaustion. This is a follow-up to CVE-2022-30635.
CVE-2024-35989 This is a vulnerability in the Linux kernel's Data Movement Accelerator (DMA) engine, specifically affecting the Intel Data Streaming Accelerator (IDXD) driver. The issue arises during the removal (rmmod) of the idxd driver on systems with only one active CPU. In such scenarios, the driver's cleanup process attempts to migrate performance monitoring unit (PMU) contexts to another CPU. However, with no other CPUs available, this leads to a kernel oops, a serious error causing the system to crash.
CVE-2024-36889 In the Linux kernel, the following vulnerability has been resolved: mptcp: ensure snd_nxt is properly initialized on connect
CVE-2024-36978 An out-of-bounds write flaw was found in the Linux kernel's multiq qdisc functionality. This vulnerability allows a local user to crash or potentially escalate their privileges on the system.
CVE-2024-38556 In the Linux kernel, the following vulnerability has been resolved: net/mlx5: Add a timeout to acquire the command queue semaphore
CVE-2024-39483 In the Linux kernel, the following vulnerability has been resolved: KVM: SVM: WARN on vNMI + NMI window iff NMIs are outright masked
CVE-2024-39502 In the Linux kernel, the following vulnerability has been resolved: ionic: fix use after netif_napi_del()
CVE-2024-40959 In the Linux kernel, the following vulnerability has been resolved: xfrm6: check ip6_dst_idev() return value in xfrm6_get_saddr()
CVE-2024-42079 In the Linux kernel, the following vulnerability has been resolved: gfs2: Fix NULL pointer dereference in gfs2_log_flush
CVE-2024-42272 In the Linux kernel, the following vulnerability has been resolved: sched: act_ct: take care of padding in struct zones_ht_key
CVE-2024-42284 A flaw was found in Linux kernel tipc.
tipc_udp_addr2str() does not return a nonzero value when UDP media address is invalid, which can result in a buffer overflow in tipc_media_addr_printf().
6.5. Red Hat Developer Hub 1.3.0
6.5.1. Red Hat Developer Hub dependency updates
CVE-2024-21529 A flaw was found in the dset package. Affected versions of this package are vulnerable to Prototype Pollution via the dset function due to improper user input sanitization. This vulnerability allows the attacker to inject a malicious object property using the built-in Object property __proto__ , which is recursively assigned to all the objects in the program.
CVE-2024-24790 A flaw was found in the Go language standard library net/netip. The method Is*() (IsPrivate(), IsPublic(), etc.) does not behave properly when working with IPv6 addresses mapped to IPv4 addresses. The unexpected behavior can lead to integrity and confidentiality issues, specifically when these methods are used to control access to resources or data.
CVE-2024-24791 A flaw was found in Go. The net/http module mishandles specific server responses from HTTP/1.1 client requests. This issue may render a connection invalid and cause a denial of service.
CVE-2024-37891 A flaw was found in urllib3, an HTTP client library for Python. In certain configurations, urllib3 does not treat the Proxy-Authorization HTTP header as one carrying authentication material. This issue results in not stripping the header on cross-origin redirects.
CVE-2024-39008 A flaw was found in the fast-loops Node.js package. This flaw allows an attacker to alter the behavior of all objects inheriting from the affected prototype by passing arguments crafted with the built-in property __proto__ to the objectMergeDeep function. This issue can potentially lead to a denial of service, remote code execution, or cross-site scripting.
CVE-2024-39249 A flaw was found in the async Node.js package.
A regular expression denial of service (ReDoS) attack can potentially be triggered via the autoinject function while parsing specially crafted input.
CVE-2024-41818 A regular expression denial of service (ReDoS) flaw was found in fast-xml-parser in the currency.js script. By sending a specially crafted regex input, a remote attacker could cause a denial of service condition.
CVE-2024-43788 A DOM Clobbering vulnerability was found in Webpack via AutoPublicPathRuntimeModule . DOM Clobbering is a type of code-reuse attack where the attacker first embeds a piece of non-script code through seemingly benign HTML markup in the webpage, for example, through a post or comment, and leverages the gadgets (pieces of JS code) living in the existing JavaScript code to transform it into executable code. This vulnerability can lead to cross-site scripting (XSS) on websites that include Webpack-generated files and allow users to inject certain scriptless HTML tags with improperly sanitized name or ID attributes.
CVE-2024-43799 A flaw was found in the Send library. This vulnerability allows remote code execution via untrusted input passed to the SendStream.redirect() function.
CVE-2024-43800 A flaw was found in serve-static. This issue may allow the execution of untrusted code via passing sanitized yet untrusted user input to redirect().
6.5.2. RHEL 9 platform RPM updates
CVE-2023-52439 A flaw was found in the Linux kernel's uio subsystem. A use-after-free memory flaw in the uio_open functionality allows a local user to crash or escalate their privileges on the system.
CVE-2023-52884 In the Linux kernel, the following vulnerability has been resolved: Input: cyapa - add missing input core locking to suspend/resume functions
CVE-2024-6119 A flaw was found in OpenSSL. Applications performing certificate name checks (e.g., TLS clients checking server certificates) may attempt to read an invalid memory address, resulting in abnormal termination of the application process.
CVE-2024-26739 A use-after-free flaw was found in net/sched/act_mirred.c in the Linux kernel. This may result in a crash. CVE-2024-26929 A flaw was found in the qla2xxx module in the Linux kernel. Under some conditions, the fcport can be freed twice due to a missing check of whether fcport is allocated, causing a double free and a system crash, resulting in a denial of service. CVE-2024-26930 A vulnerability was found in the Linux kernel. A potential double-free in the pointer ha->vp_map exists in the Linux kernel in drivers/scsi/qla2xxx/qla_os.c. CVE-2024-26931 A flaw was found in the qla2xxx module in the Linux kernel. A NULL pointer dereference can be triggered when the system is under memory stress and the driver cannot allocate memory to handle the error recovery of cable pull, causing a system crash and a denial of service. CVE-2024-26947 A flaw was found in the Linux kernel's ARM memory management functionality, where certain memory layouts cause a kernel panic. This flaw allows an attacker who can specify or alter memory layouts to cause a denial of service. CVE-2024-26991 A flaw was found in the Linux Kernel. A lpage_info overflow can occur when checking attributes. This may lead to a crash. CVE-2024-27022 A flaw was found in the Linux kernel. A race condition can occur when the fork system call is called due to improper locking, triggering a warning, impacting system stability, and resulting in a denial of service. CVE-2024-35895 CVE-2024-35895 addresses a vulnerability in the Linux kernel's Berkeley Packet Filter (BPF) subsystem, specifically within the sockmap feature. The issue arises when BPF tracing programs, which can execute in various interrupt contexts, attempt to delete elements from sockmap or sockhash maps. This operation involves acquiring locks that are not safe for use in hard interrupt contexts, leading to potential deadlocks due to lock inversion. 
BPF tracing programs may delete elements from sockmap/sockhash maps while running in interrupt contexts where the required locks are not hardirq-safe, causing possible deadlocks. CVE-2024-36016 A vulnerability was found in the Linux kernel's n_gsm driver, affecting the tty subsystem. It occurs when switching between basic and advanced option modes in GSM multiplexing, leading to potential out-of-bounds memory writes. This happens because certain state variables, like gsm->len and gsm->state , are not properly reset during mode changes. The issue could result in memory corruption. CVE-2024-36899 In the Linux kernel, the following vulnerability has been resolved: gpiolib: cdev: Fix use after free in lineinfo_changed_notify CVE-2024-38562 In the Linux kernel, the following vulnerability has been resolved: wifi: nl80211: Avoid address calculations via out of bounds array indexing CVE-2024-38570 In the Linux kernel, the following vulnerability has been resolved: gfs2: Fix potential glock use-after-free on unmount CVE-2024-38573 A NULL pointer dereference flaw was found in cppc_cpufreq_get_rate() in the Linux kernel. This issue may result in a crash. CVE-2024-38601 In the Linux kernel, the following vulnerability has been resolved: ring-buffer: Fix a race between readers and resize checks CVE-2024-38615 In the Linux kernel, the following vulnerability has been resolved: cpufreq: exit() callback is optional CVE-2024-39331 A flaw was found in Emacs. Arbitrary shell commands can be executed without prompting when an Org mode file is opened or when the Org mode is enabled, when Emacs is used as an email client, this issue can be triggered when previewing email attachments. CVE-2024-40984 In the Linux kernel, the following vulnerability has been resolved: ACPICA: Revert "ACPICA: avoid Info: mapping multiple BARs. Your kernel is fine." CVE-2024-41071 An out-of-bounds buffer overflow has been found in the Linux kernel's mac80211 subsystem when scanning for SSIDs. 
Address calculation using out-of-bounds array indexing could allow an attacker to craft an exploit resulting in the complete compromise of a system.
CVE-2024-42225 A potential flaw was found in the Linux kernel's MediaTek WiFi, where it was reusing uninitialized data. This flaw allows a local user to potentially gain unauthorized access to some data.
CVE-2024-42246 In the Linux kernel, the following vulnerability has been resolved: net, sunrpc: Remap EPERM in case of connection failure in xs_tcp_setup_socket
CVE-2024-45490 A flaw was found in libexpat's xmlparse.c component. This vulnerability allows an attacker to cause improper handling of XML data by providing a negative length value to the XML_ParseBuffer function.
CVE-2024-45491 An issue was found in libexpat's internal dtdCopy function in xmlparse.c. It can have an integer overflow for nDefaultAtts on 32-bit platforms where UINT_MAX equals SIZE_MAX.
CVE-2024-45492 A flaw was found in libexpat's internal nextScaffoldPart function in xmlparse.c. It can have an integer overflow for m_groupSize on 32-bit platforms where UINT_MAX equals SIZE_MAX.
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/release_notes/fixed-security-issues
Chapter 34. Scanning Storage Interconnects
Certain commands allow you to reset, scan, or both reset and scan one or more interconnects, which potentially adds and removes multiple devices in one operation. This type of scan can be disruptive, as it can cause delays while I/O operations time out, and remove devices unexpectedly. Red Hat recommends using interconnect scanning only when necessary. Observe the following restrictions when scanning storage interconnects: All I/O on the affected interconnects must be paused and flushed before executing the procedure, and the results of the scan checked before I/O is resumed. As with removing a device, interconnect scanning is not recommended when the system is under memory pressure. To determine the level of memory pressure, run the vmstat 1 100 command. Interconnect scanning is not recommended if free memory is less than 5% of the total memory in more than 10 samples per 100. Also, interconnect scanning is not recommended if swapping is active (non-zero si and so columns in the vmstat output). The free command can also display the total memory. The following commands can be used to scan storage interconnects: echo "1" > /sys/class/fc_host/host N /issue_lip (Replace N with the host number.) This operation performs a Loop Initialization Protocol ( LIP ), scans the interconnect, and causes the SCSI layer to be updated to reflect the devices currently on the bus. Essentially, an LIP is a bus reset, and causes device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect. Note that issue_lip is an asynchronous operation. The command can complete before the entire scan has completed. You must monitor /var/log/messages to determine when issue_lip finishes. The lpfc , qla2xxx , and bnx2fc drivers support issue_lip . For more information about the API capabilities supported by each driver in Red Hat Enterprise Linux, see Table 26.1, "Fibre-Channel API Capabilities" .
/usr/bin/rescan-scsi-bus.sh The /usr/bin/rescan-scsi-bus.sh script was introduced in Red Hat Enterprise Linux 5.4. By default, this script scans all the SCSI buses on the system, and updates the SCSI layer to reflect new devices on the bus. The script provides additional options to allow device removal, and the issuing of LIPs. For more information about this script, including known issues, see Chapter 38, Adding/Removing a Logical Unit Through rescan-scsi-bus.sh . echo "- - -" > /sys/class/scsi_host/host h /scan This is the same command as described in Chapter 31, Adding a Storage Device or Path to add a storage device or path. In this case, however, the channel number, SCSI target ID, and LUN values are replaced by wildcards. Any combination of identifiers and wildcards is allowed, so you can make the command as specific or broad as needed. This procedure adds LUNs, but does not remove them. modprobe --remove driver-name , modprobe driver-name Running the modprobe --remove driver-name command followed by the modprobe driver-name command completely re-initializes the state of all interconnects controlled by the driver. Despite being rather extreme, using the described commands can be appropriate in certain situations. The commands can be used, for example, to restart the driver with a different module parameter value.
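The memory-pressure guidance above can be sketched as a small shell check to run before any interconnect scan. The helper name and the use of /proc/meminfo (instead of sampling vmstat 1 100 by hand) are my own assumptions, not commands taken from this guide:

```shell
#!/bin/sh
# Hedged sketch of the pre-scan memory check described above: avoid
# interconnect scanning when free memory is below 5% of total memory.
# Swap activity (non-zero si/so columns in `vmstat 1 100`) should still
# be checked separately, as the text recommends.
mem_pressure_ok() {
    # Compare MemAvailable against 5% of MemTotal (both values in kB).
    awk '/^MemTotal:/ {t=$2} /^MemAvailable:/ {a=$2}
         END {exit !(t > 0 && a * 100 >= t * 5)}' /proc/meminfo
}

if mem_pressure_ok; then
    echo "memory headroom OK; interconnect scan may proceed"
else
    echo "memory pressure high; postpone the interconnect scan"
fi
```

Pausing and flushing I/O on the affected interconnects, as stated above, remains a prerequisite regardless of the memory check.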
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/scanning-storage-interconnects
Chapter 41. ResourceTemplate schema reference
Used in: CruiseControlTemplate , EntityOperatorTemplate , JmxTransTemplate , KafkaBridgeTemplate , KafkaClusterTemplate , KafkaConnectTemplate , KafkaExporterTemplate , KafkaMirrorMakerTemplate , KafkaNodePoolTemplate , KafkaUserTemplate , ZookeeperClusterTemplate
Property: metadata
Property type: MetadataTemplate
Description: Metadata applied to the resource.
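As an illustration of where this schema is used, the sketch below sets template metadata in a Kafka custom resource. The placement under template.serviceAccount and the label and annotation values are assumptions for illustration only, not taken from this reference page:

```yaml
# Hypothetical Kafka custom resource fragment. The template.serviceAccount
# field is assumed to be a ResourceTemplate-typed property; its metadata
# block follows the MetadataTemplate schema referenced above.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    template:
      serviceAccount:
        metadata:
          labels:
            environment: production
          annotations:
            team: platform
```

Consult the template schema of the parent resource for the exact list of ResourceTemplate-typed fields it supports.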
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-resourcetemplate-reference
Chapter 1. Setting up the Apache HTTP web server
1.1. Introduction to the Apache HTTP web server
A web server is a network service that serves content to a client over the web. This typically means web pages, but any other documents can be served as well. Web servers are also known as HTTP servers, as they use the Hypertext Transfer Protocol ( HTTP ). The Apache HTTP Server , httpd , is an open source web server developed by the Apache Software Foundation . If you are upgrading from a previous release of Red Hat Enterprise Linux, you have to update the httpd service configuration accordingly. This section reviews some of the newly added features, and guides you through the update of prior configuration files.
1.2. Notable changes in the Apache HTTP Server
The Apache HTTP Server has been updated from version 2.4.6 in RHEL 7 to version 2.4.37 in RHEL 8. This updated version includes several new features, but maintains backwards compatibility with the RHEL 7 version at the level of configuration and Application Binary Interface (ABI) of external modules. New features include: HTTP/2 support is now provided by the mod_http2 package, which is a part of the httpd module. systemd socket activation is supported. See httpd.socket(8) man page for more details. Multiple new modules have been added: mod_proxy_hcheck - a proxy health-check module mod_proxy_uwsgi - a Web Server Gateway Interface (WSGI) proxy mod_proxy_fdpass - provides support for passing the socket of the client to another process mod_cache_socache - an HTTP cache using, for example, memcache backend mod_md - an ACME protocol SSL/TLS certificate service The following modules now load by default: mod_request mod_macro mod_watchdog A new subpackage, httpd-filesystem , has been added, which contains the basic directory layout for the Apache HTTP Server including the correct permissions for the directories. Instantiated service support, httpd@.service , has been introduced.
See the httpd.service man page for more information. A new httpd-init.service replaces the %post script to create a self-signed mod_ssl key pair. Automated TLS certificate provisioning and renewal using the Automatic Certificate Management Environment (ACME) protocol is now supported with the mod_md package (for use with certificate providers such as Let's Encrypt ). The Apache HTTP Server now supports loading TLS certificates and private keys from hardware security tokens directly from PKCS#11 modules. As a result, a mod_ssl configuration can now use PKCS#11 URLs to identify the TLS private key, and, optionally, the TLS certificate in the SSLCertificateKeyFile and SSLCertificateFile directives. A new ListenFree directive in the /etc/httpd/conf/httpd.conf file is now supported. Similarly to the Listen directive, ListenFree provides information about IP addresses, ports, or IP address-and-port combinations that the server listens to. However, with ListenFree , the IP_FREEBIND socket option is enabled by default. Hence, httpd is allowed to bind to a nonlocal IP address or to an IP address that does not exist yet. This allows httpd to listen on a socket without requiring the underlying network interface or the specified dynamic IP address to be up at the time when httpd is trying to bind to it. Note that the ListenFree directive is currently available only in RHEL 8. For more details on ListenFree , see the following table:
Table 1.1. ListenFree directive's syntax, status, and modules
Syntax: ListenFree [IP-address:]portnumber [protocol]
Status: MPM
Modules: event, worker, prefork, mpm_winnt, mpm_netware, mpmt_os2
Other notable changes include: The following modules have been removed: mod_file_cache mod_nss Use mod_ssl as a replacement. For details about migrating from mod_nss , see Section 1.14, "Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration" .
mod_perl The default type of the DBM authentication database used by the Apache HTTP Server in RHEL 8 has been changed from SDBM to db5 . The mod_wsgi module for the Apache HTTP Server has been updated to Python 3. WSGI applications are now supported only with Python 3, and must be migrated from Python 2. The multi-processing module (MPM) configured by default with the Apache HTTP Server has changed from a multi-process, forked model (known as prefork ) to a high-performance multi-threaded model, event . Any third-party modules that are not thread-safe need to be replaced or removed. To change the configured MPM, edit the /etc/httpd/conf.modules.d/00-mpm.conf file. See the httpd.service(8) man page for more information. The minimum UID and GID allowed for users by suEXEC are now 1000 and 500, respectively (previously 100 and 100). The /etc/sysconfig/httpd file is no longer a supported interface for setting environment variables for the httpd service. The httpd.service(8) man page has been added for the systemd service. Stopping the httpd service now uses a "graceful stop" by default. The mod_auth_kerb module has been replaced by the mod_auth_gssapi module. 1.3. Updating the configuration To update the configuration files from the Apache HTTP Server version used in Red Hat Enterprise Linux 7, choose one of the following options: If /etc/sysconfig/httpd is used to set environment variables, create a systemd drop-in file instead. If any third-party modules are used, ensure they are compatible with a threaded MPM. If suexec is used, ensure user and group IDs meet the new minimums. You can check the configuration for possible errors by using the following command: 1.4. The Apache configuration files The httpd , by default, reads the configuration files after start. You can see the list of the locations of configuration files in the table below. Table 1.2. The httpd service configuration files Path Description /etc/httpd/conf/httpd.conf The main configuration file. 
/etc/httpd/conf.d/ An auxiliary directory for configuration files that are included in the main configuration file. /etc/httpd/conf.modules.d/ An auxiliary directory for configuration files which load installed dynamic modules packaged in Red Hat Enterprise Linux. In the default configuration, these configuration files are processed first. Although the default configuration is suitable for most situations, you can also use other configuration options. For any changes to take effect, restart the web server first. To check the configuration for possible errors, type the following at a shell prompt: apachectl configtest To make the recovery from mistakes easier, make a copy of the original file before editing it.
1.5. Managing the httpd service
This section describes how to start, stop, and restart the httpd service. Prerequisites The Apache HTTP Server is installed. Procedure To start the httpd service, enter: systemctl start httpd To stop the httpd service, enter: systemctl stop httpd To restart the httpd service, enter: systemctl restart httpd
1.6. Setting up a single-instance Apache HTTP Server
You can set up a single-instance Apache HTTP Server to serve static HTML content. Follow the procedure if the web server should provide the same content for all domains associated with the server. If you want to provide different content for different domains, set up name-based virtual hosts. For details, see Configuring Apache name-based virtual hosts . Procedure Install the httpd package: yum install httpd If you use firewalld , open the TCP port 80 in the local firewall: firewall-cmd --permanent --add-port=80/tcp followed by firewall-cmd --reload Enable and start the httpd service: systemctl enable --now httpd Optional: Add HTML files to the /var/www/html/ directory. Note When adding content to /var/www/html/ , files and directories must be readable by the user under which httpd runs by default. The content owner can be either the root user and root user group, or another user or group of the administrator's choice. If the content owner is the root user and root user group, the files must be readable by other users.
The SELinux context for all the files and directories must be httpd_sys_content_t , which is applied by default to all content within the /var/www directory. Verification Connect with a web browser to http:// server_IP_or_host_name / . If the /var/www/html/ directory is empty or does not contain an index.html or index.htm file, Apache displays the Red Hat Enterprise Linux Test Page . If /var/www/html/ contains HTML files with a different name, you can load them by entering the URL to that file, such as http:// server_IP_or_host_name / example.html . Additional resources Apache manual: Installing the Apache HTTP server manual . See the httpd.service(8) man page on your system. 1.7. Configuring Apache name-based virtual hosts Name-based virtual hosts enable Apache to serve different content for different domains that resolve to the IP address of the server. You can set up a virtual host for both the example.com and example.net domain with separate document root directories. Both virtual hosts serve static HTML content. Prerequisites Clients and the web server resolve the example.com and example.net domain to the IP address of the web server. Note that you must manually add these entries to your DNS server. Procedure Install the httpd package: Edit the /etc/httpd/conf/httpd.conf file: Append the following virtual host configuration for the example.com domain: These settings configure the following: All settings in the <VirtualHost *:80> directive are specific for this virtual host. DocumentRoot sets the path to the web content of the virtual host. ServerName sets the domains for which this virtual host serves content. To set multiple domains, add the ServerAlias parameter to the configuration and specify the additional domains separated with a space in this parameter. CustomLog sets the path to the access log of the virtual host. ErrorLog sets the path to the error log of the virtual host. 
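Put together, the virtual host settings described above form a block like the following sketch. The document root and log file names are illustrative, since the original example listing is not preserved in this excerpt; a parallel block would be added for the example.net domain:

```apache
# Hypothetical virtual host for example.com; adapt DocumentRoot and the
# log paths to your layout.
<VirtualHost *:80>
    DocumentRoot "/var/www/example.com/"
    ServerName example.com
    ServerAlias www.example.com
    CustomLog /var/log/httpd/example.com_access.log combined
    ErrorLog /var/log/httpd/example.com_error.log
</VirtualHost>
```

Remember that Apache serves requests that match no ServerName or ServerAlias from the first virtual host in the configuration, so order the blocks deliberately.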
Note Apache uses the first virtual host found in the configuration also for requests that do not match any domain set in the ServerName and ServerAlias parameters. This also includes requests sent to the IP address of the server. Append a similar virtual host configuration for the example.net domain: Create the document roots for both virtual hosts: If you set paths in the DocumentRoot parameters that are not within /var/www/ , set the httpd_sys_content_t context on both document roots: These commands set the httpd_sys_content_t context on the /srv/example.com/ and /srv/example.net/ directory. Note that you must install the policycoreutils-python-utils package to run the restorecon command. If you use firewalld , open port 80 in the local firewall: Enable and start the httpd service: Verification Create a different example file in each virtual host's document root: Use a browser and connect to http://example.com . The web server shows the example file from the example.com virtual host. Use a browser and connect to http://example.net . The web server shows the example file from the example.net virtual host. Additional resources Installing the Apache HTTP Server manual - Virtual Hosts 1.8. Configuring Kerberos authentication for the Apache HTTP web server To perform Kerberos authentication in the Apache HTTP web server, RHEL 8 uses the mod_auth_gssapi Apache module. The Generic Security Services API ( GSSAPI ) is an interface for applications that make requests to use security libraries, such as Kerberos. The gssproxy service allows to implement privilege separation for the httpd server, which optimizes this process from the security point of view. Note The mod_auth_gssapi module replaces the removed mod_auth_kerb module. Prerequisites The httpd and gssproxy packages are installed. The Apache web server is set up and the httpd service is running. 1.8.1. 
Setting up GSS-Proxy in an IdM environment This procedure describes how to set up GSS-Proxy to perform Kerberos authentication in the Apache HTTP web server. Procedure Enable access to the keytab file of HTTP/<SERVER_NAME>@realm principal by creating the service principal: Retrieve the keytab for the principal stored in the /etc/gssproxy/http.keytab file: This step sets permissions to 400, thus only the root user has access to the keytab file. The apache user does not. Create the /etc/gssproxy/80-httpd.conf file with the following content: Restart and enable the gssproxy service: Additional resources gssproxy(8) man pages on your system gssproxy-mech(8) man pages on your system gssproxy.conf(5) man pages on your system 1.8.2. Configuring Kerberos authentication for a directory shared by the Apache HTTP web server This procedure describes how to configure Kerberos authentication for the /var/www/html/private/ directory. Prerequisites The gssproxy service is configured and running. Procedure Configure the mod_auth_gssapi module to protect the /var/www/html/private/ directory: Create system unit configuration drop-in file: Add the following parameter to the system drop-in file: Reload the systemd configuration: Restart the httpd service: Verification Obtain a Kerberos ticket: Open the URL to the protected directory in a browser. 1.9. Configuring TLS encryption on an Apache HTTP Server By default, Apache provides content to clients using an unencrypted HTTP connection. This section describes how to enable TLS encryption and configure frequently used encryption-related settings on an Apache HTTP Server. Prerequisites The Apache HTTP Server is installed and running. 1.9.1. Adding TLS encryption to an Apache HTTP Server You can enable TLS encryption on an Apache HTTP Server for the example.com domain. Prerequisites The Apache HTTP Server is installed and running. The private key is stored in the /etc/pki/tls/private/example.com.key file. 
For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA's documentation. Alternatively, if your CA supports the ACME protocol, you can use the mod_md module to automate retrieving and provisioning TLS certificates. The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure. The CA certificate is stored in the /etc/pki/tls/certs/ca.crt file. If you use a different path, adapt the corresponding steps of the procedure. Clients and the web server resolve the host name of the server to the IP address of the web server. Procedure Install the mod_ssl package: Edit the /etc/httpd/conf.d/ssl.conf file and add the following settings to the <VirtualHost _default_:443> directive: Set the server name: Important The server name must match the entry set in the Common Name field of the certificate. Optional: If the certificate contains additional host names in the Subject Alt Names (SAN) field, you can configure mod_ssl to provide TLS encryption also for these host names. To configure this, add the ServerAlias parameter with the corresponding names: Set the paths to the private key, the server certificate, and the CA certificate: For security reasons, configure that only the root user can access the private key file: Warning If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure. If you use firewalld , open port 443 in the local firewall: Restart the httpd service: Note If you protected the private key file with a password, you must enter this password each time when the httpd service starts. Verification Use a browser and connect to https:// example.com . Additional resources SSL/TLS Encryption Security considerations for TLS in RHEL 8 1.9.2.
Setting the supported TLS protocol versions on an Apache HTTP Server By default, the Apache HTTP Server on RHEL uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For example, the DEFAULT policy defines that only the TLSv1.2 and TLSv1.3 protocol versions are enabled in Apache. You can manually configure which TLS protocol versions your Apache HTTP Server supports. Follow the procedure if your environment requires enabling only specific TLS protocol versions, for example: If your environment requires that clients can also use the weak TLS1 (TLSv1.0) or TLS1.1 protocol. If you want to configure Apache to support only the TLSv1.2 or TLSv1.3 protocol. Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the following setting to the <VirtualHost> directive for which you want to set the TLS protocol version. For example, to enable only the TLSv1.3 protocol: Restart the httpd service: Verification Use the following command to verify that the server supports TLSv1.3 : Use the following command to verify that the server does not support TLSv1.2 : If the server does not support the protocol, the command returns an error: Optional: Repeat the command for other TLS protocol versions. Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . For further details about the SSLProtocol parameter, refer to the mod_ssl documentation in the Apache manual: Installing the Apache HTTP server manual . 1.9.3. Setting the supported ciphers on an Apache HTTP Server By default, the Apache HTTP Server uses the system-wide crypto policy that defines safe default values, which are also compatible with recent browsers. For the list of ciphers the system-wide crypto policy allows, see the /etc/crypto-policies/back-ends/openssl.config file. 
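To see exactly which cipher suites an OpenSSL cipher string selects, you can expand it locally with the openssl ciphers command before applying it to the server configuration. This is a local check only, assuming the openssl command-line tool is installed; it does not contact or reconfigure httpd:

```shell
# Expand a cipher string locally to list the cipher suites it selects.
# The string below matches the SSLCipherSuite example used later in this
# section; adjust it to whatever string you plan to configure.
openssl ciphers -v 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256'
```

Each output line shows one selected suite together with its protocol, key exchange, authentication, and encryption parameters, which makes it easy to confirm that unwanted algorithms are excluded before restarting the server.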
You can manually configure which ciphers your Apache HTTP Server supports. Follow the procedure if your environment requires specific ciphers. Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . Procedure Edit the /etc/httpd/conf/httpd.conf file, and add the SSLCipherSuite parameter to the <VirtualHost> directive for which you want to set the TLS ciphers: This example enables only the EECDH+AESGCM , EDH+AESGCM , AES256+EECDH , and AES256+EDH ciphers and disables all ciphers which use the SHA1 and SHA256 message authentication code (MAC). Restart the httpd service: Verification To display the list of ciphers the Apache HTTP Server supports: Install the nmap package: Use the nmap utility to display the supported ciphers: Additional resources update-crypto-policies(8) man page on your system Using system-wide cryptographic policies . SSLCipherSuite 1.10. Configuring TLS client certificate authentication Client certificate authentication enables administrators to allow only users who authenticate using a certificate to access resources on the web server. You can configure client certificate authentication for the /var/www/html/Example/ directory. If the Apache HTTP Server uses the TLS 1.3 protocol, certain clients require additional configuration. For example, in Firefox, set the security.tls.enable_post_handshake_auth parameter in the about:config menu to true . For further details, see Transport Layer Security version 1.3 in Red Hat Enterprise Linux 8 . Prerequisites TLS encryption is enabled on the server as described in Adding TLS encryption to an Apache HTTP server . 
Procedure Edit the /etc/httpd/conf/httpd.conf file and add the following settings to the <VirtualHost> directive for which you want to configure client authentication: The SSLVerifyClient require setting defines that the server must successfully validate the client certificate before the client can access the content in the /var/www/html/Example/ directory. Restart the httpd service: Verification Use the curl utility to access the https://example.com/Example/ URL without client authentication: The error indicates that the web server requires client certificate authentication. Pass the client private key and certificate, as well as the CA certificate, to curl to access the same URL with client authentication: If the request succeeds, curl displays the index.html file stored in the /var/www/html/Example/ directory. Additional resources mod_ssl configuration 1.11. Securing web applications on a web server using ModSecurity ModSecurity is an open source web application firewall (WAF), supported by various web servers such as Apache, Nginx, and IIS, that reduces security risks in web applications. ModSecurity provides customizable rule sets for configuring your server. The mod_security-crs package contains the core rule set (CRS) with rules against cross-site scripting, bad user agents, SQL injection, Trojans, session hijacking, and other exploits. 1.11.1. Deploying the ModSecurity web-based application firewall for Apache To reduce risks related to running web-based applications on your web server, deploy ModSecurity by installing the mod_security and mod_security_crs packages for the Apache HTTP server. The mod_security_crs package provides the core rule set (CRS) for the ModSecurity web-based application firewall (WAF) module. 
Procedure Install the mod_security , mod_security_crs , and httpd packages: Start the httpd server: Verification Verify that the ModSecurity web-based application firewall is enabled on your Apache HTTP server: Check that the /etc/httpd/modsecurity.d/activated_rules/ directory contains rules provided by mod_security_crs : Additional resources Red Hat JBoss Core Services ModSecurity Guide An introduction to web application firewalls for Linux sysadmins 1.11.2. Adding a custom rule to ModSecurity If the rules contained in the ModSecurity core rule set (CRS) do not fit your scenario and if you want to prevent additional possible attacks, you can add your custom rules to the rule set used by the ModSecurity web-based application firewall. The following example demonstrates the addition of a simple rule. For creating more complex rules, see the reference manual on the ModSecurity Wiki website. Prerequisites ModSecurity for Apache is installed and enabled. Procedure Open the /etc/httpd/conf.d/mod_security.conf file in a text editor of your choice, for example: Add the following example rule after the line starting with SecRuleEngine On : The rule denies the user access to the requested resource if the data parameter contains the string evil . Save the changes, and quit the editor. Restart the httpd server: Verification Create a test.html page: Restart the httpd server: Request test.html without malicious data in the GET variable of the HTTP request: Request test.html with malicious data in the GET variable of the HTTP request: Check the /var/log/httpd/error_log file, and locate the log entry about denying access with the param data containing an evil data message: Additional resources ModSecurity Wiki 1.12. Installing the Apache HTTP Server manual You can install the Apache HTTP Server manual. 
This manual provides a detailed documentation of, for example: Configuration parameters and directives Performance tuning Authentication settings Modules Content caching Security tips Configuring TLS encryption After installing the manual, you can display it using a web browser. Prerequisites The Apache HTTP Server is installed and running. Procedure Install the httpd-manual package: Optional: By default, all clients connecting to the Apache HTTP Server can display the manual. To restrict access to a specific IP range, such as the 192.0.2.0/24 subnet, edit the /etc/httpd/conf.d/manual.conf file and add the Require ip 192.0.2.0/24 setting to the <Directory "/usr/share/httpd/manual"> directive: Restart the httpd service: Verification To display the Apache HTTP Server manual, connect with a web browser to http:// host_name_or_IP_address /manual/ 1.13. Working with Apache modules The httpd service is a modular application, and you can extend it with a number of Dynamic Shared Objects ( DSO s). Dynamic Shared Objects are modules that you can dynamically load or unload at runtime as necessary. You can find these modules in the /usr/lib64/httpd/modules/ directory. 1.13.1. Loading a DSO module As an administrator, you can choose the functionality to include in the server by configuring which modules the server should load. To load a particular DSO module, use the LoadModule directive. Note that modules provided by a separate package often have their own configuration file in the /etc/httpd/conf.modules.d/ directory. Prerequisites You have installed the httpd package. 
Procedure Search for the module name in the configuration files in the /etc/httpd/conf.modules.d/ directory: Edit the configuration file in which the module name was found, and uncomment the LoadModule directive of the module: If the module was not found, for example, because a RHEL package does not provide the module, create a configuration file, such as /etc/httpd/conf.modules.d/30-example.conf , with the following directive: Restart the httpd service: 1.13.2. Compiling a custom Apache module You can create your own module and build it with the help of the httpd-devel package, which contains the include files, the header files, and the APache eXtenSion ( apxs ) utility required to compile a module. Prerequisites You have the httpd-devel package installed. Procedure Build a custom module with the following command: Verification Load the module the same way as described in Loading a DSO module . 1.14. Exporting a private key and certificates from an NSS database to use them in an Apache web server configuration RHEL 8 no longer provides the mod_nss module for the Apache web server, and Red Hat recommends using the mod_ssl module. If you store your private key and certificates in a Network Security Services (NSS) database, for example, because you migrated the web server from RHEL 7 to RHEL 8, follow this procedure to extract the key and certificates in Privacy Enhanced Mail (PEM) format. You can then use the files in the mod_ssl configuration as described in Configuring TLS encryption on an Apache HTTP server . This procedure assumes that the NSS database is stored in /etc/httpd/alias/ and that you store the exported private key and certificates in the /etc/pki/tls/ directory. Prerequisites The private key, the certificate, and the certificate authority (CA) certificate are stored in an NSS database. Procedure List the certificates in the NSS database: You need the nicknames of the certificates in the next steps. 
To extract the private key, you must temporarily export the key to a PKCS #12 file: Use the nickname of the certificate associated with the private key to export the key to a PKCS #12 file: Note that you must set a password on the PKCS #12 file. You need this password in the next step. Export the private key from the PKCS #12 file: Delete the temporary PKCS #12 file: Set the permissions on /etc/pki/tls/private/server.key to ensure that only the root user can access this file: Use the nickname of the server certificate in the NSS database to export the server certificate: Set the permissions on /etc/pki/tls/certs/server.crt to ensure that only the root user can access this file: Use the nickname of the CA certificate in the NSS database to export the CA certificate: Follow Configuring TLS encryption on an Apache HTTP server to configure the Apache web server, and: Set the SSLCertificateKeyFile parameter to /etc/pki/tls/private/server.key . Set the SSLCertificateFile parameter to /etc/pki/tls/certs/server.crt . Set the SSLCACertificateFile parameter to /etc/pki/tls/certs/ca.crt . Additional resources certutil(1) , pk12util(1) , and pkcs12(1ssl) man pages on your system 1.15. Additional resources httpd(8) httpd.service(8) httpd.conf(5) apachectl(8) Using GSS-Proxy for Apache httpd operation . Configuring applications to use cryptographic hardware through PKCS #11 .
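The permission-tightening steps above can be checked mechanically. The following is a small sketch using a placeholder file in the current directory to stand in for /etc/pki/tls/private/server.key, assuming GNU coreutils (chmod, stat) are available:

```shell
# Create a placeholder file standing in for the exported private key.
touch server.key

# Restrict access so that only the file owner can read or write it,
# as recommended for private keys.
chmod 600 server.key

# Verify the resulting permission bits; prints "600" on success.
stat -c '%a' server.key
```

On a real system you would additionally run chown root:root on the key file, as shown in the procedure above, so that the owner whose access remains is root.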
[ "apachectl configtest Syntax OK", "apachectl configtest Syntax OK", "systemctl start httpd", "systemctl stop httpd", "systemctl restart httpd", "yum install httpd", "firewall-cmd --permanent --add-port=80/tcp firewall-cmd --reload", "systemctl enable --now httpd", "yum install httpd", "<VirtualHost *:80> DocumentRoot \"/var/www/example.com/\" ServerName example.com CustomLog /var/log/httpd/example.com_access.log combined ErrorLog /var/log/httpd/example.com_error.log </VirtualHost>", "<VirtualHost *:80> DocumentRoot \"/var/www/example.net/\" ServerName example.net CustomLog /var/log/httpd/example.net_access.log combined ErrorLog /var/log/httpd/example.net_error.log </VirtualHost>", "mkdir /var/www/example.com/ mkdir /var/www/example.net/", "semanage fcontext -a -t httpd_sys_content_t \"/srv/example.com(/.*)?\" restorecon -Rv /srv/example.com/ semanage fcontext -a -t httpd_sys_content_t \"/srv/example.net(/.\\*)?\" restorecon -Rv /srv/example.net/", "firewall-cmd --permanent --add-port=80/tcp firewall-cmd --reload", "systemctl enable --now httpd", "echo \"vHost example.com\" > /var/www/example.com/index.html echo \"vHost example.net\" > /var/www/example.net/index.html", "ipa service-add HTTP/<SERVER_NAME>", "ipa-getkeytab -s USD(awk '/^server =/ {print USD3}' /etc/ipa/default.conf) -k /etc/gssproxy/http.keytab -p HTTP/USD(hostname -f)", "[service/HTTP] mechs = krb5 cred_store = keytab:/etc/gssproxy/http.keytab cred_store = ccache:/var/lib/gssproxy/clients/krb5cc_%U euid = apache", "systemctl restart gssproxy.service systemctl enable gssproxy.service", "<Location /var/www/html/private> AuthType GSSAPI AuthName \"GSSAPI Login\" Require valid-user </Location>", "systemctl edit httpd.service", "[Service] Environment=GSS_USE_PROXY=1", "systemctl daemon-reload", "systemctl restart httpd.service", "kinit", "yum install mod_ssl", "ServerName example.com", "ServerAlias www.example.com server.example.com", "SSLCertificateKeyFile \"/etc/pki/tls/private/example.com.key\" 
SSLCertificateFile \"/etc/pki/tls/certs/example.com.crt\" SSLCACertificateFile \"/etc/pki/tls/certs/ca.crt\"", "chown root:root /etc/pki/tls/private/example.com.key chmod 600 /etc/pki/tls/private/example.com.key", "firewall-cmd --permanent --add-port=443/tcp firewall-cmd --reload", "systemctl restart httpd", "SSLProtocol -All TLSv1.3", "systemctl restart httpd", "openssl s_client -connect example.com :443 -tls1_3", "openssl s_client -connect example.com :443 -tls1_2", "140111600609088:error:1409442E:SSL routines:ssl3_read_bytes:tlsv1 alert protocol version:ssl/record/rec_layer_s3.c:1543:SSL alert number 70", "SSLCipherSuite \"EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!SHA1:!SHA256\"", "systemctl restart httpd", "yum install nmap", "nmap --script ssl-enum-ciphers -p 443 example.com PORT STATE SERVICE 443/tcp open https | ssl-enum-ciphers: | TLSv1.2: | ciphers: | TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (ecdh_x25519) - A | TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 2048) - A | TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (ecdh_x25519) - A", "<Directory \"/var/www/html/Example/\"> SSLVerifyClient require </Directory>", "systemctl restart httpd", "curl https://example.com/Example/ curl: (56) OpenSSL SSL_read: error:1409445C:SSL routines:ssl3_read_bytes:tlsv13 alert certificate required, errno 0", "curl --cacert ca.crt --key client.key --cert client.crt https://example.com/Example/", "yum install -y mod_security mod_security_crs httpd", "systemctl restart httpd", "httpd -M | grep security security2_module (shared)", "ls /etc/httpd/modsecurity.d/activated_rules/ REQUEST-921-PROTOCOL-ATTACK.conf REQUEST-930-APPLICATION-ATTACK-LFI.conf", "vi /etc/httpd/conf.d/mod_security.conf", "SecRule ARGS:data \"@contains evil\" \"deny,status:403,msg:'param data contains evil data',id:1\"", "systemctl restart httpd", "echo \"mod_security test\" > /var/www/html/ test .html", "systemctl restart httpd", "curl http://localhost/test.html?data=good mod_security test", "curl 
localhost/test.html?data=xxxevilxxx <!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\"> <html><head> <title>403 Forbidden</title> </head><body> <h1>Forbidden</h1> <p>You do not have permission to access this resource.</p> </body></html>", "[Wed May 25 08:01:31.036297 2022] [:error] [pid 5839:tid 139874434791168] [client ::1:45658] [client ::1] ModSecurity: Access denied with code 403 (phase 2). String match \"evil\" at ARGS:data. [file \"/etc/httpd/conf.d/mod_security.conf\"] [line \"4\"] [id \"1\"] [msg \"param data contains evil data\"] [hostname \"localhost\"] [uri \"/test.html\"] [unique_id \"Yo4amwIdsBG3yZqSzh2GuwAAAIY\"]", "yum install httpd-manual", "<Directory \"/usr/share/httpd/manual\"> Require ip 192.0.2.0/24 </Directory>", "systemctl restart httpd", "grep mod_ssl.so /etc/httpd/conf.modules.d/ *", "LoadModule ssl_module modules/mod_ssl.so", "LoadModule ssl_module modules/<custom_module>.so", "systemctl restart httpd", "apxs -i -a -c module_name.c", "certutil -d /etc/httpd/alias/ -L Certificate Nickname Trust Attributes SSL,S/MIME,JAR/XPI Example CA C,, Example Server Certificate u,u,u", "pk12util -o /etc/pki/tls/private/export.p12 -d /etc/httpd/alias/ -n \"Example Server Certificate\" Enter password for PKCS12 file: password Re-enter password: password pk12util: PKCS12 EXPORT SUCCESSFUL", "openssl pkcs12 -in /etc/pki/tls/private/export.p12 -out /etc/pki/tls/private/server.key -nocerts -nodes Enter Import Password: password MAC verified OK", "rm /etc/pki/tls/private/export.p12", "chown root:root /etc/pki/tls/private/server.key chmod 0600 /etc/pki/tls/private/server.key", "certutil -d /etc/httpd/alias/ -L -n \"Example Server Certificate\" -a -o /etc/pki/tls/certs/server.crt", "chown root:root /etc/pki/tls/certs/server.crt chmod 0600 /etc/pki/tls/certs/server.crt", "certutil -d /etc/httpd/alias/ -L -n \" Example CA \" -a -o /etc/pki/tls/certs/ca.crt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/setting-apache-http-server_Deploying-different-types-of-servers
Chapter 9. Managing metrics targets
Chapter 9. Managing metrics targets OpenShift Container Platform Monitoring collects metrics from targeted cluster components by scraping data from exposed service endpoints. In the Administrator perspective in the OpenShift Container Platform web console, you can use the Metrics Targets page to view, search, and filter the endpoints that are currently targeted for scraping, which helps you to identify and troubleshoot problems. For example, you can view the current status of targeted endpoints to see when OpenShift Container Platform Monitoring is not able to scrape metrics from a targeted component. The Metrics Targets page shows targets for default OpenShift Container Platform projects and for user-defined projects. 9.1. Accessing the Metrics Targets page in the Administrator perspective You can view the Metrics Targets page in the Administrator perspective in the OpenShift Container Platform web console. Prerequisites You have access to the cluster as an administrator for the project for which you want to view metrics targets. Procedure In the Administrator perspective, select Observe Targets . The Metrics Targets page opens with a list of all service endpoint targets that are being scraped for metrics. 9.2. Searching and filtering metrics targets The list of metrics targets can be long. You can filter and search these targets based on various criteria. In the Administrator perspective, the Metrics Targets page provides details about targets for default OpenShift Container Platform and user-defined projects. This page lists the following information for each target: the service endpoint URL being scraped the ServiceMonitor component being monitored the up or down status of the target the namespace the last scrape time the duration of the last scrape You can filter the list of targets by status and source. The following filtering options are available: Status filters: Up . The target is currently up and being actively scraped for metrics. Down . 
The target is currently down and not being scraped for metrics. Source filters: Platform . Platform-level targets relate only to default OpenShift Container Platform projects. These projects provide core OpenShift Container Platform functionality. User . User targets relate to user-defined projects. These projects are user-created and can be customized. You can also use the search box to find a target by target name or label. Select Text or Label from the search box menu to limit your search. 9.3. Getting detailed information about a target On the Target details page, you can view detailed information about a metric target. Prerequisites You have access to the cluster as an administrator for the project for which you want to view metrics targets. Procedure To view detailed information about a target in the Administrator perspective : Open the OpenShift Container Platform web console and navigate to Observe Targets . Optional: Filter the targets by status and source by selecting filters in the Filter list. Optional: Search for a target by name or label by using the Text or Label option of the search box. Optional: Sort the targets by clicking one or more of the Endpoint , Status , Namespace , Last Scrape , and Scrape Duration column headers. Click the URL in the Endpoint column for a target to navigate to its Target details page. This page provides information about the target, including: The endpoint URL being scraped for metrics The current Up or Down status of the target A link to the namespace A link to the ServiceMonitor details Labels attached to the target The most recent time that the target was scraped for metrics
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/managing-metrics-targets
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation
Chapter 9. How to use dedicated worker nodes for Red Hat OpenShift Data Foundation Any Red Hat OpenShift Container Platform subscription requires an OpenShift Data Foundation subscription. However, you can save on the OpenShift Container Platform subscription costs if you are using infrastructure nodes to schedule OpenShift Data Foundation resources. It is important to maintain consistency across environments with or without Machine API support. Because of this, it is highly recommended in all cases to have a special category of nodes labeled as either worker or infra, or to have both roles. See the Section 9.3, "Manual creation of infrastructure nodes" section for more information. 9.1. Anatomy of an Infrastructure node Infrastructure nodes for use with OpenShift Data Foundation have a few attributes. The infra node-role label is required to ensure the node does not consume RHOCP entitlements. The infra node-role label is responsible for ensuring only OpenShift Data Foundation entitlements are necessary for the nodes running OpenShift Data Foundation. Labeled with node-role.kubernetes.io/infra Adding an OpenShift Data Foundation taint with a NoSchedule effect is also required so that the infra node will only schedule OpenShift Data Foundation resources. Tainted with node.ocs.openshift.io/storage="true" The label identifies the RHOCP node as an infra node so that RHOCP subscription cost is not applied. The taint prevents non-OpenShift Data Foundation resources from being scheduled on the tainted nodes. Note Adding the storage taint on nodes might require toleration handling for the other daemonset pods such as the openshift-dns daemonset . For information about how to manage the tolerations, see Knowledgebase article: Openshift-dns daemonsets doesn't include toleration to run on nodes with taints . Example of the taint and labels required on an infrastructure node that will be used to run OpenShift Data Foundation services: 9.2. 
Machine sets for creating Infrastructure nodes If the Machine API is supported in the environment, then labels should be added to the templates for the Machine Sets that will be provisioning the infrastructure nodes. Avoid the anti-pattern of adding labels manually to nodes created by the machine API. Doing so is analogous to adding labels to pods created by a deployment. In both cases, when the pod/node fails, the replacement pod/node will not have the appropriate labels. Note In EC2 environments, you will need three machine sets, each configured to provision infrastructure nodes in a distinct availability zone (such as us-east-2a, us-east-2b, us-east-2c). Currently, OpenShift Data Foundation does not support deploying in more than three availability zones. The following Machine Set template example creates nodes with the appropriate taint and labels required for infrastructure nodes. This will be used to run OpenShift Data Foundation services. Important If you add a taint to the infrastructure nodes, you also need to add tolerations to the taint for other workloads, for example, the fluentd pods. For more information, see the Red Hat Knowledgebase solution Infrastructure Nodes in OpenShift 4 . 9.3. Manual creation of infrastructure nodes Only when the Machine API is not supported in the environment should labels be directly applied to nodes. Manual creation requires that at least 3 RHOCP worker nodes are available to schedule OpenShift Data Foundation services, and that these nodes have sufficient CPU and memory resources. To avoid the RHOCP subscription cost, the following is required: Adding a NoSchedule OpenShift Data Foundation taint is also required so that the infra node will only schedule OpenShift Data Foundation resources and repel any other non-OpenShift Data Foundation workloads. 
Warning Do not remove the node-role.kubernetes.io/worker="" node role. The removal of the node-role.kubernetes.io/worker="" can cause issues unless changes are made both to the OpenShift scheduler and to MachineConfig resources. If already removed, it should be added again to each infra node. Adding the node-role.kubernetes.io/infra="" node role and the OpenShift Data Foundation taint is sufficient to conform to entitlement exemption requirements. 9.4. Taint a node from the user interface This section explains the procedure to taint nodes after the OpenShift Data Foundation deployment. Procedure In the OpenShift Web Console, click Compute Nodes , and then select the node that has to be tainted. In the Details page, click Edit taints . Enter the values in the Key <nodes.openshift.ocs.io/storage>, Value <true>, and Effect <NoSchedule> fields. Click Save. Verification steps Follow the steps to verify that the node has been tainted successfully: Navigate to Compute Nodes . Select the node to verify its status, and then click on the YAML tab. In the spec section, check the values of the following parameters: Additional resources For more information, refer to Creating the OpenShift Data Foundation cluster on VMware vSphere .
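As the note about daemonset pods earlier in this chapter indicates, any workload that must still run on the tainted nodes needs a toleration matching the taint. The following is a minimal sketch of such a toleration in a pod or daemonset spec; the key, value, and effect are assumed to match the OpenShift Data Foundation taint used throughout this chapter:

```yaml
# Toleration allowing a pod to be scheduled on nodes carrying the
# node.ocs.openshift.io/storage="true":NoSchedule taint.
tolerations:
- key: node.ocs.openshift.io/storage
  operator: Equal
  value: "true"
  effect: NoSchedule
```

Without such a toleration, pods from other daemonsets are repelled from the tainted infra nodes, which is the intended behavior for everything except cluster-critical services.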
[ "spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/worker: \"\" node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "template: metadata: creationTimestamp: null labels: machine.openshift.io/cluster-api-cluster: kb-s25vf machine.openshift.io/cluster-api-machine-role: worker machine.openshift.io/cluster-api-machine-type: worker machine.openshift.io/cluster-api-machineset: kb-s25vf-infra-us-west-2a spec: taints: - effect: NoSchedule key: node.ocs.openshift.io/storage value: \"true\" metadata: creationTimestamp: null labels: node-role.kubernetes.io/infra: \"\" cluster.ocs.openshift.io/openshift-storage: \"\"", "label node <node> node-role.kubernetes.io/infra=\"\" label node <node> cluster.ocs.openshift.io/openshift-storage=\"\"", "adm taint node <node> node.ocs.openshift.io/storage=\"true\":NoSchedule", "Taints: Key: node.openshift.ocs.io/storage Value: true Effect: Noschedule" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/how-to-use-dedicated-worker-nodes-for-openshift-data-foundation_osp
Appendix E. Revision History
Appendix E. Revision History Revision History Revision 1.0-10.400 2013-10-31 Rudiger Landmann Rebuild with publican 4.0.0 Revision 1.0-10 2012-07-18 Anthony Towns Rebuild for Publican 3.0 Revision 1.0-0 Wed Apr 01 2009
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/appe-publican-revision_history
7.202. ql2400-firmware
7.202. ql2400-firmware 7.202.1. RHBA-2013:0402 - ql2400-firmware bug fix and enhancement update An updated ql2400-firmware package that fixes multiple bugs and adds various enhancements is now available for Red Hat Enterprise Linux 6. The ql2400-firmware package provides the firmware required to run the QLogic 2400 Series of mass storage adapters. Note This update upgrades the ql2400 firmware to upstream version 5.08.00, which provides a number of bug fixes and enhancements over the previous version. (BZ#826665) All users of QLogic 2400 Series Fibre Channel adapters are advised to upgrade to this updated package, which fixes these bugs and adds these enhancements.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ql2400-firmware
Chapter 3. Making Media
Chapter 3. Making Media This chapter describes how to use ISO image files obtained by following the steps in Chapter 2, Downloading Red Hat Enterprise Linux to create bootable physical media, such as a DVD or a USB flash drive. You can then use these media to boot the installation program and start the installation. These steps only apply if you plan to install Red Hat Enterprise Linux on a 64-bit AMD, Intel, or ARM system, or an IBM Power Systems server using physical boot media. For information about installing Red Hat Enterprise Linux on an IBM Z server, see Chapter 16, Booting the Installation on IBM Z . For instructions on how to set up a Preboot Execution Environment (PXE) server to perform a PXE-based installation over a network, see Chapter 24, Preparing for a Network Installation . Note By default, the inst.stage2= boot option is used on the installation media and set to a specific label (for example, inst.stage2=hd:LABEL=RHEL7\x20Server.x86_64 ). If you modify the default label of the file system containing the runtime image, or if using a customized procedure to boot the installation system, you must ensure this option is set to the correct value. See Specifying the Installation Source for details. 3.1. Making an Installation CD or DVD You can make an installation CD or DVD using burning software on your computer and a CD/DVD burner. The exact series of steps that produces an optical disc from an ISO image file varies greatly from computer to computer, depending on the operating system and disc burning software installed. Consult your burning software's documentation for the exact steps needed to burn a CD or DVD from an ISO image file. Note It is possible to use optical discs (CDs and DVDs) to create both minimal boot media and full installation media. However, it is important to note that due to the large size of the full installation ISO image (between 4 and 4.5 GB), only a DVD can be used to create a full installation disc. 
The minimal boot ISO is roughly 300 MB, allowing it to be burned to either a CD or a DVD. Make sure that your disc burning software is capable of burning discs from image files. Although this is true of most disc burning software, exceptions exist. In particular, note that the disc burning feature built into Windows XP and Windows Vista cannot burn DVDs; and that earlier Windows operating systems did not have any disc burning capability installed by default at all. Therefore, if your computer has a Windows operating system prior to Windows 7 installed on it, you need a separate piece of software for this task. Examples of popular disc burning software for Windows that you might already have on your computer include Nero Burning ROM and Roxio Creator . The most widely used disc burning software for Linux, such as Brasero and K3b , also has the built-in ability to burn discs from ISO image files. On some computers, the option to burn a disc from an ISO file is integrated into a context menu in the file browser. For example, when you right-click an ISO file on a computer with a Linux or UNIX operating system that runs the GNOME desktop, the Nautilus file browser presents you with the option to Write to disk .
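Before burning any image, it is good practice to verify that the download is intact by checking it against its published checksum. The following is a sketch of that workflow using sha256sum; the file name and payload here are placeholders so the commands can be demonstrated without a real ISO. For an actual installation, compare against the checksum value published alongside the download:

```shell
# Stand-in for a downloaded image; with a real ISO, skip this step
# and use the downloaded file instead.
printf 'example image payload' > boot.iso

# Record the SHA-256 checksum, as a download page would publish it.
sha256sum boot.iso > boot.iso.sha256

# Verify the image against the recorded checksum; prints "boot.iso: OK".
sha256sum -c boot.iso.sha256
```

If the verification fails, the image is corrupted or incomplete, and burning it would produce unusable media; download the image again before proceeding.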
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/installation_guide/chap-making-media
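The USB-media guidance above can be sketched as a short script. This is a minimal sketch, not part of the guide: the `write_iso` helper name and the example paths are assumptions, and on real hardware the target must be the whole USB device (for example `/dev/sdb`), which `dd` will overwrite.

```shell
#!/bin/sh
# Sketch: copy an ISO image onto a target block device, then verify
# the copy by comparing checksums. The helper name and arguments are
# illustrative; double-check the device path before running, because
# dd destroys whatever is currently on the device.
write_iso() {
    iso=$1
    dev=$2
    dd if="$iso" of="$dev" bs=4M conv=fsync 2>/dev/null || return 1
    # A USB device is usually larger than the image, so compare the
    # image against only the image-sized prefix of the device.
    size=$(wc -c < "$iso")
    [ "$(sha256sum < "$iso")" = "$(head -c "$size" "$dev" | sha256sum)" ]
}
```

Typical use would be `write_iso rhel-server.iso /dev/sdb` as root. This complements, rather than replaces, verifying the downloaded ISO against the checksum published on the download page.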
1.8. Linux Virtual Server
1.8. Linux Virtual Server Linux Virtual Server (LVS) is a set of integrated software components for balancing the IP load across a set of real servers. LVS runs on a pair of equally configured computers: one that is an active LVS router and one that is a backup LVS router. The active LVS router serves two roles: To balance the load across the real servers. To check the integrity of the services on each real server. The backup LVS router monitors the active LVS router and takes over from it in case the active LVS router fails. Figure 1.20, "Components of a Running LVS Cluster" provides an overview of the LVS components and their interrelationship. Figure 1.20. Components of a Running LVS Cluster The pulse daemon runs on both the active and passive LVS routers. On the backup LVS router, pulse sends a heartbeat to the public interface of the active router to make sure the active LVS router is properly functioning. On the active LVS router, pulse starts the lvs daemon and responds to heartbeat queries from the backup LVS router. Once started, the lvs daemon calls the ipvsadm utility to configure and maintain the IPVS (IP Virtual Server) routing table in the kernel and starts a nanny process for each configured virtual server on each real server. Each nanny process checks the state of one configured service on one real server, and tells the lvs daemon if the service on that real server is malfunctioning. If a malfunction is detected, the lvs daemon instructs ipvsadm to remove that real server from the IPVS routing table. 
If the backup LVS router does not receive a response from the active LVS router, it initiates failover by calling send_arp to reassign all virtual IP addresses to the NIC hardware addresses ( MAC addresses) of the backup LVS router, sends a command to the active LVS router via both the public and private network interfaces to shut down the lvs daemon on the active LVS router, and starts the lvs daemon on the backup LVS router to accept requests for the configured virtual servers. To an outside user accessing a hosted service (such as a website or database application), LVS appears as one server. However, the user is actually accessing real servers behind the LVS routers. Because there is no built-in component in LVS to share the data among real servers, you have two basic options: Synchronize the data across the real servers. Add a third layer to the topology for shared data access. The first option is preferred for servers that do not allow large numbers of users to upload or change data on the real servers. If the real servers allow large numbers of users to modify data, such as an e-commerce website, adding a third layer is preferable. There are many ways to synchronize data among real servers. For example, you can use shell scripts to post updated web pages to the real servers simultaneously. Also, you can use programs such as rsync to replicate changed data across all nodes at a set interval. However, in environments where users frequently upload files or issue database transactions, using scripts or the rsync command for data synchronization does not function optimally. Therefore, for real servers with a high amount of uploads, database transactions, or similar traffic, a three-tiered topology is more appropriate for data synchronization. 1.8.1. Two-Tier LVS Topology Figure 1.21, "Two-Tier LVS Topology" shows a simple LVS configuration consisting of two tiers: LVS routers and real servers. 
The LVS-router tier consists of one active LVS router and one backup LVS router. The real-server tier consists of real servers connected to the private network. Each LVS router has two network interfaces: one connected to a public network (Internet) and one connected to a private network. A network interface connected to each network allows the LVS routers to regulate traffic between clients on the public network and the real servers on the private network. In Figure 1.21, "Two-Tier LVS Topology" , the active LVS router uses Network Address Translation ( NAT ) to direct traffic from the public network to real servers on the private network, which in turn provide services as requested. The real servers pass all public traffic through the active LVS router. From the perspective of clients on the public network, the LVS router appears as one entity. Figure 1.21. Two-Tier LVS Topology Service requests arriving at an LVS router are addressed to a virtual IP address or VIP. This is a publicly-routable address that the administrator of the site associates with a fully-qualified domain name, such as www.example.com, and which is assigned to one or more virtual servers [1] . Note that a VIP address migrates from one LVS router to the other during a failover, thus maintaining a presence at that IP address, also known as floating IP addresses . VIP addresses may be aliased to the same device that connects the LVS router to the public network. For instance, if eth0 is connected to the Internet, then multiple virtual servers can be aliased to eth0:1 . Alternatively, each virtual server can be associated with a separate device per service. For example, HTTP traffic can be handled on eth0:1 , and FTP traffic can be handled on eth0:2 . Only one LVS router is active at a time. The role of the active LVS router is to redirect service requests from virtual IP addresses to the real servers. 
The redirection is based on one of eight load-balancing algorithms: Round-Robin Scheduling - Distributes each request sequentially around a pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. Weighted Round-Robin Scheduling - Distributes each request sequentially around a pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted up or down by dynamic load information. This is a preferred choice if there are significant differences in the capacity of real servers in a server pool. However, if the request load varies dramatically, a more heavily weighted server may answer more than its share of requests. Least-Connection - Distributes more requests to real servers with fewer active connections. This is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each server node has roughly the same capacity. If the real servers have varying capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections (default) - Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted up or down by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Locality-Based Least-Connection Scheduling - Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server. 
Locality-Based Least-Connection Scheduling with Replication Scheduling - Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most-loaded node is then dropped from the real server subset to prevent over-replication. Source Hash Scheduling - Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is for LVS routers with multiple firewalls. Also, the active LVS router dynamically monitors the overall health of the specific services on the real servers through simple send/expect scripts . To aid in detecting the health of services that require dynamic data, such as HTTPS or SSL, you can also call external executables. If a service on a real server malfunctions, the active LVS router stops sending jobs to that server until it returns to normal operation. The backup LVS router performs the role of a standby system. Periodically, the LVS routers exchange heartbeat messages through the primary external public interface and, in a failover situation, the private interface. Should the backup LVS router fail to receive a heartbeat message within an expected interval, it initiates a failover and assumes the role of the active LVS router. 
During failover, the backup LVS router takes over the VIP addresses serviced by the failed router using a technique known as ARP spoofing - where the backup LVS router announces itself as the destination for IP packets addressed to the failed node. When the failed node returns to active service, the backup LVS router assumes its backup role again. The simple, two-tier configuration in Figure 1.21, "Two-Tier LVS Topology" is suited best for clusters serving data that does not change very frequently - such as static web pages - because the individual real servers do not automatically synchronize data among themselves. [1] A virtual server is a service configured to listen on a specific virtual IP.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_suite_overview/s1-lvs-overview-CSO
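To make the scheduling and NAT discussion concrete, the following is a hypothetical ipvsadm sketch (the VIP 192.168.1.100 and the real-server addresses are invented for illustration, and the commands require root on a kernel with IPVS support). In a running cluster the lvs daemon maintains these entries for you, as described above; issuing them by hand is mainly useful for experimentation.

```shell
# Add a virtual HTTP service on the VIP, using the default weighted
# least-connection scheduler (-s wlc).
ipvsadm -A -t 192.168.1.100:80 -s wlc

# Add two real servers reached via NAT (-m = masquerading), giving
# the more capable machine twice the weight.
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.2:80 -m -w 2
ipvsadm -a -t 192.168.1.100:80 -r 10.0.0.3:80 -m -w 1

# Inspect the resulting IPVS routing table.
ipvsadm -L -n
```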
Chapter 35. Red Hat Enterprise Linux Atomic Host 7.3.1
Chapter 35. Red Hat Enterprise Linux Atomic Host 7.3.1 35.1. Atomic Host OStree update : New Tree Version: 7.3.1 (hash: 42cfe1ca3305defb16dfd59cd0be5c539f19ea720dba861ed11e13941423ae86) Changes since Tree Version 7.3 (hash: 90c9735becfff1c55c8586ae0f2c904bc0928f042cd4d016e9e0e2edd16e5e97) Updated packages : cockpit-ostree-122-1.el7 ostree-2016.11-1.atomic.el7 rpm-ostree-2016.11-2.atomic.el7 rpm-ostree-client-2016.11-2.atomic.el7 35.2. Extras Updated packages : atomic-1.13.8-1.el7 cockpit-122-3.el7 docker-1.10.3-59.el7 docker-distribution-2.5.1-1.el7 docker-latest-1.12.3-2.el7 etcd3-3.0.14-2.el7 kubernetes-1.3.0-0.3.git86dc49a.el7 oci-register-machine-0-1.10.gitfcdbff0.el7 oci-systemd-hook-0.1.4-7.gita9c551a.el7 skopeo-0.1.17-0.7.git1f655f3.el7 New packages : gomtree-0-0.3.git8c6b32c.el7 The asterisk (*) marks packages which are available for Red Hat Enterprise Linux only. 35.2.1. Container Images Updated : Red Hat Enterprise Linux Container Image (rhel7/rhel) Red Hat Enterprise Linux Atomic Tools Container Image (rhel7/rhel-tools) Red Hat Enterprise Linux Atomic rsyslog Container Image (rhel7/rsyslog) Red Hat Enterprise Linux Atomic sadc Container Image (rhel7/sadc) Red Hat Enterprise Linux Atomic cockpit-ws Container Image (rhel7/cockpit-ws) Red Hat Enterprise Linux Atomic etcd Container Image (rhel7/etcd) Red Hat Enterprise Linux Atomic Kubernetes-controller Container Image (rhel7/kubernetes-controller-mgr) Red Hat Enterprise Linux Atomic Kubernetes-apiserver Container Image (rhel7/kubernetes-apiserver) Red Hat Enterprise Linux Atomic Kubernetes-scheduler Container Image (rhel7/kubernetes-scheduler) Red Hat Enterprise Linux Atomic SSSD Container Image (rhel7/sssd) (Technology Preview) Red Hat Enterprise Linux Atomic openscap Container Image (rhel7/openscap) (Technology Preview) New : Red Hat Enterprise Linux Atomic Identity Management Server Container Image (rhel7/ipa-server) (Technology Preview) 35.3. 
New Features new gomtree package The gomtree packages contain a command-line tool and a Go library to support the mtree file system hierarchy validation tooling and format. The gomtree packages are necessary for the functionality of the atomic verify command. skopeo-containers moved from atomic packages to skopeo packages The skopeo-containers subpackage which contains configuration files for working with image signatures has now been moved to the skopeo package set. A bug where docker push did not complete on NFS has been fixed A regression was introduced in the docker registry 2.4 where file descriptors weren't closed during blob uploads. This has caused image push failures when the registry was running on top of an NFS file system. A new version of upstream docker registry is available with a fix to the leaking file descriptors. As a result, image pushes now succeed on NFS file systems. Standardizing labels for Docker-formatted containers Red Hat is trying to standardize the use of Docker-formatted labels in its images. For details on that subject see: Using Labels In Container Images Cockpit has been rebased to version 122 Most notable changes: Cockpit can now roll back network configuration that would otherwise disconnect an administrator from the system. Unmanaged network devices are now shown. The list of docker containers can be filtered and expanded inline. Cockpit can be a "bastion host" by using the login page to connect to an alternate system through SSH. Only connect to an alternate system if it has a known SSH host key. When connecting to other systems, each SSH connection is run in a separate process. Fixes bugs that prevent the "Logs" page from working in Firefox 49. A network proxy can be used when registering with Red Hat Enterprise Linux. A system can be unregistered when using Red Hat Enterprise Linux subscriptions. The default flags for new VLAN devices have been fixed.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_atomic_host/7/html/release_notes/red_hat_enterprise_linux_atomic_host_7_3_1
25.3.2. Adding an LCS Device
25.3.2. Adding an LCS Device The LAN channel station (LCS) device driver supports 1000Base-T Ethernet on the OSA-Express2 and OSA-Express 3 features. Based on the type of interface being added, the LCS driver assigns one base interface name: eth n for OSA-Express Fast Ethernet and Gigabit Ethernet, where n is 0 for the first device of that type, 1 for the second, and so on. 25.3.2.1. Dynamically Adding an LCS Device Load the device driver: Use the cio_ignore command to remove the network channels from the list of ignored devices and make them visible to Linux: Replace read_device_bus_id and write_device_bus_id with the two device bus IDs representing a network device. For example: Create the group device: Configure the device. OSA cards can provide up to 16 ports for a single CHPID. By default, the LCS group device uses port 0 . To use a different port, issue a command similar to the following: Replace portno with the port number you want to use. For more information about configuration of the LCS driver, refer to the chapter on LCS in Linux on System z Device Drivers, Features, and Commands on Red Hat Enterprise Linux 6 . Set the device online: To find out what network device name has been assigned, enter the command:
[ "modprobe lcs", "cio_ignore -r read_device_bus_id , write_device_bus_id", "cio_ignore -r 0.0.09a0,0.0.09a1", "echo read_device_bus_id , write_device_bus_id > /sys/bus/ccwgroup/drivers/lcs/group", "echo portno > /sys/bus/ccwgroup/drivers/lcs/device_bus_id/portno", "echo 1 > /sys/bus/ccwgroup/drivers/lcs/read_device_bus_id/online", "ls -l /sys/bus/ccwgroup/drivers/lcs/ read_device_bus_ID /net/ drwxr-xr-x 4 root root 0 2010-04-22 16:54 eth1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ap-s390info-adding_a_network_device-lcs_device
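Collected into one place, the dynamic steps above look roughly like the following. The bus IDs are the example values from the text (0.0.09a0 and 0.0.09a1); substitute your own, and note that this is a configuration sketch that only applies as root on a System z instance with an OSA adapter.

```shell
#!/bin/sh
# Sketch: dynamically bring an LCS device online (example bus IDs).
modprobe lcs
cio_ignore -r 0.0.09a0,0.0.09a1
echo 0.0.09a0,0.0.09a1 > /sys/bus/ccwgroup/drivers/lcs/group
# Optional: select a port other than the default port 0.
echo 0 > /sys/bus/ccwgroup/drivers/lcs/0.0.09a0/portno
echo 1 > /sys/bus/ccwgroup/drivers/lcs/0.0.09a0/online
# Show the network device name the kernel assigned, e.g. eth1.
ls /sys/bus/ccwgroup/drivers/lcs/0.0.09a0/net/
```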
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.6/making-open-source-more-inclusive
Chapter 4. Red Hat build of Keycloak Node.js adapter
Chapter 4. Red Hat build of Keycloak Node.js adapter Red Hat build of Keycloak provides a Node.js adapter built on top of Connect to protect server-side JavaScript apps - the goal was to be flexible enough to integrate with frameworks like Express.js . The adapter uses the OpenID Connect protocol under the covers. You can take a look at the Secure applications and services with OpenID Connect chapter for the more generic information about OpenID Connect endpoints and capabilities. To use the Node.js adapter, first you must create a client for your application in the Red Hat build of Keycloak Admin Console. The adapter supports public, confidential, and bearer-only access types. Which one to choose depends on the use-case scenario. Once the client is created, click Action at the top right and choose Download adapter config . For Format, choose Keycloak OIDC JSON and click Download . Place the downloaded keycloak.json file in the root folder of your project. 4.1. Installation Assuming you have already installed Node.js , create a folder for your application: Use the npm init command to create a package.json for your application. Now add the Red Hat build of Keycloak connect adapter in the dependencies list: "dependencies": { "keycloak-connect": "file:keycloak-connect-26.0.10.tgz" } 4.2. Usage Instantiate a Keycloak class The Keycloak class provides a central point for configuration and integration with your application. The simplest creation involves no arguments. 
In the root directory of your project create a file called server.js and add the following code: const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore }); Install the express-session dependency: To start the server.js script, add the following command in the 'scripts' section of the package.json : Now we have the ability to run our server with the following command: By default, this will locate a file named keycloak.json alongside the main executable of your application, in our case on the root folder, to initialize Red Hat build of Keycloak specific settings such as public key, realm name, various URLs. In that case, a Red Hat build of Keycloak deployment is necessary to access the Red Hat build of Keycloak Admin Console. Please visit the links on how to deploy a Red Hat build of Keycloak admin console with Podman or Docker . Now we are ready to obtain the keycloak.json file by visiting the Red Hat build of Keycloak Admin Console clients (left sidebar) choose your client Installation Format Option Keycloak OIDC JSON Download Paste the downloaded file on the root folder of our project. Instantiation with this method results in all the reasonable defaults being used. As an alternative, it's also possible to provide a configuration object, rather than the keycloak.json file: const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig); Applications can also redirect users to their preferred identity provider by using: const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig); Configuring a web session store If you want to use web sessions to manage server-side state for authentication, you need to initialize the Keycloak(... 
) with at least a store parameter, passing in the actual session store that express-session is using. const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore }); Passing a custom scope value By default, the scope value openid is passed as a query parameter to Red Hat build of Keycloak's login URL, but you can add an additional custom value: const keycloak = new Keycloak({ scope: 'offline_access' }); 4.3. Installing middleware Once instantiated, install the middleware into your connect-capable app: In order to do so, first we have to install Express: then require Express in our project as outlined below: const express = require('express'); const app = express(); and configure Keycloak middleware in Express, by adding the code below: app.use( keycloak.middleware() ); Last but not least, let's set up our server to listen for HTTP requests on port 3000 by adding the following code to main.js : app.listen(3000, function () { console.log('App listening on port 3000'); }); 4.4. Configuration for proxies If the application is running behind a proxy that terminates an SSL connection, Express must be configured per the express behind proxies guide. Using an incorrect proxy configuration can result in invalid redirect URIs being generated. Example configuration: const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() ); 4.5. 
Protecting resources Simple authentication To enforce that a user must be authenticated before accessing a resource, simply use a no-argument version of keycloak.protect() : app.get( '/complain', keycloak.protect(), complaintHandler ); Role-based authorization To secure a resource with an application role for the current app: app.get( '/special', keycloak.protect('special'), specialHandler ); To secure a resource with an application role for a different app: app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler ); To secure a resource with a realm role: app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler ); Resource-Based Authorization Resource-Based Authorization allows you to protect resources, and their specific methods/actions, based on a set of policies defined in Keycloak, thus externalizing authorization from your application. This is achieved by exposing a keycloak.enforcer method which you can use to protect resources. app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler); The keycloak.enforcer method operates in two modes, depending on the value of the response_mode configuration option. app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler); If response_mode is set to token , permissions are obtained from the server on behalf of the subject represented by the bearer token that was sent to your application. In this case, a new access token is issued by Keycloak with the permissions granted by the server. If the server did not respond with a token with the expected permissions, the request is denied. When using this mode, you should be able to obtain the token from the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const token = req.kauth.grant.access_token.content; const permissions = token.authorization ? 
token.authorization.permissions : undefined; // show user profile }); Prefer this mode when your application is using sessions and you want to cache decisions from the server, as well as automatically handle refresh tokens. This mode is especially useful for applications acting as a client and resource server. If response_mode is set to permissions (default mode), the server only returns the list of granted permissions, without issuing a new access token. In addition to not issuing a new token, this method exposes the permissions granted by the server through the request as follows: app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile }); Regardless of the response_mode in use, the keycloak.enforcer method will first try to check the permissions within the bearer token that was sent to your application. If the bearer token already carries the expected permissions, there is no need to interact with the server to obtain a decision. This is especially useful when your clients are capable of obtaining access tokens from the server with the expected permissions before accessing a protected resource, so they can use some capabilities provided by Keycloak Authorization Services such as incremental authorization and avoid additional requests to the server when keycloak.enforcer is enforcing access to the resource. By default, the policy enforcer will use the client_id defined for the application (for instance, via keycloak.json ) to reference a client in Keycloak that supports Keycloak Authorization Services. In this case, the client cannot be public given that it is actually a resource server. 
If your application is acting as both a public client (frontend) and resource server (backend), you can use the following configuration to reference a different client in Keycloak with the policies that you want to enforce: keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'}) It is recommended to use distinct clients in Keycloak to represent your frontend and backend. If the application you are protecting is enabled with Keycloak authorization services and you have defined client credentials in keycloak.json , you can push additional claims to the server and make them available to your policies in order to make decisions. For that, you can define a claims configuration option which expects a function that returns a JSON with the claims you want to push: app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { "http.uri": ["/protected/resource"], "user.agent": // get user agent from request } } }), function (req, res) { // access granted For more details about how to configure Keycloak to protect your application resources, please take a look at the Authorization Services Guide . Advanced authorization To secure resources based on parts of the URL itself, assuming a role exists for each section: function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler ); Advanced Login Configuration: By default, all unauthorized requests will be redirected to the Red Hat build of Keycloak login page unless your client is bearer-only. However, a confidential or public client may host both browsable and API endpoints. To prevent redirects on unauthenticated API requests and instead return an HTTP 401, you can override the redirectToLogin function. 
For example, this override checks if the URL contains /api/ and disables login redirects: Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\/api\//i; return !apiReqMatcher.test(req.originalUrl || req.url); }; 4.6. Additional URLs Explicit user-triggered logout By default, the middleware catches calls to /logout to send the user through a Red Hat build of Keycloak-centric logout workflow. This can be changed by specifying a logout configuration parameter to the middleware() call: app.use( keycloak.middleware( { logout: '/logoff' } )); When the user-triggered logout is invoked, a query parameter redirect_url can be passed: This parameter is then used as the redirect url of the OIDC logout endpoint and the user will be redirected to https://example.com/logged/out . Red Hat build of Keycloak Admin Callbacks Also, the middleware supports callbacks from the Red Hat build of Keycloak console to log out a single session or all sessions. By default, these types of admin callbacks occur relative to the root URL of / but can be changed by providing an admin parameter to the middleware() call: app.use( keycloak.middleware( { admin: '/callbacks' } ) ); 4.7. Complete example A complete example of Node.js adapter usage can be found in Keycloak quickstarts for Node.js
[ "mkdir myapp && cd myapp", "\"dependencies\": { \"keycloak-connect\": \"file:keycloak-connect-26.0.10.tgz\" }", "const session = require('express-session'); const Keycloak = require('keycloak-connect'); const memoryStore = new session.MemoryStore(); const keycloak = new Keycloak({ store: memoryStore });", "npm install express-session", "\"scripts\": { \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\", \"start\": \"node server.js\" },", "npm run start", "const kcConfig = { clientId: 'myclient', bearerOnly: true, serverUrl: 'http://localhost:8080', realm: 'myrealm', realmPublicKey: 'MIIBIjANB...' }; const keycloak = new Keycloak({ store: memoryStore }, kcConfig);", "const keycloak = new Keycloak({ store: memoryStore, idpHint: myIdP }, kcConfig);", "const session = require('express-session'); const memoryStore = new session.MemoryStore(); // Configure session app.use( session({ secret: 'mySecret', resave: false, saveUninitialized: true, store: memoryStore, }) ); const keycloak = new Keycloak({ store: memoryStore });", "const keycloak = new Keycloak({ scope: 'offline_access' });", "npm install express", "const express = require('express'); const app = express();", "app.use( keycloak.middleware() );", "app.listen(3000, function () { console.log('App listening on port 3000'); });", "const app = express(); app.set( 'trust proxy', true ); app.use( keycloak.middleware() );", "app.get( '/complain', keycloak.protect(), complaintHandler );", "app.get( '/special', keycloak.protect('special'), specialHandler );", "app.get( '/extra-special', keycloak.protect('other-app:special'), extraSpecialHandler );", "app.get( '/admin', keycloak.protect( 'realm:admin' ), adminHandler );", "app.get('/apis/me', keycloak.enforcer('user:profile'), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), userProfileHandler);", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'token'}), function (req, res) { const 
token = req.kauth.grant.access_token.content; const permissions = token.authorization ? token.authorization.permissions : undefined; // show user profile });", "app.get('/apis/me', keycloak.enforcer('user:profile', {response_mode: 'permissions'}), function (req, res) { const permissions = req.permissions; // show user profile });", "keycloak.enforcer('user:profile', {resource_server_id: 'my-apiserver'})", "app.get('/protected/resource', keycloak.enforcer(['resource:view', 'resource:write'], { claims: function(request) { return { \"http.uri\": [\"/protected/resource\"], \"user.agent\": // get user agent from request } } }), function (req, res) { // access granted", "function protectBySection(token, request) { return token.hasRole( request.params.section ); } app.get( '/:section/:page', keycloak.protect( protectBySection ), sectionHandler );", "Keycloak.prototype.redirectToLogin = function(req) { const apiReqMatcher = /\\/api\\//i; return !apiReqMatcher.test(req.originalUrl || req.url); };", "app.use( keycloak.middleware( { logout: '/logoff' } ));", "https://example.com/logoff?redirect_url=https%3A%2F%2Fexample.com%3A3000%2Flogged%2Fout", "app.use( keycloak.middleware( { admin: '/callbacks' } );" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/securing_applications_and_services_guide/nodejs-adapter-
function::cmdline_arg
function::cmdline_arg Name function::cmdline_arg - Fetch a command line argument Synopsis Arguments n Argument to get (zero is the program itself) Description Returns the requested argument from the current process or the empty string when there are not that many arguments or there is a problem retrieving the argument. Argument zero is traditionally the command itself.
[ "cmdline_arg:string(n:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cmdline-arg
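As a usage sketch in the SystemTap script language (the probe point and the two-argument printout are illustrative assumptions, not part of this reference entry), cmdline_arg can be used to log the command line of newly exec'ed processes:

```
# args.stp -- run with: stap args.stp   (requires systemtap and root)
# cmdline_arg returns "" when the requested argument does not exist,
# so printing a missing argv[1] is harmless.
probe syscall.execve.return {
    printf("%s: %s %s\n", execname(), cmdline_arg(0), cmdline_arg(1))
}
```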